diff --git "a/2019.jsonl" "b/2019.jsonl" new file mode 100644--- /dev/null +++ "b/2019.jsonl" @@ -0,0 +1,710 @@ +{"year":"2019","title":"% 0 Conference Proceedings% TA Large DataBase of Hypernymy Relations Extracted from the Web.% A Seitner, Julian% A Bizer, Christian% A Eckert, Kai","authors":["A Ponzetto, S Paolo"],"snippet":"… for many word understanding applications. We present a publicly available database containing more than 400 million hypernymy relations we extracted from the CommonCrawl web corpus. We describe the infrastructure we …","url":["https://www.aclweb.org/anthology/papers/L/L16/L16-1056.endf"]} +{"year":"2019","title":"A 6-month Analysis of Factors Impacting Web Browsing Quality for QoE Prediction","authors":["A Saverimoutou, B Mathieu, S Vaton - Computer Networks, 2019"],"snippet":"… Fourth Party [15] instruments the Mozilla-Firefox browser and Web Xray [16] is a PhantomJS based tool for measuring HTTP traffic. XRay [17] and AdFisher [18] run automated personalization detection experiments and …","url":["https://www.sciencedirect.com/science/article/pii/S1389128619307546"]} +{"year":"2019","title":"A Bilingual Adversarial Autoencoder for Unsupervised Bilingual Lexicon Induction","authors":["X Bai, H Cao, K Chen, T Zhao - IEEE/ACM Transactions on Audio, Speech, and …, 2019"],"snippet":"… [29]. This dataset consists of gold dictionaries and 300-dimensional CBOW5 embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finish) and WMT News Crawl (Spanish). We report the results 2We set k = 10 …","url":["https://ieeexplore.ieee.org/abstract/document/8754809/"]} +{"year":"2019","title":"A Combined Approach to Automatic Taxonomy Extraction","authors":["S Pecar, M Simko"],"snippet":"… [10] presented publicly available database of hypernym relations called WebIsA. This database was created using Hearst-like patterns on CommonCrawl web corpus. They extracted more than 400 million hypernymy relations …","url":["https://ieeexplore.ieee.org/iel7/8859030/8864801/08864911.pdf"]} +{"year":"2019","title":"A Common Semantic Space for Monolingual and Cross-Lingual Meta-Embeddings","authors":["I García Ferrero - 2019","I García, R Agerri, G Rigau - arXiv preprint arXiv:2001.06381, 2020"],"snippet":"… From GloVe (GV) [34], the Common Crawl vectors (600 billion words) … The WS353 dataset is di- vided in two subsets [1]. In this section all the meta-embeddings have been mapped to the vector space of the English FastText (Common Crawl, 600B tokens) …","url":["https://addi.ehu.eus/bitstream/handle/10810/36183/MAL-Iker_Garcia.pdf?sequence=1&isAllowed=y","https://arxiv.org/pdf/2001.06381"]} +{"year":"2019","title":"A COMPARATIVE STUDY ON END-TO-END SPEECH TO TEXT TRANSLATION","authors":["P Bahar, T Bieschke, H Ney"],"snippet":"… We select our checkpoints based on the dev set. 
For the MT training, we use the TED, OpenSubtitles2018, Europarl, ParaCrawl, CommonCrawl, News Commentary, and Rapid corpora resulting in 32M sentence pairs after filtering noisy samples …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1121/BaharParniaBieschkeTobiasNeyHermann--Acomparativestudyonend-to-endspeechtotexttranslation--2019.pdf"]} +{"year":"2019","title":"A Comparison of Context-sensitive Models for Lexical Substitution","authors":["AG Soler, A Cocos, M Apidianaki, C Callison-Burch"],"snippet":"… to two context-insensitive baselines that solely rely on the target-to-substitute similarity of standard, pre-trained word embeddings: 300-dimensional GloVe vectors (Pennington et al., 2014)5 and 300-dimensional FastText vectors …","url":["http://www.cis.upenn.edu/~ccb/publications/comparison-of-context-sensitive-models-for-lexical-substitution.pdf"]} +{"year":"2019","title":"A Comparison of Neural Document Classification Models","authors":["M Nitsche, S Halbritter"],"snippet":"… Bojanowski et al. (2016). It is available for 294 languages. An updated version, trained on Common Crawl in addition to Wikipedia and with adapted parameters is available for 157 languages (Grave et al., 2018). These are the …","url":["https://users.informatik.haw-hamburg.de/~ubicomp/projekte/master2019-proj/nitsche-halbritter2.pdf"]} +{"year":"2019","title":"A Comparison on Fine-grained Pre-trained Embeddings for the WMT19Chinese-English News Translation Task","authors":["Z Li, L Specia - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… In addition, the Common Crawl Corpus from WMT is used as monolingual data to pre-train the embeddings … We trained the embeddings on the Common Crawl Corpus provided by WMT19 and fine-tuned them on the task data when training the RNN …","url":["https://www.aclweb.org/anthology/W19-5324"]} +{"year":"2019","title":"A Contextualized Word Representation Approach for Irony Detection","authors":["L Garcıa, D Moctezuma, V Muniz - Proceedings of the Iberian Languages Evaluation …, 2019"],"snippet":"… 2.2 Word Embeddings We use the ELMo pre-trained word embeddings provided by [4], which were trained with a corpus of 20 million-words randomly sampled from the raw text released by the CoNLL 2018 shared …","url":["http://ceur-ws.org/Vol-2421/IroSvA_paper_5.pdf"]} +{"year":"2019","title":"A Dataset for Content Error Detection in Web Archives","authors":["J Kiesel, F Hubricht, B Stein, M Potthast"],"snippet":"… The Webis Web Archive 2017 [9] contains 10,000 web pages sampled from the Common Crawl [11] in a way which ensured that both well-known and less … January 2017 Common Crawl Archive, 2017. http: //commoncrawl.org …","url":["https://webis.de/downloads/publications/papers/stein_2019e.pdf"]} +{"year":"2019","title":"A Deep Dive into Supervised Extractive and Abstractive Summarization from Text","authors":["M Dey, D Das - Data Visualization and Knowledge Engineering, 2020"],"snippet":"… 7.1. As mentioned in the algorithm, estimated frequency p(w) of every words have been found from datasets (enwiki, poliblogs, commoncrawl, text8) [1]. The parameter “a” for our task is fixed at \\(3 * 10^{-3}\\). The vector of all …","url":["https://link.springer.com/chapter/10.1007/978-3-030-25797-2_5"]} +{"year":"2019","title":"A Deep Learning Approach for Identification of Confusion in Unstructured Crowdsourced Annotations","authors":["R Gardner, M Varma, C Zhu"],"snippet":"… improvements in model performance. 
We also compared the VQR binary classification model based on GLoVe embeddings with a model based on FastText embeddings trained on Common Crawl (3; 14). Since the FastText …","url":["https://pdfs.semanticscholar.org/ad70/7fc8b36a8f3daf8742cf92fcf099de434cec.pdf"]} +{"year":"2019","title":"A Dynamic Evolutionary Framework for Timeline Generation based on Distributed Representations","authors":["D Liang, G Wang, J Nie - arXiv preprint arXiv:1905.05550, 2019"],"snippet":"… To learn the distributed representations, we use pre-trained word vectors2, trained on Common Crawl and Wikipedia by fastText tookit [2]. Furthermore, each v(q) is only embedded by the name of the topic q as a experimental control …","url":["https://arxiv.org/pdf/1905.05550"]} +{"year":"2019","title":"A Framework to Estimate the Nutritional Value of Food in Real Time Using Deep Learning Techniques","authors":["R Yunus, O Arif, H Afzal, MF Amjad, H Abbas… - IEEE Access, 2019"],"snippet":"… First is Common-Crawl, which is an archive hosted on an Amazon S3 bucket … is more relevant as it return pages corresponding to precise labels while the text from Common-Crawl is more generic. The raw text data obtained …","url":["https://ieeexplore.ieee.org/iel7/6287639/8600701/08590712.pdf"]} +{"year":"2019","title":"A Language Invariant Neural Method for TimeML Event Detection","authors":["S Prabhu, P Goel, A Debnath, M Shrivastava"],"snippet":"… The CNN uses 40 filters with a window size of 3. For our contextual word embeddings, we use fastText embeddings for English (Bojanowski et al., 2017) which are pretrained on commonCrawl and the Wikipedia corpus. FastText …","url":["https://www.researchgate.net/profile/Pranav_Goel/publication/337387464_A_Language_Invariant_Neural_Method_for_TimeML_Event_Detection/links/5dd4c5ec299bf11ec8629470/A-Language-Invariant-Neural-Method-for-TimeML-Event-Detection.pdf"]} +{"year":"2019","title":"A Massive Collection of Cross-Lingual Web-Document Pairs","authors":["A El-Kishky, V Chaudhary, F Guzman, P Koehn - arXiv preprint arXiv:1911.06154, 2019"],"snippet":"… Other works (Smith et al., 2013) have mined Common Crawl for bitexts for machine … and restrictions, we mined 54 million aligned documents across 12 Common Crawl snapshots … 2: NMT performance on comparable directions …","url":["https://arxiv.org/pdf/1911.06154"]} +{"year":"2019","title":"A Multi-Task Approach for Disentangling Syntax and Semantics in Sentence Representations","authors":["S Wiseman, K Gimpel, Q Tang, M Chen"],"snippet":"04/02/19 - We propose a generative model for a sentence that uses two latent variables, with one intended to represent the syntax of the sent...","url":["https://deepai.org/publication/a-multi-task-approach-for-disentangling-syntax-and-semantics-in-sentence-representations"]} +{"year":"2019","title":"A New Corpus for Low-Resourced Sindhi Language with Word Embeddings","authors":["W Ali, J Kumar, J Lu, Z Xu - arXiv preprint arXiv:1911.12579, 2019"],"snippet":"… 3Available online at https://rdrr.io/cran/wordspace/man/WordSim353.html 4We denote Sindhi word representations as (SdfastText) recently revealed by fastText, available at (https://fasttext.cc/docs/en/crawl-vectors.html) trained …","url":["https://arxiv.org/pdf/1911.12579"]} +{"year":"2019","title":"A New Hybrid Ensemble Feature Selection Framework for Machine Learning-based Phishing Detection System","authors":["KL Chiew, CL Tan, KS Wong, KSC Yong, WK Tiong - Information Sciences, 2019"],"snippet":"… June 2017. 
Specifically, we selected 5000 phishing webpages based on URLs from PhishTank 2 and OpenPhish 3 , and another 5000 legitimate webpages based on URLs from Alexa 4 and the Common Crawl 5 archive. The …","url":["https://www.sciencedirect.com/science/article/pii/S0020025519300763"]} +{"year":"2019","title":"A Question-Entailment Approach to Question Answering","authors":["AB Abacha, D Demner-Fushman - arXiv preprint arXiv:1901.08079, 2019"],"snippet":"… We use the pretrained common crawl version with 840B tokens and 300d vectors, which are not updated during training. 3.3 Logistic Regression Classifier In this feature-based approach, we use Logistic Regression to …","url":["https://arxiv.org/pdf/1901.08079"]} +{"year":"2019","title":"A Recurrent Deep Neural Network Model to measure Sentence Complexity for the Italian","authors":["D Schicchi"],"snippet":"… The authors have used FastText [3], a library for efficient learning of word representations and sentence classification, trained on Common Crawl [21] and Wikipedia to create a pre-trained word vector representation for …","url":["http://ceur-ws.org/Vol-2418/paper10.pdf"]} +{"year":"2019","title":"A Robust Abstractive System for Cross-Lingual Summarization","authors":["J Ouyang, B Song, K McKeown"],"snippet":"… about 23k sentences for Somali and Swahili and 51k for Tagalog); noisy, web-crawled parallel data (So- mali only, about 354k sentences); and synthetic, backtranslated parallel data created from monolingual sources including …","url":["http://www.cs.columbia.edu/~ouyangj/OuyangSongMcKeown2019.pdf"]} +{"year":"2019","title":"A Study of Neural Networks Models applied to Natural Language Inference","authors":["VG Noronha, JCP da Silva"],"snippet":"… word vectors of different size, in order to check whether the word space dimension plays an important role on the final results: – A 100d version trained on the Wikipedia 2014 + Gigaword 5 corpus, with 6B tokens; – The 300d …","url":["https://www.researchgate.net/profile/Joao_Silva45/publication/320711838_A_Study_of_Neural_Networks_Models_applied_to_Natural_Language_Inference/links/5b23a863458515270fcff1e1/A-Study-of-Neural-Networks-Models-applied-to-Natural-Language-Inference.pdf"]} +{"year":"2019","title":"A Survey of URL-based Phishing Detection","authors":["ES Aung, CT Zan, H YAMANA"],"snippet":"… 2017 [40] 98.76 98.60 98.93 98.76 99.91 Common Crawl PhishTank 1M 1M Balanced Path length, URL entropy, length ratio, '@' and '-' count, punctuation count, TLDs count, IP address, suspicious words count …","url":["https://db-event.jpn.org/deim2019/post/papers/201.pdf"]} +{"year":"2019","title":"A Survey on Document-level Machine Translation: Methods and Evaluation","authors":["S Maruf, F Saleh, G Haffari - arXiv preprint arXiv:1912.08494, 2019"],"snippet":"Page 1. A Survey on Document-level Machine Translation: Methods and Evaluation Sameen Maruf, Fahimeh Saleh, and Gholamreza Haffari Faculty of Information Technology, Monash University, Clayton VIC, Australia {firstname.lastname}@monash.edu …","url":["https://arxiv.org/pdf/1912.08494"]} +{"year":"2019","title":"A System to Monitor Cyberbullying based on Message Classification and Social Network Analysis","authors":["S Menini, G Moretti, M Corazza, E Cabrio, S Tonelli… - Proceedings of the Third …, 2019"],"snippet":"… timestep. We use English Fasttext embeddings1 trained on Common Crawl with a size of 300. Concerning hy- perparameters, our model uses no dropout and no batch normalization on the outputs of the hidden layer. 
Instead …","url":["https://www.aclweb.org/anthology/W19-3511"]} +{"year":"2019","title":"A Systematic Comparison Between SMT and NMT on Translating User-Generated Content","authors":["P Lohar, M Popovic, H Afli, A Way"],"snippet":"… Morever, as the Europarl corpus is a fix-domain and did not work well for our experiments, we plan to utilise other types of mix-domain parallel resource such as common crawl corpus9 … WASSA '12 (2012) 52–60 9 http://www.statmt …","url":["http://www.computing.dcu.ie/~away/PUBS/2019/A_Systematic_Comparison_Between_SMT_and_NMT_on_Translating_User_Generated_Content.pdf"]} +{"year":"2019","title":"A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy Graduate Department of Computer Science","authors":["N Naderi - 2019"],"snippet":"Page 1. COMPUTATIONAL ANALYSIS OF ARGUMENTS AND PERSUASIVE STRATEGIES IN POLITICAL DISCOURSE by Nona Naderi A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy …","url":["ftp://ftp.db.toronto.edu/public_html/public_html/pub/gh/Naderi-PhD-thesis-2019.pdf"]} +{"year":"2019","title":"A Vector Worth a Thousand Counts","authors":["DS Hain, R Jurowetzkiφ, T Buchmannψ, P Wolfψ"],"snippet":"… task convolutional neural network model trained on OntoNotes, with GloVe vectors (685k unique vectors with 300 dimensions) trained on Common Crawl. Given a patent abstract, spaCy predicts the meaning of each term in the document …","url":["https://pdfs.semanticscholar.org/69ba/b264607119c7928a9a25ea82823d9c346350.pdf"]} +{"year":"2019","title":"Abstract Text Summarization: A Low Resource Challenge","authors":["S Parida, P Motlicek - 2019"],"snippet":"… We propose an iterative data augmentation approach which uses synthetic data along with the real summarization data for the German language. To generate synthetic data, the Common Crawl (German) dataset is exploited, which covers different domains …","url":["https://infoscience.epfl.ch/record/270135"]} +{"year":"2019","title":"AC-Net: Assessing the Consistency of Description and Permission in Android Apps","authors":["Y Feng, L Chen, A Zheng, C Gao, Z Zheng - IEEE Access, 2019"],"snippet":"… Compared to the somewhat popular embeddings such as GloVe [22] (400 thousand word vectors trained on Wikipedia) and crawl [23] (2 million word vectors trained on Common Crawl), ours fully retains domain-specific characteristics of statements …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/08694776.pdf"]} +{"year":"2019","title":"Acquiring Knowledge from Pre-trained Model to Neural Machine Translation","authors":["R Weng, H Yu, S Huang, S Cheng, W Luo - arXiv preprint arXiv:1912.01774, 2019"],"snippet":"… Following Song et al. (2019) , on the English and German, we use the monolingual data from WMT News Crawl. We select 50M sentence from year 2007 to 2017 for English and German respectively. Then, we choose 50M sentence from Common Crawl for Chinese …","url":["https://arxiv.org/pdf/1912.01774"]} +{"year":"2019","title":"Adapting Transformer-XL Techniques to QANet Architecture for SQuAD 2.0 Challenge","authors":["L Zhang"],"snippet":"… set data. • glove.840B.300dglove.840B.300d.txt: Pretrained GloVevectors. Theseare300dimensional embeddings trained on the CommonCrawl 840B corpus. • {word,char}_emb.json: Word and character embeddings. 
Only the …","url":["https://pdfs.semanticscholar.org/0760/248f62e5a313fb088cce37495ce79c9ba8a1.pdf"]} +{"year":"2019","title":"Adaptive Cross-Modal Few-Shot Learning","authors":["C Xing, N Rostamzadeh, BN Oreshkin, PO Pinheiro - arXiv preprint arXiv:1902.07104, 2019"],"snippet":"… category labels. GloVe is an unsupervised approach based on wordword co-occurrence statistics from large text corpora. We use the Common Crawl version trained on 840B tokens. The embeddings are of dimension 300. When …","url":["https://arxiv.org/pdf/1902.07104"]} +{"year":"2019","title":"Advanced Deep learning Methods and Applications in Open-domain Question Answering","authors":["MT Nguyễn - 2019"],"snippet":"Page 1. VIETNAM NATIONAL UNIVERSITY, HANOI UNIVERSITY OF ENGINEERING AND TECHNOLOGY Nguyen Minh Trang ADVANCED DEEP LEARNING METHODS AND APPLICATIONS IN OPEN-DOMAIN QUESTION ANSWERING MASTER THESIS …","url":["http://lib.uet.vnu.edu.vn/bitstream/123456789/1021/1/2.ToanVanLuanVan.pdf"]} +{"year":"2019","title":"Adversarial NLI: A New Benchmark for Natural Language Understanding","authors":["Y Nie, A Williams, E Dinan, M Bansal, J Weston… - arXiv preprint arXiv …, 2019"],"snippet":"… transfer. In addition to contexts from Wikipedia for Round 3, we also included contexts from the following domains: News (ex- tracted from Common Crawl), fiction (extracted from Mostafazadeh et al. 2016, Story Cloze, and Hill et al …","url":["https://arxiv.org/pdf/1910.14599"]} +{"year":"2019","title":"Adverse drug event detection from electronic health records using hierarchical recurrent neural networks with dual-level embedding","authors":["S Wunnava, X Qin, T Kakar, C Sen, EA Rundensteiner… - Drug Safety, 2019"],"snippet":"… compared the results from DLADE, which uses domainand task-specific MADE1.0 word embedding trained using wiki, and Pittsburgh EHR and PubMed articles (1,352,550 word vectors) [10, 21], with two systems that use …","url":["https://link.springer.com/article/10.1007/s40264-018-0765-9"]} +{"year":"2019","title":"AELA-DLSTMs: Attention-Enabled and Location-Aware Double LSTMs for Aspect-level Sentiment Classification","authors":["K Shuang, X Ren, Q Yang, R Li, J Loo - Neurocomputing, 2018"],"snippet":"Skip to main content …","url":["https://www.sciencedirect.com/science/article/pii/S0925231218315054"]} +{"year":"2019","title":"Aiding Intra-Text Representations with Visual Context for Multimodal Named Entity Recognition","authors":["O Arshad, I Gallo, S Nawaz, A Calefati - arXiv preprint arXiv:1904.01356, 2019"],"snippet":"… Page 5. B. Word embeddings We used 300D fasttext crawl embeddings. It contains 2 million word vectors trained with subword information on Common Crawl (600B tokens). However, we do not apply fine-tuning on these embeddings during the training stage …","url":["https://arxiv.org/pdf/1904.01356"]} +{"year":"2019","title":"Algorithmic Bias and the Biases of the Bias Catchers","authors":["D Rozado - arXiv preprint arXiv:1905.11985, 2019"],"snippet":"… This work systematically analyzed 3 popular word embedding methods: Word2vec (Skipgram) (4), Glove (9) and FastText (10), externally pretrained on a wide array of corpora such as Google News, Wikipedia, Twitter and Common Crawl …","url":["https://arxiv.org/pdf/1905.11985"]} +{"year":"2019","title":"All-in-One: Emotion, Sentiment and Intensity Prediction using a Multi-task Ensemble Framework","authors":["S Akhtar, D Ghosal, A Ekbal, P Bhattacharyya… - IEEE Transactions on …, 2019"],"snippet":"… IEEE TRANSACTIONS ON AFFECTIVE COMPUTING. 4 A. 
Deep Learning Models We employ the architecture of Figure 1a to train and tune all the deep learning models using pre-trained GloVe (common crawl 840 billion) word embeddings [32] …","url":["https://ieeexplore.ieee.org/abstract/document/8756111/"]} +{"year":"2019","title":"AMR-to-Text Generation with Cache Transition Systems","authors":["L Jin, D Gildea - arXiv preprint arXiv:1912.01682, 2019"],"snippet":"… 6.2 Setup The word embeddings are initialized with 300-dimensional GloVe embeddings (Penningtonetal., 2014) from the Common Crawl and are fixed during training. This embedding vocabulary consists of both English tokens and AMR concept labels …","url":["https://arxiv.org/pdf/1912.01682"]} +{"year":"2019","title":"An effective approach to candidate retrieval for cross-language plagiarism detection: A fusion of conceptual and keyword-based schemes","authors":["M Roostaee, MH Sadreddini, SM Fakhrahmad - Information Processing & …, 2020"],"snippet":"Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0306457318310148"]} +{"year":"2019","title":"An Efficient Framework for Processing and Analyzing Unstructured Text to Discover Delivery Delay and Optimization of Route Planning in Realtime","authors":["M Alshaer - 2019"],"snippet":"Page 1. HAL Id: tel-02310852 https://tel.archives-ouvertes.fr/tel-02310852 Submitted on 10 Oct 2019 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not …","url":["https://tel.archives-ouvertes.fr/tel-02310852/document"]} +{"year":"2019","title":"An Empirical Evaluation of Text Representation Schemes on Multilingual Social Web to Filter the Textual Aggression","authors":["S Modha, P Majumder - arXiv preprint arXiv:1904.08770, 2019"],"snippet":"… Glove pre-trained model available with different embed size and trained on common crawl, Twitter. We have use Glove pre-trained model with vocabulary size 2.2 million and trained on common crawl. fastText pretrained models are available in 157 language …","url":["https://arxiv.org/pdf/1904.08770"]} +{"year":"2019","title":"An Empirical study on Pre-trained Embeddings and Language Models for Bot Detection","authors":["A Garcia-Silva, C Berrio, JM Gómez-Pérez - Proceedings of the 4th Workshop on …, 2019"],"snippet":"… scratch. We use pre-trained embeddings learned from Twitter it- self, urban dictionary definitions to accommodate the informal vocabulary often used in the social network, and common crawl as a general source of information …","url":["https://www.aclweb.org/anthology/W19-4317"]} +{"year":"2019","title":"An Ensemble Method for Producing Word Representations for the Greek Language","authors":["M Lioudakis, S Outsios, M Vazirgiannis - arXiv preprint arXiv:1912.04965, 2019"],"snippet":"… In addition, we show that CBOS outperforms the CBOW and Skip-gram models when they are trained on the same data. The future work of this research could include training of our newly proposed model with the Common Crawl dataset for the Greek language …","url":["https://arxiv.org/pdf/1912.04965"]} +{"year":"2019","title":"An Exploration of Sarcasm Detection Using Deep Learning","authors":["E SAVINI - 2019"],"snippet":"… not appear in the training data (”out-of-vocabulary” words). It also supports 157 different languages. In our research we use 300-dimensional word vectors pre-trained on Common Crawl3 (600B tokens). 
4.3 ELMo ELMo (Embeddings …","url":["https://webthesis.biblio.polito.it/12440/1/tesi.pdf"]} +{"year":"2019","title":"An Extended CLEF eHealth Test Collection for Cross-Lingual Information Retrieval in the Medical Domain","authors":["S Saleh, P Pecina - European Conference on Information Retrieval, 2019"],"snippet":"… However, an additional assessment was performed [15]. CLEF eHealth 2018 Consumer Health Search Task released a document collection created using CommonCrawl platform [8] containing more than five million documents from more than thousand websites …","url":["https://link.springer.com/chapter/10.1007/978-3-030-15719-7_24"]} +{"year":"2019","title":"An integrated neural decoder of linguistic and experiential meaning","authors":["AJ Anderson, JR Binder, L Fernandino, CJ Humphries… - Journal of Neuroscience, 2019"],"snippet":"… occurrence 210 matrix (vocabulary size is 2.2million words and co-occurrences were measured across 840 billion 211 tokens from Common Crawl https://commoncrawl.org). GloVe in particular was used because it yielded 212 state-of …","url":["https://www.jneurosci.org/content/early/2019/09/27/JNEUROSCI.2575-18.2019.abstract"]} +{"year":"2019","title":"An Open-Domain System for Retrieval and Visualization of Comparative Arguments from Text","authors":["M Schildwächter"],"snippet":"… parts of the Page 23. 3.1. Retrieval of Sentences 19 CommonCrawl: Full text search index Retrieval of Sentences: Elasticsearch API Sentence Classification: Keyword or ML approach Sentence Ranking: Ordering of sentences","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2018-ma-schildwaechter-cam.pdf"]} +{"year":"2019","title":"Analysing Coreference in Transformer Outputs","authors":["ELKC Espana-Bonet, J van Genabith","ELKC Espana-Bonet, J van Genabith - DiscoMT 2019, 2019"],"snippet":"… Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually context is limited to the previous sen- # lines S1, S3 S2 Common Crawl 2394878 x1 x4 Europarl 1775445 …","url":["https://www.aclweb.org/anthology/D19-65.pdf#page=11","https://www.cs.upc.edu/~cristinae/CV/docs/coref_DiscoMT.pdf"]} +{"year":"2019","title":"Analysing Representations of Memory Impairment in a Clinical Notes Classification Model","authors":["M Ormerod, J Martínez-del-Rincón, N Robertson… - Proceedings of the 18th …, 2019"],"snippet":"… In this study we use 300-dimensional fastText word embeddings (Bojanowski et al., 2017) which were pretrained on the Common Crawl dataset using the skipgram schema (Mikolov et al., 2013), which in- volves predicting a target word based on nearby words …","url":["https://www.aclweb.org/anthology/W19-5005"]} +{"year":"2019","title":"Analysis of Joint Multilingual Sentence Representations and Semantic K-Nearest Neighbor Graphs","authors":["H Schwenk, D Kiela, M Douze - Proceedings of the AAAI Conference on Artificial …, 2019"],"snippet":"… All the statistics in this section are calculated on ten million sentences of Common Crawl data. Please remember that we apply romanization for the languages which use a Cyrillic script (Greek, Bosnian, Bulgarian, Macedonian, Serbian and Russian) …","url":["https://www.aaai.org/ojs/index.php/AAAI/article/download/4677/4555"]} +{"year":"2019","title":"Analysis of Positional Encodings for Neural Machine Translation","authors":["J Rosendahl, VAK Tran, W Wang, H Ney"],"snippet":"… We train our models on the data from the De→En and the Zh→En news translation task of WMT 2019. 
For the De→En task we train on CommonCrawl, Europarl, NewsCommentary and Rapid summing up …","url":["https://zenodo.eu/record/3525024/files/IWSLT2019_paper_21.pdf"]} +{"year":"2019","title":"Annotating and Recognising Visually Descriptive Language","authors":["T Alrashid, J Wang, R Gaizauskas - … on Interoperable Semantic Annotation (ISA-15), 2019"],"snippet":"… Stop words were only removed for the word embedding representation. We did not remove stop words for tf-idf because the approach 5https://spacy. io/. We use the model en vectors web lg. 6http://commoncrawl. org/the-data/ 7http://www. scikit-learn. org/ 27 Page 34 …","url":["https://sigsem.uvt.nl/isa15/ISA-15_proceedings.pdf#page=28"]} +{"year":"2019","title":"Answering Comparative Questions: Better than Ten-Blue-Links?","authors":["M Schildwächter, A Bondarenko, J Zenker, M Hagen… - arXiv preprint arXiv …, 2019"],"snippet":"… CommonCrawl: Full text search index … Clicking on a result sentence reveals its Common Crawl context—by default the ±3 sentences around it, with the … underlying corpus of our CAM system and the keyword-based search …","url":["https://arxiv.org/pdf/1901.05041"]} +{"year":"2019","title":"Architecture for semantic search over encrypted data in the cloud","authors":["J Woodworth, MA Salehi - US Patent App. 16/168,919, 2019"],"snippet":"… The dataset has a total size of 357 MB and is made up of 6,942 text files. To evaluate S3C under large scale datasets, a second dataset, the Common Crawl Corpus from AWS (a web crawl composed of over five billion web pages) was used …","url":["https://patentimages.storage.googleapis.com/f7/5f/a4/c986d736bd81ab/US20190121873A1.pdf"]} +{"year":"2019","title":"Are we consistently biased? Multidimensional analysis of biases in distributional word vectors","authors":["A Lauscher, G Glavaš - arXiv preprint arXiv:1904.11783, 2019"],"snippet":"… 4 we compare the biases of em- beddings trained with the same model (GLOVE) but on different corpora: Common Crawl (ie, noisy … Table 4: WEAT bias effects for GLOVE embeddings trained on different corpora: Wikipedia …","url":["https://arxiv.org/pdf/1904.11783"]} +{"year":"2019","title":"Are We Safe Yet? The Limitations of Distributional Features for Fake News Detection","authors":["T Schuster, R Schuster, DJ Shah, R Barzilay - arXiv preprint arXiv:1908.09805, 2019"],"snippet":"… 2019). The generator is trained with a LM objective on a large news corpus from Common Crawl dumps. The fake news detector is a simple linear classifier on top of the last hidden state of Grover's LM on the examined article …","url":["https://arxiv.org/pdf/1908.09805"]} +{"year":"2019","title":"Argument Generation with Retrieval, Planning, and Realization","authors":["X Hua, Z Hu, L Wang - arXiv preprint arXiv:1906.03717, 2019"],"snippet":"… Wachsmuth et al., 2017b, 2018b). Recent work by Stab et al. (2018) in- dexes all web documents collected in Common Crawl, which inevitably incorporates noisy, lowquality content. Besides, existing work treats individual …","url":["https://arxiv.org/pdf/1906.03717"]} +{"year":"2019","title":"Argument Search: Assessing Argument Relevance","authors":["M Potthast, L Gienapp, F Euchner, N Heilenkötter… - 2019"],"snippet":"… online debating portals. Thereafter, ArgumenText [19], which retrieves argumentative sentences from the Common Crawl, and “multi-perspective answers” in the US version of Bing3 have been published. 
Another loosely related …","url":["https://webis.de/downloads/publications/papers/stein_2019j.pdf"]} +{"year":"2019","title":"Articles Classification in Myanmar Language","authors":["MS Phyu, KT Nwet - 2019 International Conference on Advanced …, 2019"],"snippet":"… Fortunately, Grave et al. [3] recently released pretrained vectors for 246 languages trained on Wikipedia and common crawl … Data source Method Number of word vectors Dimension Wikipedia fastText (skip-gram) 91,497 …","url":["https://ieeexplore.ieee.org/abstract/document/8920927/"]} +{"year":"2019","title":"Artificial Intelligence: An Overview","authors":["P Grogono"],"snippet":"Page 1. Chapter 1 Artificial Intelligence: An Overview Peter Grogono Department of Computer Science and Software Engineering Concordia University Montréal, Québec The smart devices that we have become so familiar with …","url":["https://www.worldscientific.com/doi/abs/10.1142/9789811203527_0001"]} +{"year":"2019","title":"Aspect Detection using Word and Char Embeddings with (Bi) LSTM and CRF","authors":["Ł Augustyniak, T Kajdanowicz, P Kazienko - 2019 IEEE Second International …, 2019"],"snippet":"… Glove 840B - Global Vectors for Word Representation proposed by Stanford NLP Group, trained based on Common Crawl. fastText - Distributed Word Representation proposed by Facebook, trained on Common Crawl as well …","url":["https://ieeexplore.ieee.org/abstract/document/8791735/"]} +{"year":"2019","title":"Aspect-Based Sentiment Analysis Using Deep Neural Networks and Transfer Learning","authors":["S Dugar"],"snippet":"Page 1. DEPARTMENT OF INFORMATICS TECHNISCHE UNIVERSITÄT MÜNCHEN Master's Thesis in Informatics Aspect-Based Sentiment Analysis Using Deep Neural Networks and Transfer Learning Sumit Dugar Page 2. DEPARTMENT OF INFORMATICS …","url":["https://www.social.in.tum.de/fileadmin/w00bwc/www/Gerhard_Hagerer/thesis-sumit-dugar.pdf"]} +{"year":"2019","title":"Assessing Social and Intersectional Biases in Contextualized Word Representations","authors":["YC Tan, LE Celis - arXiv preprint arXiv:1911.01485, 2019"],"snippet":"Page 1. Assessing Social and Intersectional Biases in Contextualized Word Representations Yi Chern Tan, L. Elisa Celis Yale University {yichern.tan, elisa.celis}@yale.edu Abstract Social bias in machine learning has drawn …","url":["https://arxiv.org/pdf/1911.01485"]} +{"year":"2019","title":"Assessing the Impact of Contextual Embeddings for Portuguese Named Entity Recognition","authors":["J Santos, B Consoli, C dos Santos, J Terra, S Collonini… - 2019 8th Brazilian …, 2019"],"snippet":"… 2019. [19] C. Buck, K. Heafield, and B. van Ooyen, “N-gram counts and language models from the common crawl,” in Proceedings of the Language Resources and Evaluation Conference, Reykjavik, Iceland, May 2014. 
[20] N …","url":["https://ieeexplore.ieee.org/abstract/document/8923652/"]} +{"year":"2019","title":"Assessing the Lexico-Semantic Relational Knowledge Captured by Word and Concept Embeddings","authors":["R Denaux, JM Gomez-Perez - arXiv preprint arXiv:1909.11042, 2019"],"snippet":"… three different corpora, which we chose to study whether relation prediction capacity varies depending on the corpus size: the English United Nations corpus[Ziemski et al., 2016] (517M tokens), the En- glish Wikipedia (just under …","url":["https://arxiv.org/pdf/1909.11042"]} +{"year":"2019","title":"Assessment of text coherence using an ontology‐based relatedness measurement method","authors":["G Giray, MO Ünalır - Expert Systems"],"snippet":"Abstract This paper proposes a novel method for assessing text coherence. Central to this approach is an ontology‐based representation of text, which captures the level of relatedness between conse...","url":["https://onlinelibrary.wiley.com/doi/abs/10.1111/exsy.12505"]} +{"year":"2019","title":"AStylometric INVESTIGATION OF CHARACTER VOICES IN LITERARY FICTION","authors":["K Vishnubhotla - 2019"],"snippet":"Page 1. ASTYLOMETRIC INVESTIGATION OF CHARACTER VOICES IN LITERARY FICTION by Krishnapriya Vishnubhotla A thesis submitted in conformity with the requirements for the degree of Master of Science …","url":["ftp://ftp.db.toronto.edu/public_html/cs/ftp/dist/gh/Vishnubhotla-MSc-thesis-2019.pdf"]} +{"year":"2019","title":"Asymmetry Sensitive Architecture for Neural Text Matching","authors":["T Belkacem, JG Moreno, T Dkaki, M Boughanem - European Conference on …, 2019"],"snippet":"… We adopted a cross-validation with \\(80\\%\\) to train, \\(10\\%\\) to test and \\(10\\%\\) to validate the different models. We used a public pre-trained 300-dimensional word vectors of GloVe 3 , which are trained in a Common crawl dataset …","url":["https://link.springer.com/chapter/10.1007/978-3-030-15719-7_8"]} +{"year":"2019","title":"Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures","authors":["PJ Ortiz Suárez, B Sagot, L Romary - 2019"],"snippet":"… 7http://commoncrawl.org/about/ 8http://microformats.org/wiki/ rel-nofollow 9https://www.robotstxt.org … In order to download, extract, filter, clean and classify Common Crawl we base ourselves on … Each of these processes first …","url":["https://ids-pub.bsz-bw.de/files/9021/Suarez_Sagot_Romary_Asynchronous_Pipeline_for_Processing_Huge_Corpora_2019.pdf"]} +{"year":"2019","title":"At the Lower End of Language—Exploring the Vulgar and Obscene Side of German","authors":["E Eder, U Krieg-Holz, U Hahn - Proceedings of the Third Workshop on Abusive …, 2019"],"snippet":"… 123 FASTTEXT (Grave et al., 2018) word embeddings, the latter being based on COMMON CRAWL and WIKIPEDIA … et al., 2017) trained on German tweets (TWITTER) and, finally, FASTTEXT word embeddings (Grave et al., 2018) …","url":["https://www.aclweb.org/anthology/W19-3513"]} +{"year":"2019","title":"Attending the Emotions to Detect Online Abusive Language","authors":["NS Samghabadi, A Hatami, M Shafaei, S Kar, T Solorio - arXiv preprint arXiv …, 2019"],"snippet":"… Then, we extract 5For ask.fm data, we use 300-dimensional Common Crawl Glove pre-trained embeddings, since it works better than the Twitter embedding. 
the DeepMoji vector for each sentence and calculate the average vector per post …","url":["https://arxiv.org/pdf/1909.03100"]} +{"year":"2019","title":"Attention Guided Graph Convolutional Networks for Relation Extraction","authors":["Z Guo, Y Zhang, W Lu - arXiv preprint arXiv:1906.07510, 2019"],"snippet":"… 5https://nlp.stanford.edu/projects/ tacred/ 6We use the 300-dimensional Glove word vectors trained on the Common Crawl corpus https://nlp. stanford.edu/projects/glove/ 7The results are produced by the open implementation of Zhang et al. (2018). Page 6. Model …","url":["https://arxiv.org/pdf/1906.07510"]} +{"year":"2019","title":"Attenuating Bias in Word Vectors","authors":["S Dev, J Phillips - arXiv preprint arXiv:1901.07656, 2019"],"snippet":"Page 1. Attenuating Bias in Word Vectors Sunipa Dev Jeff Phillips University of Utah University of Utah Abstract Word vector representations are well developed tools for various NLP and Machine Learning tasks and are known …","url":["https://arxiv.org/pdf/1901.07656"]} +{"year":"2019","title":"Attribute Sentiment Scoring With Online Text Reviews: Accounting for Language Structure and Attribute Self-Selection","authors":["I Chakraborty, M Kim, K Sudhir - 2019"],"snippet":"Page 1. Attribute Sentiment Scoring with Online Text Reviews: Accounting for Language Structure and Attribute Self-Selection Ishita Chakraborty, Minkyung Kim, K. Sudhir Yale School of Management March 2019 We thank the …","url":["http://sics.haas.berkeley.edu/pdf_2019/paper_cks.pdf"]} +{"year":"2019","title":"Augmenting Neural Machine Translation through Round-Trip Training Approach","authors":["B Ahmadnia, BJ Dorr"],"snippet":"… For the high-resource scenario (En- Es) we utilize the English-Spanish bilingual corpora from WMT'18² [29] which contains 10M sentence pairs extracting from Europarl, News-Commentary, UN and Common Crawl collections …","url":["https://www.researchgate.net/profile/Benyamin_Ahmadnia/publication/336485784_Augmenting_Neural_Machine_Translation_through_Round-Trip_Training_Approach/links/5da2b06d92851c6b4bd100ab/Augmenting-Neural-Machine-Translation-through-Round-Trip-Training-Approach.pdf"]} +{"year":"2019","title":"AutoEncoder Guided Bootstrapping of Semantic Lexicon","authors":["C Hu, M Nakano, M Okumura - Pacific Rim International Conference on Artificial …, 2019"],"snippet":"… The top-5 best candidate instances were then added to the expanded seed list for the next iteration. 
For the inputs of the AutoEncoder model, Glove embeddings (300 dimensions) trained on Common Crawl were used (Pennington et al …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29894-4_17"]} +{"year":"2019","title":"Automated Dictionary Creation for Analyzing Text: An Illustration from Stereotype Content","authors":["G Nicolas, X Bai, ST Fiske - 2019"],"snippet":"… The Glove word embeddings used here were trained using around 840 billion words from the common crawl (a very large database of web text), and it has word-vectors with 300 dimensions for 2.2 million words (available …","url":["https://psyarxiv.com/afm8k/download?format=pdf"]} +{"year":"2019","title":"Automated extraction of attributes from natural language attribute-based access control (ABAC) Policies","authors":["M Alohaly, H Takabi, E Blanco - Cybersecurity, 2019"],"snippet":"The National Institute of Standards and Technology (NIST) has identified natural language policies as the preferred expression of policy and implicitly called for an automated translation of ABAC...","url":["https://link.springer.com/article/10.1186/s42400-018-0019-2"]} +{"year":"2019","title":"Automated Grading of Short Text Answers: Preliminary Results in a Course of Health Informatics","authors":["G De Gasperis, S Menini, S Tonelli, P Vittorini - International Conference on Web …, 2019"],"snippet":"… vectors representing both words and sub-words. To generate these embeddings we start from the pre-computed Italian language model 3 , trained on Common Crawl and Wikipedia. The latter, in particular, is suitable for our …","url":["https://link.springer.com/chapter/10.1007/978-3-030-35758-0_18"]} +{"year":"2019","title":"Automated lifelog moment retrieval based on image segmentation and similarity scores","authors":["S Taubert, S Kahl, D Kowerko, M Eibl - CLEF2019 Working Notes. CEUR Workshop …, 2019"],"snippet":"… Page 8. 4 Resources We only used resources which were open source. Our word vectors were pretrained GloVe vectors from Common Crawl which had 300 dimensions and a vocabulary of 2.2 million tokens [27]. Furthermore …","url":["http://ceur-ws.org/Vol-2380/paper_83.pdf"]} +{"year":"2019","title":"Automated organ-level classification of free-text pathology reports to support a radiology follow-up tracking engine","authors":["JM Steinkamp, CM Chambers, D Lalevic, HM Zafar… - Radiology: Artificial …, 2019"],"snippet":"… GloVe word embeddings pretrained on the Common Crawl dataset of web pages were used as input to both networks; no performance benefit was observed from continuing to train word embeddings during the experiments …","url":["https://pubs.rsna.org/doi/abs/10.1148/ryai.2019180052"]} +{"year":"2019","title":"Automatic Knowledge Extraction to build Semantic Web of Things Applications","authors":["M Noura, A Gyrard, S Heil, M Gaedke - IEEE Internet of Things Journal, 2019"],"snippet":"… The naming process used additional hypernym information derived from WebIsALOD12 the Linked Open Data version of the WebIsA Database, a database containing 11.7 million hypernymy re- lations extracted from the CommonCrawl web corpus …","url":["http://knoesis.wright.edu/sites/default/files/IEEE_IoT_Journal_2019_Concept_Extraction_Paper_Extended.pdf"]} +{"year":"2019","title":"Automatic stance detection on political discourse in Twitter","authors":["E Zotova - 2019"],"snippet":"Page 1. 
Automatic Stance Detection on Political Discourse in Twitter Author: Elena Zotova Advisors: Rodrigo Agerri and German Rigau Hizkuntzaren Azterketa eta Prozesamendua Language Analysis and Processing Master's Thesis …","url":["https://addi.ehu.es/bitstream/handle/10810/36184/MAL-Elena_Zotova.pdf?sequence=1&isAllowed=y"]} +{"year":"2019","title":"Automatic Text Difficulty Estimation Using Embeddings and Neural Networks","authors":["A Filighera, T Steuer, C Rensing - European Conference on Technology Enhanced …, 2019"],"snippet":"… Next, the resulting tokens are embedded. The following pre-trained embedding models were used in our experiments: the word2vec [14], the uncased Common Crawl GloVe [15], the original ELMo [16], the uncased …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29736-7_25"]} +{"year":"2019","title":"Automatic Text Summarization of News Articles in Serbian Language","authors":["D Kosmajac, V Kešelj - 2019 18th International Symposium INFOTEH …, 2019"],"snippet":"… 1). A. Auxiliary word2vec generated from Bosnian Wikidumps For input representation we used pre-trained glove word embeddings for Bosnian language2. They were trained on Common Crawl and Wikipedia using fastText [20] …","url":["https://ieeexplore.ieee.org/abstract/document/8717655/"]} +{"year":"2019","title":"Automating Analysis and Feedback to Improve Mathematics' Teachers' Classroom Discourse","authors":["A Suresh, T Sumner, J Jacobs, B Foland, W Ward - Paper submitted to the ninth …, 2019"],"snippet":"… GloVe or Global vectors for word representation is an unsupervised learning algorithm trained on aggregated word-word co-occurrence statistics from a corpus. In our model, we use the vectors trained on Common Crawl with 840 billion tokens and 300 dimensions …","url":["https://www.researchgate.net/profile/Jennifer_Jacobs8/publication/332233671_Automating_Analysis_and_Feedback_to_Improve_Mathematics%27_Teachers%27_Classroom_Discourse/links/5ca7c7394585157bd32535fc/Automating-Analysis-and-Feedback-to-Improve-Mathematics-Teachers-Classroom-Discourse.pdf"]} +{"year":"2019","title":"Automating the Fact-Checking Task: Challenges and Directions","authors":["DNE da Silva - 2019"],"snippet":"Page 1. Automating the Fact-Checking Task: Challenges and Directions Dissertation zur Erlangung des Doktorgrades (Dr. rer. nat.) der Mathematisch-Naturwissenschaftlichen Fakultät der Rheinischen Friedrich-Wilhelms-Universität Bonn …","url":["http://hss.ulb.uni-bonn.de/2019/5500/5500.pdf"]} +{"year":"2019","title":"Backlink Analyser using Apache Spark","authors":["M Zeeshan, S Asim, A Nadeem Anwar - 2019"],"snippet":"… Google Page Rank is assigned by Google based on different website factors (Design, Visitors, and Quality of content). 
Common Crawl [1] is an open repository for web crawl data … We will use subset of Common Crawl dataset good enough to demonstrate our system …","url":["http://dspace.cuilahore.edu.pk/xmlui/bitstream/handle/123456789/1438/SE29_Backlink%20Analyzer%20using%20Apache%20Spark.pdf?sequence=1&isAllowed=y"]} +{"year":"2019","title":"BEING PROFILED: COGITAS ERGO SUM: 10 Years of Profiling the European Citizen","authors":["I Baraliuc, E Bayamlioglu, M Hildebrandt, L Janssens - 2019"],"snippet":""} +{"year":"2019","title":"Beyond Bag-of-Concepts: Vectors of Locally Aggregated Concepts","authors":["M Grootendorst, J Vanschoren"],"snippet":"… Word2Vec pre-trained embeddings were trained on the Google News data set and contain vectors for 3 million English words.1 GloVe pre-trained embeddings were trained on the Common Crawl data set and contain vectors for 1.9 million English words.2 Pre-trained …","url":["https://ecmlpkdd2019.org/downloads/paper/489.pdf"]} +{"year":"2019","title":"Bidirectional Text Compression in External Memory","authors":["P Dinklage, J Ellert, J Fischer, D Köppl, M Penschuck - arXiv preprint arXiv …, 2019"],"snippet":"Page 1. Bidirectional Text Compression in External Memory Patrick Dinklage Technische Universität Dortmund, Department of Computer Science patrick.dinklage@ tu-dortmund.de Jonas Ellert Technische Universität Dortmund …","url":["https://arxiv.org/pdf/1907.03235"]} +{"year":"2019","title":"Big Bidirectional Insertion Representations for Documents","authors":["L Li, W Chan - arXiv preprint arXiv:1910.13034, 2019"],"snippet":"… Eu- Page 3. Figure 1: Big Bidirectional Insertion Representations for Documents roparl, Rapid, News-Commentary) and parallel sentence-level data (WikiTitles, Common Crawl, Paracrawl). The test set is newstest2019. The …","url":["https://arxiv.org/pdf/1910.13034"]} +{"year":"2019","title":"Big BiRD: A Large, Fine-Grained, Bigram Relatedness Dataset for Examining Semantic Composition","authors":["S Asaadi, SM Mohammad, S Kiritchenko"],"snippet":"… Word representations: We use GloVe word embeddings pre-trained on 840B-token CommonCrawl corpus16 and fastText word embeddings pre-trained on Common Crawl and Wikipedia using CBOW.17 For the …","url":["http://saifmohammad.com/WebDocs/BiRD-NAACL2019.pdf"]} +{"year":"2019","title":"Big Data Competence Center ScaDS Dresden/Leipzig: Overview and selected research activities","authors":["E Rahm, WE Nagel, E Peukert, R Jäkel, F Gärtner… - Datenbank-Spektrum"],"snippet":"… devise multiple research activities. The most promising direction resulted in the publication of the “Dresden WebTable Corpus” (DWTC) 2 [9] based on the freely available web crawl “CommonCrawl”. The DWTC corpus consists …","url":["https://link.springer.com/article/10.1007/s13222-018-00303-6"]} +{"year":"2019","title":"Boosting Implicit Discourse Relation Recognition with Connective-based Word Embeddings","authors":["C Wu, J Su, Y Chen, X Shi - Neurocomputing, 2019"],"snippet":"Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0925231219312196"]} +{"year":"2019","title":"Bornholmsk Natural Language Processing: Resources and Tools","authors":["L Derczynski, ITU Copenhagen, AS Kjeldsen - Proceedings of the Nordic Conference …, 2019"],"snippet":"… of compensating for the high data sparsity. Embeddings are induced with 300 dimensions, in order to be compatible with the public Common Crawl-based FastText embeddings. 
Having induced these embeddings for Bornholmsk …","url":["http://www.derczynski.com/papers/bornholmsk.pdf"]} +{"year":"2019","title":"BTC-2019: The 2019 Billion Triple Challenge Dataset","authors":["JM Herrera, A Hogan, T Käfer - International Semantic Web Conference, 2019"],"snippet":"… Meusel et al. [43] have published the WebDataCommons, extracting RDFa, Microdata and Microformats from the massive Common Crawl dataset; the result is a collection of 17,241,313,916 RDF triples, which, to the best of …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30796-7_11"]} +{"year":"2019","title":"Building and using parallel text for translation","authors":["M Simard - The Routledge Handbook of Translation and …, 2019"]} +{"year":"2019","title":"Building Knowledge Base through Deep Learning Relation Extraction and Wikidata","authors":["P Subasic, H Yin, X Lin"],"snippet":"… Very large high-quality training data set is then generated automatically by matching Common Crawl data with relation keywords extracted from knowledge database … We solve this problem by matching Common Crawl …","url":["http://ceur-ws.org/Vol-2350/paper5.pdf"]} +{"year":"2019","title":"BUILDING TYPE CLASSIFICATION FROM SOCIAL MEDIA TEXTS VIA GEO-SPATIAL TEXT MINING","authors":["M Häberle, M Werner, XX Zhu"],"snippet":"… We applied the ReLU activation function [18] after each hidden layer and the softmax function after the output layer. The neural network has been trained for 100 epochs and a batch size of 64 samples. 2http …","url":["https://elib.dlr.de/127637/1/preprint.pdf"]} +{"year":"2019","title":"Building Unbiased Comment Toxicity Classification Model with Natural Language Processing","authors":["LQ Huang, MJ Yu"],"snippet":"… For this project, we investigated word embeddings GloVe300D (11) and fastText300D (12), where both are pretrained on Common Crawl. We also implemented character level embedding layer introduced in the paper (13) …","url":["http://cs229.stanford.edu/proj2019spr/report/79.pdf"]} +{"year":"2019","title":"Building Unbiased Comment Toxicity Classification Model","authors":["LQ Huang, MJ Yu"],"snippet":"… combined them together to mitigate the potential biases in any embeddings. We experimented with GloVe 300D trained on Common Crawl [3] and fastText 300D trained on Common Crawl [2]. We also implemented …","url":["http://cs229.stanford.edu/proj2019spr/poster/79.pdf"]} +{"year":"2019","title":"CamemBERT: a Tasty French Language Model","authors":["L Martin, B Muller, PJO Suárez, Y Dupont, L Romary… - arXiv preprint arXiv …, 2019"],"snippet":"… Later (Grave et al., 2018) trained fastText word embeddings for 157 languages using Common Crawl and showed that using crawled data significantly increased the performance of the embeddings relatively to those trained only on Wikipedia …","url":["https://arxiv.org/pdf/1911.03894"]} +{"year":"2019","title":"Can Character Embeddings Improve Cause-of-Death Classification for Verbal Autopsy Narratives?","authors":["Z Yan, S Jeblee, G Hirst"],"snippet":"… Page 3. Figure 1: Embedding concatenation model architecture. d1 is the dimensionality of the word embedding (100), and d2 is the dimensionality of the character em- bedding (24). 
rived from GloVe vectors (Pennington et al., 2014) trained on Common Crawl …","url":["ftp://ftp.db.toronto.edu/public_html/cs/ftp/public_html/pub/gh/Yan-etal-2019.pdf"]} +{"year":"2019","title":"Capturing and measuring thematic relatedness","authors":["M Kacmajor, JD Kelleher - Language Resources and Evaluation, 2019"],"snippet":"Page 1. ORIGINAL PAPER Capturing and measuring thematic relatedness Magdalena Kacmajor1 • John D. Kelleher2 © The Author(s) 2019 Abstract In this paper we explain the difference between two aspects of semantic …","url":["https://link.springer.com/article/10.1007/s10579-019-09452-w"]} +{"year":"2019","title":"Capturing Discriminative Attributes Using Convolution Neural Network Over ConceptNet Numberbatch Embedding","authors":["V Vinayan, MA Kumar, KP Soman - … Research in Electronics, Computer Science and …, 2019"],"snippet":"… network. There the model is represented over the various dimensions of GloVe embedding, of those the 300 dimensions (trained over a common crawl corpus of size 840B) embedding performed the best for the task. Building …","url":["https://link.springer.com/chapter/10.1007/978-981-13-5802-9_69"]} +{"year":"2019","title":"Cardiff University at SemEval-2019 Task 4: Linguistic Features for Hyperpartisan News Detection","authors":["C Pérez-Almendros, LE Anke, S Schockaert - … of the 13th International Workshop on …, 2019"],"snippet":"… GloVe vectors (Pennington et al., 2014) for all the words occurring in them. To this end, we used the un- cased Common Crawl pretrained GloVe embeddings, with 300 dimensions and a vocabulary of 1.9 million words. The …","url":["https://www.aclweb.org/anthology/S19-2158"]} +{"year":"2019","title":"CatchPhish: detection of phishing websites by inspecting URLs","authors":["RS Rao, T Vaishnavi, AR Pais - Journal of Ambient Intelligence and Humanized …, 2019"],"snippet":"… 4.1 Dataset We have collected the dataset from three different sources. Legitimate sites are collected from common-crawl and Alexa database whereas phishing sites are collected from PhishTank … D2: Legitimate sites from common-crawl and phishing sites from PhishTank …","url":["https://link.springer.com/article/10.1007/s12652-019-01311-4"]} +{"year":"2019","title":"Categorising AWS Common Crawl Dataset using MapReduce","authors":["A Chiniah, A Chummun, Z Burkutally - 2019 Conference on Next Generation …, 2019"],"snippet":"Keeping track of websites connected to the Web is an impossible task given the amplitude and fluctuation of new sites being created and those going offline. In this paper we took the task to create a directory by categorising the websites using …","url":["https://ieeexplore.ieee.org/abstract/document/8883665/"]} +{"year":"2019","title":"Categorizing Comparative Sentences","authors":["A Panchenko, A Bondarenko, M Franzek, M Hagen…"],"snippet":"… containing both items from a web-scale corpus. 
Our sentence source is the publicly available in- dex of the DepCC (Panchenko et al., 2018), an index of more then 14 billion dependency-parsed English sentences from the Common Crawl filtered for duplicates …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2019-panchenkoetal-argminingws-compsent.pdf"]} +{"year":"2019","title":"Categorizing Emails Using Machine Learning with Textual Features","authors":["F Rudzicz, K Malikov - Advances in Artificial Intelligence: 32nd Canadian …","H Zhang, J Rangrej, S Rais, M Hillmer, F Rudzicz… - Canadian Conference on …, 2019"],"snippet":"… Lastly, domain-specific email inboxes such as DDM-Support will contain highly specific subject-matter terminology such as organization acronyms or hospital names, which are generally not present in large text corpora such as common crawl …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=7GiZDwAAQBAJ&oi=fnd&pg=PA3&dq=commoncrawl&ots=pxG3RFx1gg&sig=XpsK2sgWrDKsYko4OCiNHIo5Meg","https://link.springer.com/chapter/10.1007/978-3-030-18305-9_1"]} +{"year":"2019","title":"CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB","authors":["H Schwenk, G Wenzek, S Edunov, E Grave, A Joulin - arXiv preprint arXiv …, 2019"],"snippet":"… We are using ten snapshots of a curated common crawl corpus (Wenzek et al., 2019), totaling 32.7 billion unique sentences … we use the same underlying mining approach based on LASER and scale to a much larger …","url":["https://arxiv.org/pdf/1911.04944"]} +{"year":"2019","title":"CCMT 2019 Machine Translation Evaluation Report","authors":["M Yang, X Hu, H Xiong, J Wang, Y Jiaermuhamaiti… - China Conference on …, 2019","Y Jiaermuhamaiti, Z He, W Luo, S Huang - … , China, September 27–29, 2019, Revised …, 2019"],"snippet":"… (2) English and Chinese monolingual Corpus (Europarl v7/v8, News Commentary, Common Crawl, News Crawl, News Discussions, etc.); LDC for English and Gigaword for Chinese (LDC2011T07, LDC2009T13, LDC2007T07, LDC2009T27) …","url":["https://link.springer.com/chapter/10.1007/978-981-15-1721-1_11","https://link.springer.com/content/pdf/10.1007/978-981-15-1721-1.pdf#page=117"]} +{"year":"2019","title":"CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data","authors":["G Wenzek, MA Lachaux, A Conneau, V Chaudhary… - arXiv preprint arXiv …, 2019"],"snippet":"… (2019) used a large scale dataset based on Common Crawl to train text … Russian, Chinese and Urdu (∆ for average) for BERT-BASE models trained either on Wikipedia or CommonCrawl … We preprocess Common Crawl by …","url":["https://arxiv.org/pdf/1911.00359"]} +{"year":"2019","title":"Characterizing the impact of geometric properties of word embeddings on task performance","authors":["B Whitaker, D Newman-Griffis, A Haldar… - arXiv preprint arXiv …, 2019"],"snippet":"… 1 3M 300-d GoogleNews vectors from https:// code.google.com/archive/p/ word2vec/ 2 2M 300-d 840B Common Crawl vectors from https: //nlp.stanford.edu/projects/glove/ 3 1M 300-d WikiNews vectors with subword …","url":["https://arxiv.org/pdf/1904.04866"]} +{"year":"2019","title":"CITIZENS IN DATA LAND","authors":["AP DE VRIES"],"snippet":"… And Indie music'. 3 https://github.com/webis-de/wasp/. 
4 Consider a new service provided by The Common Crawl Foundation, http://commoncrawl.org/, or, alternatively, a new community service provided via public libraries …","url":["https://www.jstor.org/stable/pdf/j.ctvhrd092.19.pdf"]} +{"year":"2019","title":"CLaC at clpsych 2019: Fusion of neural features and predicted class probabilities for suicide risk assessment based on online posts","authors":["E Mohammadi, H Amini, L Kosseim - Proceedings of the Sixth Workshop on …, 2019"],"snippet":"… 2.1 Word Embeddings As shown in Figure 1, GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) have been used as pretrained word embeddings. The 300d GloVe word embedder has been pretrained on 840B tokens of web data from Common Crawl …","url":["https://www.aclweb.org/anthology/W19-3004"]} +{"year":"2019","title":"Classification Approaches to Identify Informative Tweets","authors":["P Aggarwal - Proceedings of the Student Research Workshop …, 2019"],"snippet":"… After these preprocessing steps, we represent each posting by a dense embedding, created by the mean of the individual words embeddings. We use the pretrained embeddings provided by (Mikolov et al., 2018) …","url":["https://www.researchgate.net/profile/Piush_Aggarwal/publication/335243720_Classification_approaches_to_identify_Informative_Tweets/links/5d5af768a6fdcc55e8198141/Classification-approaches-to-identify-Informative-Tweets.pdf"]} +{"year":"2019","title":"Classification of Anti-phishing Solutions","authors":["S Chanti, T Chithralekha - SN Computer Science, 2020"],"snippet":"… WestPac. PhishTank. PIRT report. Legitimate. –. Google whitelist. Manual. Alexa, Yahoo. Web crawler. World Wide Web. WestPac. Common crawl Google search. Data set size. Phishing. 203 Archives. 200 websites. 600 …","url":["https://link.springer.com/article/10.1007/s42979-019-0011-2"]} +{"year":"2019","title":"Classification of the Answers of the OMT","authors":["F Meyer, C Biemann"],"snippet":"Page 1. Universität Hamburg Fachbereich Informatik Fakultät für Mathematik, Informatik und Naturwissenschaften Classification of the Answers of the OMT Bachelor-Thesis Mensch-Computer-Interaktion Arbeitsbereich …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2019-ba-meyer-omt.pdf"]} +{"year":"2019","title":"Classification of virtual patent marking web-pages using machine learning techniques","authors":["A Calvo Ibañez - 2018"],"snippet":"… Having found the best model, this was then used to predict Common Crawl instances (nonseen observations). Figure 6.1 shows a schema of the first approach. Figure 6.1 – First Approach Framework Adapted from https://www.analyticsvidhya.com …","url":["https://upcommons.upc.edu/bitstream/handle/2117/127682/133741.pdf"]} +{"year":"2019","title":"Classification of Web History Tools Through Web Analysis","authors":["JRG Evangelista, DD de Oliveira Gatto, RJ Sassi - International Conference on …, 2019"],"snippet":"… 02. Archive.fo. http://archive.fo. 03. CashedPages. http://www.cachedpages.com. 04. CachedView. http://cachedview.com. 05. Common Crawl http://commoncrawl.org. 06. Screenshots.com. http://www.screenshots.com …","url":["https://link.springer.com/chapter/10.1007/978-3-030-22351-9_18"]} +{"year":"2019","title":"Classifying Pastebin Content Through the Generation of PasteCC Labeled Dataset","authors":["A Riesco, E Fidalgo, MW Al-Nabki, F Jáñez-Martino… - International Conference on …, 2019"],"snippet":"… Panchenko et al.
[17] took English text from Common Crawl and constructed a large web-scale corpus using text classification … Panchenko, A., Ruppert, E., Faralli, S., Ponzetto, SP, Biemann, C.: Building a web-scale dependency-parsed corpus from commoncrawl …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29859-3_39"]} +{"year":"2019","title":"Classifying Websites Using Word Vectors and Other Techniques: An Application of Zipf's Law","authors":["A Robles - 2019"],"snippet":"Page 1. CLASSIFYING WEBSITES USING WORD VECTORS AND OTHER TECHNIQUES: AN APPLICATION OF ZIPF'S LAW A THESIS Presented to the Department of Mathematics and Statistics California State University, Long Beach In Partial Fulfillment …","url":["http://search.proquest.com/openview/7a6b97a2b1aa841198182ea77c38efb6/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2019","title":"Clinical Data Extraction and Normalization of Cyrillic Electronic Health Records Via Deep-Learning Natural Language Processing","authors":["B Zhao - JCO Clinical Cancer Informatics, 2019"],"snippet":"… The medical text in the hematic was modified from the actual patient record for the purposes of illustration. (B) Word embeddings used were fastText embeddings pretrained on Wikipedia and Common Crawl data for English (EN) and Bulgarian (BG) …","url":["https://ascopubs.org/doi/pdfdirect/10.1200/CCI.19.00057"]} +{"year":"2019","title":"Cloze-driven Pretraining of Self-attention Networks","authors":["A Baevski, S Edunov, Y Liu, L Zettlemoyer, M Auli - arXiv preprint arXiv:1903.07785, 2019"],"snippet":"… We pretrain on individual examples as they occur in the training corpora (§5.1). For News Crawl this is individual sentences while on Wikipedia, Bookcorpus, and Common Crawl examples are paragraph length … Common Crawl …","url":["https://arxiv.org/pdf/1903.07785"]} +{"year":"2019","title":"ClustCrypt: Privacy-Preserving Clustering of Unstructured Big Data in the Cloud","authors":["SM Zobaed, S Ahmad, R Gottumukkala, MA Salehi"],"snippet":"Page 1. ClustCrypt: Privacy-Preserving Clustering of Unstructured Big Data in the Cloud SM Zobaed∗, Sahan Ahmad∗, Raju Gottumukkala†, and Mohsen Amini Salehi∗ ∗ School of Computing & Informatics †Informatics Research …","url":["https://www.researchgate.net/profile/Mohsen_Salehi2/publication/333561272_ClustCrypt_Privacy-Preserving_Clustering_of_Unstructured_Big_Data_in_the_Cloud/links/5cfaa9f0299bf13a38457fe9/ClustCrypt-Privacy-Preserving-Clustering-of-Unstructured-Big-Data-in-the-Cloud.pdf"]} +{"year":"2019","title":"CluWords: Exploiting Semantic Word Clustering Representation for Enhanced Topic Modeling","authors":["F Viegas, S Canuto, C Gomes, W Luiz, T Rosa, S Ribas… - Proceedings of the Twelfth …, 2019"],"snippet":"… In this section we compare the proposed CluWords with three pre-trained word embeddings spaces: (i) Word2Vec trained with GoogleNews [21]; (ii) FastText trained with WikiNews [22] and (iii) Fasttext trained on Common Crawl [22] …","url":["https://dl.acm.org/citation.cfm?id=3291032"]} +{"year":"2019","title":"CoFiF: A Corpus of Financial Reports in French Language","authors":["T Daudert, S Ahmadi - The First Workshop on Financial Technology and …, 2019"],"snippet":"… com/CoFiF/Corpus [Merity et al., 2016], or CommonCrawl 2.
Considering the domain of business and economics, especially for English, corpora such as the Wall Street Journal (WSJ) Corpus [Paul and Baker, 1992], the …","url":["https://www.aclweb.org/anthology/W19-55#page=31"]} +{"year":"2019","title":"Cognition and the Structure of Bias","authors":["GM Johnson - 2019"],"snippet":"Page 1. UCLA UCLA Electronic Theses and Dissertations Title Cognition and the Structure of Bias Permalink https://escholarship.org/uc/item/7hf582vz Author Johnson, Gabbrielle Michelle Publication Date 2019 Peer reviewed|Thesis/dissertation eScholarship.org …","url":["https://cloudfront.escholarship.org/dist/prd/content/qt7hf582vz/qt7hf582vz.pdf"]} +{"year":"2019","title":"CogniVal: A Framework for Cognitive Word Embedding Evaluation","authors":["N Hollenstein, A de la Torre, N Langer, C Zhang - arXiv preprint arXiv:1909.09001, 2019"],"snippet":"… et al., 2018). We evaluate the embeddings with and without subword information trained on 16 billion tokens of Wikipedia sentences as well as the ones trained on 600 billion tokens of Common Crawl. • ELMo models both …","url":["https://arxiv.org/pdf/1909.09001"]} +{"year":"2019","title":"Collaborative Attention Network with Word and N-Gram Sequences Modeling for Sentiment Classification","authors":["J Bao, L Zhang, B Han - International Conference on Artificial Neural Networks, 2019"],"snippet":"… words in text. In our experiments, we adopt global vectors for word representation (GloVe) [17] in the embedding layer of our model, which consists of 840 billion 300-dimension tokens trained on Common Crawl. We don't use …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30490-4_8"]} +{"year":"2019","title":"Combating Fake News with Adversarial Domain Adaptation and Neural Models","authors":["B Xu - 2019"],"snippet":"Page 1. Combating Fake News with Adversarial Domain Adaptation and Neural Models by Brian Xu BS, Massachusetts Institute of Technology (2018) Submitted to the Department of Electrical Engineering and Computer Science …","url":["https://groups.csail.mit.edu/sls/publications/2019/BrianXu_MEng-Thesis.pdf"]} +{"year":"2019","title":"Combining and learning word embedding with WordNet for semantic relatedness and similarity measurement","authors":["YY Lee, H Ke, TY Yen, HH Huang, HH Chen - Journal of the Association for …, 2019"],"snippet":"… ahttps://nlp.stanford.edu/projects/glove/. bhttp://dumps.wikimedia.org/enwiki/20140102/. chttps://catalog.ldc.upenn.edu/LDC2011T07. dThe Common Crawl corpus contains raw web page data, extracted metadata and text extractions. http://commoncrawl.org …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/asi.24289"]} +{"year":"2019","title":"Comparison of Machine Learning Approaches for Industry Classification Based on Textual Descriptions of Companies","authors":["A Tagarev, N Tulechki, S Boytcheva"],"snippet":"… GloVe vector embeddings are used. Specifically the 300-dimensional GloVe vectors trained on the large Common Crawl corpus of 840 billion tokens with a vocabulary of 2.2 million words. While there is no additional training …","url":["https://acl-bg.org/proceedings/2019/RANLP%202019/pdf/RANLP134.pdf"]} +{"year":"2019","title":"Complex Security Policy? A Longitudinal Analysis of Deployed Content Security Policies","authors":["S Roth, T Barron, S Calzavara, N Nikiforakis, B Stock"],"snippet":"… server response. To determine this IA-specific influence, we chose a second archive service to corroborate the IA's data.
In particular, Common Crawl (CC) [10] has been collecting snapshots of popular sites since 2013. For each …","url":["https://swag.cispa.saarland/papers/roth2020csp.pdf"]} +{"year":"2019","title":"Comprehensive Analysis of Aspect Term Extraction Methods using Various Text Embeddings","authors":["Ł Augustyniak, T Kajdanowicz, P Kazienko - arXiv preprint arXiv:1909.04917, 2019"],"snippet":"… 1. word2vec - protoplast model of any neural word embedding trained on Google News. 2. glove.840B - Global Vectors for Word Representation proposed by Stanford NLP Group, trained based on Common Crawl with …","url":["https://arxiv.org/pdf/1909.04917"]} +{"year":"2019","title":"Comprehensive trait attributions show that face impressions are organized in four dimensions","authors":["C Lin, U Keles, R Adolphs - 2019"],"snippet":"… we represented each of them with a vector of 300 computationally extracted semantic features (describing word embeddings and text classification) using a state-of-the-art neural network provided within the FastText library (61) …","url":["https://psyarxiv.com/87nex/download?format=pdf"]} +{"year":"2019","title":"Compressing Inverted Indexes with Recursive Graph Bisection: A Reproducibility Study","authors":["J Mackenzie, A Mallia, M Petri, JS Culpepper, T Suel"],"snippet":"… tool8, – Gov2 is a crawl of .gov domains from 2004, – ClueWeb09 and ClueWeb12 both correspond to the 'B' portion of the 2009 and 2012 ClueWeb crawls of the world wide web, respectively, and – CC-News contains English …","url":["https://jmmackenzie.io/pdf/mm+19-ecir.pdf"]} +{"year":"2019","title":"Computational Argumentation Synthesis as a Language Modeling Task","authors":["R El Baff, H Wachsmuth, K Al-Khatib, M Stede, B Stein"],"snippet":"Page 1. Computational Argumentation Synthesis as a Language Modeling Task Roxanne El Baff 1 Henning Wachsmuth 2 Khalid Al-Khatib 1 Manfred Stede 3 Benno Stein 1 1 Bauhaus-Universität Weimar, Weimar, Germany …","url":["https://webis.de/downloads/publications/papers/stein_2019y.pdf"]} +{"year":"2019","title":"Conceptor Debiasing of Word Representations Evaluated on WEAT","authors":["S Karve, L Ungar, J Sedoc - arXiv preprint arXiv:1906.05993, 2019"],"snippet":"… For context-independent embeddings, we used off-the-shelf Fasttext subword embeddings6, which were trained with subword information on the Common Crawl (600B tokens), the GloVe embeddings 7 trained on Wikipedia and …","url":["https://arxiv.org/pdf/1906.05993"]} +{"year":"2019","title":"Constructing the Wavelet Tree and Wavelet Matrix in Distributed Memory","authors":["P Dinklage, J Fischer, F Kurpicz - 2020 Proceedings of the Twenty-Second Workshop on …"],"snippet":"Page 1. Constructing the Wavelet Tree and Wavelet Matrix in Distributed Memory ∗ Patrick Dinklage † Johannes Fischer† Florian Kurpicz† Abstract The wavelet tree (Grossi et al. [SODA,2003]) is a compact index for texts …","url":["https://epubs.siam.org/doi/abs/10.1137/1.9781611976007.17"]} +{"year":"2019","title":"Content Similarity Analysis of Written Comments under Posts in Social Media","authors":["M Mozafari, R Farahbakhsh, N Crespi"],"snippet":"… The GloVe vectors were trained from 840 billion tokens of Common Crawl web data and have 300 dimensions [23]. 
This feature is extracted similar to the Google-word2vec similarity by using equation 6 for each post and comment pair …","url":["http://servicearchitecture.wp.imtbs-tsp.eu/files/2019/09/RC_SNAMS2019_37.pdf"]} +{"year":"2019","title":"Context Matters: Recovering Human Semantic Structure from Machine Learning Analysis of Large-Scale Text Corpora","authors":["MC Iordan, T Giallanza, CT Ellis, N Beckage, JD Cohen - arXiv preprint arXiv …, 2019"],"snippet":"… We also compared performance of the four Word2Vec embedding spaces to another commonly used embedding space known as GloVe28 for two main reasons; first, the GloVe embeddings are learned from the Common …","url":["https://arxiv.org/pdf/1910.06954"]} +{"year":"2019","title":"Context-Aware Crosslingual Mapping","authors":["H Aldarmaki, M Diab - arXiv preprint arXiv:1903.03243, 2019"],"snippet":"… data. We trained monolingual ELMo and FastText with default parameters. We used the WMT'13 commoncrawl data for cross-lingual mapping, and the WMT'13 test sets for evaluating sentence translation retrieval. For all …","url":["https://arxiv.org/pdf/1903.03243"]} +{"year":"2019","title":"Continual Learning for Sentence Representations Using Conceptors","authors":["T Liu, L Ungar, J Sedoc - arXiv preprint arXiv:1904.09187, 2019"],"snippet":"… zero-shot CA. Best results are in boldface and the second best results are underscored. dimensional GloVe vectors (trained on the 840 billion token Common Crawl) (Pennington et al., 2014). Additional experiments with Word2Vec …","url":["https://arxiv.org/pdf/1904.09187"]} +{"year":"2019","title":"CONTRIBUTIONS TO CLINICAL INFORMATION EXTRACTION IN PORTUGUESE: CORPORA, NAMED ENTITY RECOGNITION, WORD EMBEDDINGS","authors":["FA da Costa Lopes - 2019"],"snippet":"Page 1. Fábio André da Costa Lopes CONTRIBUTIONS TO CLINICAL INFORMATION EXTRACTION IN PORTUGUESE: CORPORA, NAMED ENTITY RECOGNITION, WORD EMBEDDINGS Thesis submitted to the Faculty of Science …","url":["https://www.researchgate.net/profile/Fabio_Lopes17/publication/335639414_Contributions_to_Clinical_Information_Extraction_in_Portuguese_Corpora_Named_Entity_Recognition_Word_Embeddings/links/5d717af2a6fdcc9961b1facd/Contributions-to-Clinical-Information-Extraction-in-Portuguese-Corpora-Named-Entity-Recognition-Word-Embeddings.pdf"]} +{"year":"2019","title":"Controlling Grammatical Error Correction Using Word Edit Rate","authors":["K Hotate, M Kaneko, S Katsumata, M Komachi - … of the 57th Conference of the …, 2019"],"snippet":"… the lowest WER). In the ranking experiment, we used a 5-gram KenLM (Heafield, 2011) with Kneser-Ney smoothing trained on the web-scale Common Crawl corpus (Junczys-Dowmunt and Grundkiewicz, 2016). As an evaluation …","url":["https://www.aclweb.org/anthology/P19-2020"]} +{"year":"2019","title":"Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading","authors":["L Qin, M Galley, C Brockett, X Liu, X Gao, B Dolan… - arXiv preprint arXiv …, 2019"],"snippet":"… informative response. To enable reproducibility of our experiments, we crawled web pages using Common Crawl (http://commoncrawl.org), a service that crawls web pages and makes its historical crawls available to the public.
We …","url":["https://arxiv.org/pdf/1906.02738"]} +{"year":"2019","title":"Correlation Coefficients and Semantic Textual Similarity","authors":["V Zhelezniak, A Savkov, A Shen, NY Hammerla - arXiv preprint arXiv:1905.07790, 2019"],"snippet":"… In all experiments we rely on the following publicly available word embeddings: GloVe (Pennington et al., 2014) trained on Common Crawl (840B tokens), fastText (Bojanowski et al., 2017) trained on Common Crawl …","url":["https://arxiv.org/pdf/1905.07790"]} +{"year":"2019","title":"Correlations between Word Vector Sets","authors":["V Zhelezniak, A Shen, D Busbridge, A Savkov…"],"snippet":"… For methods involving pretrained word embeddings, we use fastText (Bojanowski et al., 2017) trained on Common Crawl (600B tokens), as previous evaluations have indicated that fastText vectors have uniformly the best …","url":["https://www.april.sh/assets/files/emnlp2019.pdf"]} +{"year":"2019","title":"Creation of Sentence Embeddings Based on Topical Word Representations","authors":["P Wenig"],"snippet":"Page 1. Topical Sentence Embeddings Creation of Sentence Embeddings Based on Topical Word Representations Phillip Wenig 160361 Master's thesis to obtain the degree of Master of Science in Information Systems University of Liechtenstein …","url":["https://www.researchgate.net/profile/Phillip_Wenig/publication/330761695_Creation_of_Sentence_Embeddings_Based_on_Topical_Word_Representations/links/5c531d44458515a4c74d4719/Creation-of-Sentence-Embeddings-Based-on-Topical-Word-Representations.pdf"]} +{"year":"2019","title":"Cross-collection Multi-aspect Sentiment Analysis","authors":["H Kaporo - Computer Science On-line Conference, 2019"],"snippet":"… 3.1, we set \(n=10\), \(m=5\) and run the algorithm over topic-words returned by CPTM and word vectors from glove pre-trained embeddings 2 . These embeddings are trained over common crawl (Google data) and contain 2.2 Million words …","url":["https://link.springer.com/chapter/10.1007/978-3-030-19810-7_11"]} +{"year":"2019","title":"Cross-Domain Sentiment Classification With Bidirectional Contextualized Transformer Language Models","authors":["B Myagmar, J Li, S Kimura - IEEE Access, 2019"],"snippet":"… For pre-training data, in addition to the BookCorpus and English Wikipedia datasets, cased XLNet-Large model, referred to simply as XLNet henceforth, uses Giga5 (16GB text) [35], ClueWeb 2012-B [36] and Common Crawl [37] as part of its pre-training data …","url":["https://ieeexplore.ieee.org/iel7/6287639/8600701/08894409.pdf"]} +{"year":"2019","title":"Cross-Layer Optimization of Big Data Transfer Throughput and Energy Consumption","authors":["L Di Tacchio, MDSQZ Nine, T Kosar, MF Bulut… - 2019 IEEE 12th …, 2019"],"snippet":"… The algorithms have been compared using four different datasets: i) a small files dataset, including 20,000 HTML files from the Common Crawl project [1]; ii) a medium files dataset, consisting of 5,000 image files from Flickr …","url":["https://ieeexplore.ieee.org/abstract/document/8814571/"]} +{"year":"2019","title":"Cross-Lingual Alignment of Word & Sentence Embeddings","authors":["H Aldarmaki - 2019"],"snippet":"Page 1. Cross-Lingual Alignment of Word & Sentence Embeddings by Hanan Aldarmaki B.Sc. in Computer Engineering, May 2008, The American University of Sharjah M.Phil.
in Computer Speech, Text, and Internet Technology, May 2009, University of Cambridge …","url":["http://search.proquest.com/openview/97f58b5d99e2ed81065054a170f2dcda/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2019","title":"Cross-lingual Data Transformation and Combination for Text Classification","authors":["J Jiang, S Pang, X Zhao, L Wang, A Wen, H Liu… - arXiv preprint arXiv …, 2019"],"snippet":"… There are word vectors for 157 languages1, trained on Common Crawl and Wikipedia, and these models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window size of 5 and 10 negatives …","url":["https://arxiv.org/pdf/1906.09543"]} +{"year":"2019","title":"Cross-lingual Parsing with Polyglot Training and Multi-treebank Learning: A Faroese Case Study","authors":["J Barry, J Wagner, J Foster - arXiv preprint arXiv:1910.07938, 2019"],"snippet":"… the source languages. We use the precomputed Word2Vec embeddings11 released as part of the 2017 CoNLL shared task on UD parsing (Zeman et al., 2017) which were trained on CommonCrawl and Wikipedia. In order to …","url":["https://arxiv.org/pdf/1910.07938"]} +{"year":"2019","title":"Cross-Platform Evaluation for Italian Hate Speech Detection","authors":["M Corazza, S Menini, E Cabrio, S Tonelli, S Villata…"],"snippet":"… Generic embeddings: we use embedding spaces obtained directly from the Fasttext website4 for Italian. In particular, we use the Italian embeddings trained on Common Crawl and Wikipedia (Grave et al., 2018) with size 300 …","url":["http://ceur-ws.org/Vol-2481/paper22.pdf"]} +{"year":"2019","title":"CrossLang: the system of cross-lingual plagiarism detection","authors":["O Bakhteev, A Ogaltsov, A Khazov, K Safin… - 2019"],"snippet":"… They were obtained from open-source parallel OPUS [46] corpora, but also we mine parallel sentences from Common Crawl.4 Algo … the machine translation stage generates texts that differ too much from 3https://tensorflow …","url":["http://ml4ed.cc/attachments/Bakhteev.pdf"]} +{"year":"2019","title":"CUNI Submission for Low-Resource Languages in WMT News 2019","authors":["T Kocmi, O Bojar - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… Words in English Commoncrawl Russian-English 878k 17.4M 18.8M … Words News crawl 2018 EN 15.4M 344.3M Common Crawl KK 12.5M 189.2M News commentary KK 13.0k 218.7k News crawl Kk 772.9k 10.3M Common Crawl …","url":["https://www.aclweb.org/anthology/W19-5322"]} +{"year":"2019","title":"CUNI System for the WMT19 Robustness Task","authors":["J Helcl, J Libovický, M Popel - arXiv preprint arXiv:1906.09246, 2019"],"snippet":"… Corpus # Sentences Parallel 10^9 English-French Corpus 22,520k Europarl 2,007k News Commentary 200k UN Corpus 12,886k Common Crawl 3,224k Mono French News Crawl ('08–'14) 37,320k English News Crawl ('11–'17) 127,554k …","url":["https://arxiv.org/pdf/1906.09246"]} +{"year":"2019","title":"Curriculum Learning for Domain Adaptation in Neural Machine Translation","authors":["X Zhang, P Shapiro, G Kumar, P McNamee, M Carpuat… - arXiv preprint arXiv …, 2019"],"snippet":"… and WMT 2017 (Bojar et al., 2017), which contains data from several domains, eg parliamentary proceedings (Europarl, UN Parallel Corpus), political/economic news (news commentary, Rapid corpus), and …","url":["https://arxiv.org/pdf/1905.05816"]} +{"year":"2019","title":"Customizing Neural Machine Translation for Subtitling","authors":["E Matusov, P Wilken, Y Georgakopoulou - Proceedings of the Fourth Conference on …,
2019"],"snippet":"… These data included all other publicly available training data, including ParaCrawl, CommonCrawl, EUbookshop, JRCAcquis, EMEA, and other corpora from the OPUS collection … This was done to avoid oversampling …","url":["https://www.aclweb.org/anthology/W19-5209"]} +{"year":"2019","title":"D-NET: A Simple Framework for Improving the Generalization of Machine Reading Comprehension","authors":["H Li, X Zhang, Y Liu, Y Zhang, Q Wang, X Zhou, J Liu…"],"snippet":"… by introducing two-stream self attention. Besides BooksCorpus and Wikipedia, on which the BERT is trained, XLNET uses more corpus in its pretraining, including Giga5, ClueWeb and Common Crawl. In our system, we use …","url":["https://mrqa.github.io/assets/papers/64_Paper.pdf"]} +{"year":"2019","title":"Détection automatique de la thématique et adaptation des modèles de langage","authors":["S ZHANG"],"snippet":"Page 1. Si ZHANG SIGMA 2018/2019 AUTHÔT 52Av. Pierre Sémard 94200 Ivry-sur-Seine Détection automatique de la thématique et adaptation des modèles de langage from 01/03/2019 to 16/08/2019 Confidentiality : yes …","url":["ftp://ftp.irit.fr/IRIT/SAMOVA/INTERNSHIPS/zhang_si_2019.pdf"]} +{"year":"2019","title":"Danish Stance Classification and Rumour Resolution","authors":["AE Lillie, ER Middelboe - arXiv preprint arXiv:1907.01304, 2019"],"snippet":"Page 1. IT University of Copenhagen MSc in Software Development Thesis project KISPECI1SE Danish Stance Classification and Rumour Resolution Authors: Anders E. Lillie aedl@itu.dk Emil R. Middelboe erem@itu.dk …","url":["https://arxiv.org/pdf/1907.01304"]} +{"year":"2019","title":"Data Augmentation in Deep Learning for Hate Speech Detection in Lower Resource Settings","authors":["M Benk"],"snippet":"Page 1. Masterarbeit zur Erlangung des akademischen Grades Master of Arts der Philosophischen Fakultät der Universität Zürich Data Augmentation in Deep Learning for Hate Speech Detection in Lower Resource Settings …","url":["https://www.cl.uzh.ch/dam/jcr:57406b34-02c8-496d-9b95-9968cee3a134/benk_ma_data_augmentation.pdf"]} +{"year":"2019","title":"Data4UrbanMobility: Towards Holistic Data Analytics for Mobility Applications in Urban Regions","authors":["N Tempelmeier, Y Rietz, I Lishchuk, T Kruegel… - arXiv preprint arXiv …, 2019"],"snippet":"… Web Event-centric Web markup Annotated Web pages, eg us- ing schema.org. Web Data Commons event subset: 263 × 106 facts until November 2017 Common Crawl ToU RDFa, MicroData Focused crawls Event-centric crawls, news11 …","url":["https://arxiv.org/pdf/1903.12064"]} +{"year":"2019","title":"Debiasing Embeddings for Reduced Gender Bias in Text Classification","authors":["F Prost, N Thain, T Bolukbasi - Proceedings of the First Workshop on Gender Bias in …, 2019"],"snippet":"… 2 Classification Task This work utilizes the BiosBias dataset introduced in (De-Arteaga et al., 2019). This dataset consists of biographies identified within the Common Crawl 397,340 biographies were extracted from sixteen crawls from 2014 to 2018 …","url":["https://www.aclweb.org/anthology/W19-3810"]} +{"year":"2019","title":"DebiasingWord Embeddings Improves Multimodal Machine Translation","authors":["T Hirasawa, M Komachi - arXiv preprint arXiv:1905.10464, 2019"],"snippet":"Page 1. 
Debiasing Word Embeddings Improves Multimodal Machine Translation Tosho Hirasawa Tokyo Metropolitan University hirasawa-tosho@ed.tmu.ac.jp Mamoru Komachi Tokyo Metropolitan University komachi@tmu.ac.jp Abstract …","url":["https://arxiv.org/pdf/1905.10464"]} +{"year":"2019","title":"Deca: A Garbage Collection Optimizer for In-Memory Data Processing","authors":["X Shi, Z Ke, Y Zhou, H Jin, L Lu, X Zhang, L He, Z Hu… - ACM Transactions on …, 2019"],"snippet":"Page 1. 3 Deca: A Garbage Collection Optimizer for In-Memory Data Processing XUANHUA SHI and ZHIXIANG KE, Huazhong University of Science and Technology, China YONGLUAN ZHOU, University of Copenhagen, Denmark …","url":["https://dl.acm.org/ft_gateway.cfm?id=3310361&type=pdf"]} +{"year":"2019","title":"DECO: A Dataset of Annotated Spreadsheets for Layout and Table Recognition","authors":["E Koci, M Thiele, J Rehak, O Romero, W Lehner - the 15th IAPR International …, 2019"],"snippet":"… 50 files). The performance is manually assessed per file. 1http://info.nuix.com/Enron.html 2http://commoncrawl.org/ 3http://lemurproject.org/clueweb09.php/ Koci et al. [15] use a dataset of 216 annotated spreadsheets. Unlike …","url":["https://wwwdb.inf.tu-dresden.de/wp-content/uploads/deco_paper.pdf"]} +{"year":"2019","title":"Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing--A Tale of Two Parsers Revisited","authors":["A Kulmizev, M de Lhoneux, J Gontrum, E Fano, J Nivre - arXiv preprint arXiv …, 2019"],"snippet":"… 2018), who train ELMo on 20 million words randomly sampled from raw WikiDump and Common Crawl datasets for … In other words, while the standalone ELMo models were trained on the tokenized WikiDump and …","url":["https://arxiv.org/pdf/1908.07397"]} +{"year":"2019","title":"Deep Learning for NLP and Speech Recognition","authors":["U Kamath, J Liu, J Whitaker"],"snippet":"Page 1. Uday Kamath · John Liu · James Whitaker Deep Learning for NLP and Speech Recognition Page 2. Deep Learning for NLP and Speech Recognition Page 3. Uday Kamath • John Liu • James Whitaker Deep …","url":["https://link.springer.com/content/pdf/10.1007/978-3-030-14596-5.pdf"]} +{"year":"2019","title":"Deep learning for pollen allergy surveillance from twitter in Australia","authors":["J Rong, S Michalska, S Subramani, J Du, H Wang - BMC Medical Informatics and …, 2019"],"snippet":"… embeddings - as alternative. The pre-trained Common Crawl 840B tokens GloVe embeddings were downloaded from the website 2 . Both 50 dimensions (min) and 300 dimensions (max) options were tested. The HF embeddings …","url":["https://link.springer.com/article/10.1186/s12911-019-0921-x"]} +{"year":"2019","title":"Deep learning models for speech recognition","authors":["A Hannun, C Case, J Casper, B Catanzaro, G DIAMOS… - US Patent App. 16/542,243, 2019"],"snippet":"US20190371298A1 - Deep learning models for speech recognition - Google Patents. Deep learning models for speech recognition. Download PDF Info. Publication number US20190371298A1. US20190371298A1 US16/542,243 …","url":["https://patents.google.com/patent/US20190371298A1/en"]} +{"year":"2019","title":"Deep Learning vs.
Classic Models on a New Uzbek Sentiment Analysis Dataset","authors":["E Kuriyozov, S Matlatipov, MA Alonso…"],"snippet":"… We use as input the FastText pre-trained word embeddings of size 300 (Grave et al., 2018) for Uzbek language, that were created from Wiki pages and CommonCrawl, 9 which, to our knowledge, are the only available pre-trained …","url":["http://www.grupolys.org/biblioteca/KurMatAloGom2019a.pdf"]} +{"year":"2019","title":"Deep Learning-based Categorical and Dimensional Emotion Recognition for Written and Spoken Text","authors":["BT Atmaja, K Shirai, M Akagi - INA-Rxiv. June, 2019"],"snippet":"… meaning. Glove captured the global corpus statistics from the corpus, for example, a Wikipedia document or a common crawl document. In GloVe model, the cost function is given by V ∑ i,j=1 f(Xi,j)(uT i,jvj + bi + cj − log Xi,j)2 (2) …","url":["https://osf.io/fhu29/download/?format=pdf"]} +{"year":"2019","title":"Deep Structured Semantic Model for Recommendations in E-commerce","authors":["A Larionova, P Kazakova, N Nikitinsky - International Conference on Hybrid Artificial …, 2019"],"snippet":"… We generated a vector representation for each text by inferring FastText embeddings [4] from their tokens and averaging them (FastText model is pretrained on the Russian language subset of the Common Crawl corpus [10]) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29859-3_8"]} +{"year":"2019","title":"Deepening Hidden Representations from Pre-trained Language Models for Natural Language Understanding","authors":["J Yang, H Zhao - arXiv preprint arXiv:1911.01940, 2019","JYH Zhao"],"snippet":"… objective during pre-training on the other hand. In addition to BooksCorpus and English Wikipedia, it also uses Giga5, ClueWeb 2012-B and Common Crawl for pre-training. Trained with dynamic masking, large mini-batches …","url":["https://arxiv.org/pdf/1911.01940","https://deeplearn.org/arxiv/101390/deepening-hidden-representations-from-pre-trained-language-models-for-natural-language-understanding"]} +{"year":"2019","title":"Defending Against Neural Fake News","authors":["R Zellers, A Holtzman, H Rashkin, Y Bisk, A Farhadi… - arXiv preprint arXiv …, 2019"],"snippet":"… Dataset. We present RealNews, a large corpus of news articles from Common Crawl … Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News …","url":["https://arxiv.org/pdf/1905.12616"]} +{"year":"2019","title":"Deliverable 4.2: Data Integration (v. 1)","authors":["A Haller, JD Fernández, A Polleres, MR Kamdar - Work, 2019"],"snippet":"Page 1. Cyber-Physical Social Systems for City-wide Infrastructures Deliverable 4.2: Data Integration (v.1) Authors : Armin Haller, Javier D. Fernández, Axel Polleres, Maulik R. 
Kamdar Dissemination Level : Public Due date …","url":["http://cityspin.net/wp-content/uploads/2017/10/D4.2-Data-Integration.pdf"]} +{"year":"2019","title":"Design and implementation of an open source Greek POS Tagger and Entity Recognizer using spaCy","authors":["E Partalidou, E Spyromitros-Xioufis, S Doropoulos… - IEEE/WIC/ACM International …, 2019"],"snippet":"… 3.4 Evaluation and comparison of results In the first experiment the model was trained using pretrained vectors extracted from two different sources, Common Crawl and Wikipedia and can be found at the official FastText …","url":["https://dl.acm.org/citation.cfm?id=3352543"]} +{"year":"2019","title":"Detecting Aggression and Toxicity using a Multi Dimension Capsule Network","authors":["S Srivastava, P Khurana - Proceedings of the Third Workshop on Abusive …, 2019"],"snippet":"… The code for tokenization was taken from (Devlin et al., 2018) which seems to properly separate the word tokens and special characters. For training all our classification models, we have used fastText embeddings of dimension 300 trained on a common crawl …","url":["https://www.aclweb.org/anthology/W19-3517"]} +{"year":"2019","title":"Detecting associations between dietary supplement intake and sentiments within mental disorder tweets","authors":["Y Wang, Y Zhao, J Zhang, J Bian, R Zhang - Health Informatics Journal, 2019"],"snippet":"Many patients with mental disorders take dietary supplement, but their use patterns remain unclear. In this study, we developed a method to detect signals of associations between dietary supplement...","url":["https://journals.sagepub.com/doi/full/10.1177/1460458219867231"]} +{"year":"2019","title":"Detecting Clitics Related Orthographic Errors in Turkish","authors":["U Arıkan, O Güngör, S Uskudarli"],"snippet":"… For this task, GloVe was used with the dimension size of 300 and window size of 15. The pretrained word vectors for Turkish were obtained from the model trained on Common Crawl and Wikipedia using fastText (Grave et al., 2018) …","url":["https://www.researchgate.net/profile/Onur_Guengoer2/publication/337425054_Detecting_Clitics_Related_Orthographic_Errors_in_Turkish/links/5dd7e92792851c1feda68471/Detecting-Clitics-Related-Orthographic-Errors-in-Turkish.pdf"]} +{"year":"2019","title":"Detecting Hacker Threats: Performance of Word and Sentence Embedding Models in Identifying Hacker Communications","authors":["AL Queiroz, S Mckeever, B Keegan"],"snippet":"… model Source Dim. size MDL-1 SVM WEMB Word2vec Google News 300 MDL-2 SVM WEMB Glove Common Crawl 300 … MDL-4 SVM SEMB InferSent Wikipedia 4096 MDL-5 SVM SEMB SentEncoder Wiki, Web News, SNLI …","url":["http://aics2019.datascienceinstitute.ie/papers/aics_13.pdf"]} +{"year":"2019","title":"Detecting Incivility and Impoliteness in Online Discussions. Classification Approaches for German User Comments.","authors":["A Stoll, M Ziegele, O Quiring - 2019"],"snippet":"Page 1. DETECTING INCIVILITY AND IMPOLITENESS Preprint on SSRN, 22.11.2019 Anke Stoll HHU Düsseldorf anke.stoll@hhu.de Marc Ziegele HHU Düsseldorf marc.ziegele@hhu.de Oliver Quiring JGU Main quiring@uni-mainz.de ABSTRACT …","url":["https://osf.io/preprints/socarxiv/a47ch/download"]} +{"year":"2019","title":"Detecting offensive language using transfer learning","authors":["A de Bruijn, V Muhonen, T Albinonistraat, W Fokkink… - 2019"],"snippet":"Page 1. Detecting offensive language using transfer learning Alissa de Bruijn September 2019 Page 2. 
Master Thesis Business Analytics Detecting offensive language using transfer learning Author: Alissa de Bruijn …","url":["https://beta.vu.nl/nl/Images/stageverslag-bruijn_tcm235-926516.pdf"]} +{"year":"2019","title":"Detecting Relational States in Online Social Networks","authors":["J Zhang, L Tan, X Tao, T Pham, X Zhu, H Li, L Chang - 2018 5th International …, 2019"],"snippet":"… Since the social network we obtain from the repositories of common crawl contains missing links and partial information, stochastic estimations are used to measure the accuracy and reliability of our experimental MVVA results [19] …","url":["https://ieeexplore.ieee.org/abstract/document/8697237/"]} +{"year":"2019","title":"Detecting Topic-Oriented Speaker Stance in Conversational Speech}}","authors":["C Lai, B Alex, JD Moore, L Tian, T Hori, G Francesca - Proc. Interspeech 2019, 2019"],"snippet":"… 3.3. Lexical Features We build lexical representations over turns and topic segments using 300 dimensional GloVe word embeddings (Common Crawl, 840B tokens) [26]. We perform basic tokenization to map between the CallHome transcripts and word embeddings …","url":["https://www.isca-speech.org/archive/Interspeech_2019/pdfs/2632.pdf"]} +{"year":"2019","title":"Detection of contradictions in pairs of texts in Kazakh","authors":["Y Yamalutdinova - 2019"],"snippet":"Page 1. BACHELOR THESIS Yuliya Yamalutdinova Detection of contradictions in pairs of texts in Kazakh Institute of Formal and Applied Linguistics Supervisor of the bachelor thesis: Mgr. Rudolf Rosa, Ph.D. Study programme: Computer Science …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/109076/130266752.pdf?sequence=1"]} +{"year":"2019","title":"Determining How Citations Are Used in Citation Contexts","authors":["M Färber, A Sampath - International Conference on Theory and Practice of …, 2019"],"snippet":"… 2. See https://fasttext.cc/. The pretrained vectors were trained on Common Crawl and Wikipedia using the CBOW model of fastText. fastText operates at the character level, and therefore can generate vectors for words not seen in the training corpus …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30760-8_38"]} +{"year":"2019","title":"Development of a Song Lyric Corpus for the English Language","authors":["MAG Rodrigues, A de Paiva Oliveira, A Moreira - International Conference on …, 2019"],"snippet":"… languages. According to the authors, the texts that compose the corpus were extracted from CommonCrawl (commoncrawl.org), the largest publicly available general Web crawl to date with about 2 billion crawled URLs. The …","url":["https://link.springer.com/chapter/10.1007/978-3-030-23281-8_33"]} +{"year":"2019","title":"Development of an End-to-End Deep Learning Pipeline","authors":["M Nitsche, S Halbritter"],"snippet":"Page 1. Development of an End-to-End Deep Learning Pipeline Matthias Nitsche, Stephan Halbritter {matthias.nitsche, stephan.halbritter}@hawhamburg.de Hamburg University of Applied Sciences, Department of …","url":["https://users.informatik.haw-hamburg.de/~ubicomp/projekte/master2019-proj/nitsche-halbritter.pdf"]} +{"year":"2019","title":"Digital audio track suggestions for moods identified using analysis of objects in images from video content","authors":["N Brochu - US Patent App. 15/392,705, 2019"],"snippet":"… the response from the natural language model 240. Exemplary training corpuses of words can include, for example, Common Crawl (eg, 840B tokens, 2.2M vocabulary terms). 
The natural language model 240 can be improved …","url":["http://www.freepatentsonline.com/10276189.html"]} +{"year":"2019","title":"Diversicon: Pluggable Lexical Domain Knowledge","authors":["G Bella, F McNeill, D Leoni, FJQ Real, F Giunchiglia - Journal on Data Semantics, 2019"],"snippet":"Page 1. Journal on Data Semantics https://doi.org/10.1007/s13740-019-00107-1 ORIGINAL ARTICLE Diversicon: Pluggable Lexical Domain Knowledge Gábor Bella1 · Fiona McNeill2 · David Leoni1 · Francisco José Quesada Real3 · Fausto Giunchiglia1 …","url":["https://link.springer.com/article/10.1007/s13740-019-00107-1"]} +{"year":"2019","title":"Do It Like a Syntactician: Using Binary Gramaticality Judgements to Train Sentence Encoders and Assess Their Sensitivity to Syntactic Structure","authors":["P Gonzalez Martinez - 2019"],"snippet":"Page 1. City University of New York (CUNY) CUNY Academic Works Dissertations, Theses, and Capstone Projects Graduate Center 9-2019 Do It Like a Syntactician: Using Binary Gramaticality Judgements to Train …","url":["https://academicworks.cuny.edu/cgi/viewcontent.cgi?article=4521&context=gc_etds"]} +{"year":"2019","title":"Do It Like a Syntactician: Using Binary Grammaticality Judgements to Train Sentence Encoders and Assess Their Sensitivity to Syntactic Structure","authors":["PG Martinez - 2019"],"snippet":"Page 1. Do it like a syntactician: using binary grammaticality judgments to train sentence encoders and assess their sensitivity to syntactic structure by Pablo González Martínez A dissertation submitted to the Graduate Faculty in Linguistics in partial fulfillment of the …","url":["http://search.proquest.com/openview/f9bad35a6cc78921f32a9f8f6e0efde3/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2019","title":"Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?","authors":["I Vulić, G Glavaš, R Reichart, A Korhonen - arXiv preprint arXiv:1909.01638, 2019"],"snippet":"… Monolingual Embeddings. We use the 300-dim vectors of Grave et al. (2018) for all 15 languages, pretrained on Common Crawl and Wikipedia with fastText (Bojanowski et al., 2017).7 We trim all 5While BLI is an intrinsic task, as discussed by Glavaš et al …","url":["https://arxiv.org/pdf/1909.01638"]} +{"year":"2019","title":"Document Embedding Models on Environmental Legal Documents","authors":["S Kralj, Ž Urbancic, E Novak, K Kenda"],"snippet":"… Instead of having a large vocabulary of pre-computed word embeddings trained on Wikipedia and Common Crawl, this newly trained model is trained on documents from a more specific domain - resulting in a vocabulary …","url":["http://ailab.ijs.si/dunja/SiKDD2019/Papers/Kralj_Urbancic_Final.pdf"]} +{"year":"2019","title":"Document Summarization Using Sentence-Level Semantic Based on Word Embeddings","authors":["K Al-Sabahi, Z Zuping - International Journal of Software Engineering and …, 2019"],"snippet":"… Training is performed on aggregated global word-word co-occurrence statistics from a corpus. In this work, we use the one trained on Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B.300d.zip …","url":["https://www.worldscientific.com/doi/abs/10.1142/S0218194019500086"]} +{"year":"2019","title":"Domain adaptation for part-of-speech tagging of noisy user-generated text","authors":["L März, D Trautmann, B Roth - arXiv preprint arXiv:1905.08920, 2019"],"snippet":"… The pretrained vectors for German are based on Wikipedia articles and data from Common Crawl3.
We obtain 97.988 different embeddings for the tokens in TIGER and the Twitter corpus of which 75.819 were already contained …","url":["https://arxiv.org/pdf/1905.08920"]} +{"year":"2019","title":"Domain-specific word embeddings for patent classification","authors":["J Risch, R Krestel - Data Technologies and Applications, 2019"],"snippet":"… used to train word embeddings. It contains more than twice the number of tokens of the English Wikipedia (16bn) and is only exceeded by the Common Crawl data set, which consists of 600bn tokens. We assume that the embeddings …","url":["https://www.emeraldinsight.com/doi/abs/10.1108/DTA-01-2019-0002"]} +{"year":"2019","title":"Dual Monolingual Cross-Entropy Delta Filtering of Noisy Parallel Data","authors":["A Axelrod, A Kumar, S Sloto - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… gual English corpus comparable in size and content to the Sinhala one by randomly selecting 150k lines from Wikipedia and 6M lines from Common Crawl … Each SentencePiece model was trained on 1M lines of monolin …","url":["https://www.aclweb.org/anthology/W19-5433"]} +{"year":"2019","title":"Dynamic Packed Compact Tries Revisited","authors":["K Tsuruta, D Köppl, S Kanda, Y Nakashima, S Inenaga… - arXiv preprint arXiv …, 2019"],"snippet":"… name column. • commoncrawl is a web crawl containing the ASCII-encoded content (without HTML tags) of random web pages extracted from Common Crawl. • vital is the main text extracted from the most vital Wikipedia articles …","url":["https://arxiv.org/pdf/1904.07467"]} +{"year":"2019","title":"Dynamically Route Hierarchical Structure Representation to Attentive Capsule for Text Classification","authors":["W Zheng, Z Zheng, H Wan, C Chen"],"snippet":"Page 1. Dynamically Route Hierarchical Structure Representation to Attentive Capsule for Text Classification Wanshan Zheng1,2 , Zibin Zheng1,2 , Hai Wan1 , Chuan Chen1,2 1School of Data and Computer Science, Sun Yat …","url":["https://www.ijcai.org/proceedings/2019/0759.pdf"]} +{"year":"2019","title":"EasyChair Preprint","authors":["NS Resolution - 2019"],"snippet":"… Fancellu et al. (2016) show in their work that Page 4. Crawl data set with 840B tokens. Additionally we will try 300-dimensional pre-trained fastText2 that were also trained on Common Crawl but on a subset of 600B tokens. This differs from Fancellu et al …","url":["https://easychair.org/publications/preprint_download/QHml"]} +{"year":"2019","title":"EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks","authors":["JW Wei, K Zou - arXiv preprint arXiv:1901.11196, 2019"],"snippet":"… We suspect that EDA will work with any thesaurus. Word embeddings. We use 300-dimensional Common-Crawl word embeddings trained using GloVe (Pennington et al., 2014). We suspect that EDA will work with any pre-trained word embeddings. CNN …","url":["https://arxiv.org/pdf/1901.11196"]} +{"year":"2019","title":"Edge Computing for User-Centric Secure Search on Cloud-Based Encrypted Big Data","authors":["S Ahmad, SM Zobaed, R Gottumukkala, MA Salehi - arXiv preprint arXiv:1908.03668, 2019"],"snippet":"… We used two datasets, namely Amazon Common Crawl Corpus (ACCC) [35] and Request For Comments (RFC) [36], that have distinct characteristics and volumes. 
ACCC is ≈ 150 terabytes, contains web contents, and is not domain-specific …","url":["https://arxiv.org/pdf/1908.03668"]} +{"year":"2019","title":"EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction","authors":["D Bouchacourt, L Denoyer - arXiv preprint arXiv:1905.11852, 2019"],"snippet":"… We test on the full test dataset composed of 7,600 samples. We use pre-trained word vectors trained on Common Crawl [8], and keep them fixed … We use pre-trained word vectors trained on Common Crawl [8], and keep them fixed …","url":["https://arxiv.org/pdf/1905.11852"]} +{"year":"2019","title":"Efficient Classification and Unsupervised Keyphrase Extraction for Web Pages","authors":["T Haarman - 2019"],"snippet":"Page 1. MASTER'S THESIS Efficient Classification and Unsupervised Keyphrase Extraction for Web Pages Tim Haarman s2404184 Department of Artificial Intelligence University of Groningen, The Netherlands Primary …","url":["https://www.ai.rug.nl/~mwiering/Thesis_Tim_Haarman.pdf"]} +{"year":"2019","title":"Efficient Contextual Representation Learning With Continuous Outputs","authors":["LH Li, PH Chen, CJ Hsieh, KW Chang - Transactions of the Association for …, 2019"],"snippet":"…","url":["https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00289"]} +{"year":"2019","title":"Efficient Contextual Representation Learning Without Softmax Layer","authors":["LH Li, PH Chen, CJ Hsieh, KW Chang - arXiv preprint arXiv:1902.11269, 2019"],"snippet":"… The output layer is a sampled softmax with 8192 negative samples per batch. This model is provided in AllenNLP by Peters et al. (2018a). • ELMO-S: The input layer is the FastText embedding trained on Common Crawl (Mikolov et al …","url":["https://arxiv.org/pdf/1902.11269"]} +{"year":"2019","title":"Efficient Sentence Embedding using Discrete Cosine Transform","authors":["N Almarwani, H Aldarmaki, M Diab - arXiv preprint arXiv:1909.03104, 2019"],"snippet":"… CoordInv Coordination Inversion Table 1: Probing Tasks 3.2 Experimental setup For the word embeddings, we use pre-trained FastText embeddings of size 300 (Mikolov et al., 2018) trained on Common-Crawl. We generate DCT …","url":["https://arxiv.org/pdf/1909.03104"]} +{"year":"2019","title":"Embedding Imputation with Grounded Language Information","authors":["Z Yang, C Zhu, V Sachidananda, E Darve - arXiv preprint arXiv:1906.03753, 2019"],"snippet":"… KG2Vec 0.02% 7% 0.04% 12% 58.6 56.9 60.1 54.3 GloVe Common Crawl 1% 29% 2% 44% 44.0 33.0 45.1 27.3 … We test on two types of pre-trained word vectors GloVe (Common crawl, cased 300d) and ConceptNet Numberbatch (300d) …","url":["https://arxiv.org/pdf/1906.03753"]} +{"year":"2019","title":"EmbNum+: Effective, Efficient, and Robust Semantic Labeling for Numerical Values","authors":["P Nguyen, K Nguyen, R Ichise, H Takeda - New Generation Computing, 2019"],"snippet":"… Data portals. For example, 233 million tables were extracted from the July 2015 version of the Common Crawl [10].1 Additionally, 200,000 tables from 232 Open Data portals were analyzed by Mitlohner et al. [12].
These resources …","url":["https://link.springer.com/article/10.1007/s00354-019-00076-w"]} +{"year":"2019","title":"Emerging Cross-lingual Structure in Pretrained Language Models","authors":["A Conneau, S Wu, H Li, L Zettlemoyer, V Stoyanov - … of the 58th Annual Meeting of …, 2020","S Wu, A Conneau, H Li, L Zettlemoyer, V Stoyanov - arXiv preprint arXiv:1911.01464, 2019"],"snippet":"… We consider domain difference by training on Wikipedia for English and a random subset of Common Crawl of the same size for the other languages (Wiki-CC). We also consider a model trained with Wikipedia only, the same as XLM (Default) for comparison …","url":["https://arxiv.org/pdf/1911.01464","https://www.aclweb.org/anthology/2020.acl-main.536.pdf"]} +{"year":"2019","title":"Emoji Powered Capsule Network to Detect Type and Target of Offensive Posts in Social Media","authors":["H Hettiarachchi, T Ranasinghe"],"snippet":"… Also character embeddings handle infrequent words better than word2vec embedding as later one suffers from lack of enough training opportunity for those rare words. We used fasttext embeddings pre trained on Common Crawl (Mikolov et al., 2018) …","url":["https://www.researchgate.net/profile/Tharindu_Ranasinghe2/publication/336775156_Emoji_Powered_Capsule_Network_to_Detect_Type_and_Target_of_Offensive_Posts_in_Social_Media/links/5db1a79992851c577eba8219/Emoji-Powered-Capsule-Network-to-Detect-Type-and-Target-of-Offensive-Posts-in-Social-Media.pdf"]} +{"year":"2019","title":"EmoLabel: Semi-Automatic Methodology for Emotion Annotation of Social Media Text","authors":["L Canales, W Daelemans, E Boldrini, P Martínez-Barco - IEEE Transactions on …, 2019"],"snippet":"…","url":["https://ieeexplore.ieee.org/abstract/document/8758380/"]} +{"year":"2019","title":"EMOMINER at SemEval-2019 Task 3: A Stacked BiLSTM Architecture for Contextual Emotion Detection in Text","authors":["N Chakravartula, V Indurthi - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… them with GloVe vectors. As a result, Glove vectors will have syntactic information of words (Rezaeinia et al., 2017). 3.2 Feature Extraction • Word Embeddings: Glove840B - common crawl (Pennington et al., 2014) pre-trained …","url":["https://www.aclweb.org/anthology/S19-2033"]} +{"year":"2019","title":"Emotional Embeddings: Refining Word Embeddings to Capture Emotional Content of Words","authors":["A Seyeditabari, N Tabari, S Gholizadeh, W Zadrozny - arXiv preprint arXiv …, 2019"],"snippet":"… vector spaces used here are: • Word2Vec trained full English Wikipedia dump • GloVe from their own website • fastText trained with subword information on Common Crawl • ConceptNet Numberbatch It is clear that each emotionally …","url":["https://arxiv.org/pdf/1906.00112"]} +{"year":"2019","title":"Encoder-Decoder Network with Cross-Match Mechanism for Answer Selection","authors":["Z Xie, X Yuan, J Wang, S Ju - China National Conference on Chinese Computational …, 2019"],"snippet":"… 4.2 Implementation Details.
We initialized word embedding with 300d-GloVe vectors pre-trained from the 840B Common Crawl corpus [8], while the word embeddings for the out-of-vocabulary words were initialized randomly …","url":["https://link.springer.com/chapter/10.1007/978-3-030-32381-3_6"]} +{"year":"2019","title":"End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots","authors":["Y Yoon, WR Ko, M Jang, J Lee, J Kim"],"snippet":"… We used the pretrained word embedding model GloVe, trained on the Common Crawl corpus [5]. The dimension of word embedding is 300, and a zero vector is used for unknown words. A gesture is represented as a sequence of human poses …","url":["http://robotics.auckland.ac.nz/wp-content/uploads/2018/06/Final_ETRI_YoungwooYoon.pdf"]} +{"year":"2019","title":"End-to-end Neural Information Retrieval","authors":["W Yang - 2019"],"snippet":"Page 1. End-to-end Neural Information Retrieval by Wei Yang A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master in Computer Science Waterloo, Ontario, Canada, 2019 © Wei Yang 2019 Page 2 …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/14597/Yang_Wei.pdf?sequence=4&isAllowed=y"]} +{"year":"2019","title":"End-to-End Speech Recognition","authors":["U Kamath, J Liu, J Whitaker - Deep Learning for NLP and Speech Recognition, 2019"],"snippet":"… of certain words. Therefore, an n-gram language model was trained using the KenLM [Hea+ 13] toolkit on the Common Crawl Repository, 1 using the 400,000 most frequent words from 250 million lines of text. The decoding …","url":["https://link.springer.com/chapter/10.1007/978-3-030-14596-5_12"]} +{"year":"2019","title":"English-Czech Systems in WMT19: Document-Level Transformer","authors":["M Popel, D Macháček, M Auersperger, O Bojar… - arXiv preprint arXiv …, 2019"],"snippet":"… brevity. sentence words (k) data set pairs (k) EN CS CzEng 1.7 57 065 618 424 543 184 Europarl v7 647 15 625 13 000 News Commentary v12 211 4 544 4 057 CommonCrawl 162 3 349 2 927 WikiTitles 361 896 840 EN NewsCrawl …","url":["https://arxiv.org/pdf/1907.12750"]} +{"year":"2019","title":"Enhancing AMR-to-Text Generation with Dual Graph Representations","authors":["LFR Ribeiro, C Gardent, I Gurevych - arXiv preprint arXiv:1909.00352, 2019"],"snippet":"… 5 Experiments and Discussion Implementation Details We extract vocabularies (size of 20,000) from the training sets and initialize the node embeddings from GloVe word embeddings (Pennington et al., 2014) on Common Crawl …","url":["https://arxiv.org/pdf/1909.00352"]} +{"year":"2019","title":"Enhancing Semantic Word Representations by Embedding Deeper Word Relationships","authors":["A Nugaliyadde, KW Wong, F Sohel, H Xie - arXiv preprint arXiv:1901.07176, 2019"],"snippet":"… 2 https://commoncrawl.org/ differentiating similarity from association and relatedness which is reflected in table 1.
The context to create the word embedding in order to test on SimLex-999 is created based on Common …","url":["https://arxiv.org/pdf/1901.07176"]} +{"year":"2019","title":"Environmental hazards, rigid institutions, and transformative change: How drought affects the consideration of water and climate impacts in infrastructure management","authors":["N Ulibarri, TA Scott - Global Environmental Change, 2019"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0959378019302213"]} +{"year":"2019","title":"eTranslation's Submissions to the WMT 2019 News Translation Task","authors":["C Oravecz, K Bontcheva, A Lardilleux, L Tihanyi… - Proceedings of the Fourth …, 2019"],"snippet":"… En→De the reduction in ParaCrawl was from 31M to 18M segments and in CommonCrawl from 2.3M to 1.4M segments with a drop of 0.2 BLEU points compared to using the full sets3. No additional cleaning was applied to …","url":["https://www.aclweb.org/anthology/W19-5334"]} +{"year":"2019","title":"Europarl-ST: A Multilingual Corpus For Speech Translation Of Parliamentary Debates","authors":["J Iranzo-Sánchez, JA Silvestre-Cerdà, J Jorge… - arXiv preprint arXiv …, 2019"],"snippet":"… De↔Fr eubookshop, JRC-Acquis, 14.3 TildeModel En↔Es commoncrawl, eubookshop, 21.1 EU-TT2, UN, Wikipedia En↔Fr commoncrawl, giga, 38.2 undoc, news-commentary Es↔Fr DGT, eubookshop, 37.2 JRC-Acquis, UNPC …","url":["https://arxiv.org/pdf/1911.03167"]} +{"year":"2019","title":"Evaluating Commonsense in Pre-trained Language Models","authors":["X Zhou, Y Zhang, L Cui, D Huang - arXiv preprint arXiv:1911.11931, 2019"],"snippet":"… Note that XLNet-base is trained with the same data as BERT, while XLNet-large is trained with a larger dataset that consists of 32.98B subword pieces coming from Wiki, BookCorpus, Giga5, ClueWeb, and Common Crawl. RoBERTa (Liu et al …","url":["https://arxiv.org/pdf/1911.11931"]} +{"year":"2019","title":"Evaluating KGR10 Polish word embeddings in the recognition of temporal expressions using BiLSTM-CRF.","authors":["J Kocoń, M Gawor - arXiv preprint arXiv:1904.04055, 2019"],"snippet":"… The second one, called FASTTEXT4, is original FastText word embeddings set, created for 157 languages (including Polish). Authors used Wikipedia and Common Crawl5 as the linguistic data source … C2 Common Crawl …","url":["https://arxiv.org/pdf/1904.04055"]} +{"year":"2019","title":"Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models","authors":["C Hokamp, J Glover, D Gholipour - arXiv preprint arXiv:1906.09675, 2019"],"snippet":"… We use all available parallel data from the WMT19 news-translation task for training, with the exception of commoncrawl, which we found to be very noisy after manually checking a sample of the data, and paracrawl, which …","url":["https://arxiv.org/pdf/1906.09675"]} +{"year":"2019","title":"Evaluation of basic modules for isolated spelling error correction in Polish texts","authors":["S Rutkowski - arXiv preprint arXiv:1905.10810, 2019"],"snippet":"… How this representation is constructed is informed by the whole corpus on which the embedder was trained. The pretrained ELMo model that we used (Che et al., 2018) was trained on Wikipedia and Common Crawl corpora of Polish …","url":["https://arxiv.org/pdf/1905.10810"]} +{"year":"2019","title":"Evaluation of Czech Distributional Thesauri","authors":["P Rychlý - RASLAN 2019 Recent Advances in Slavonic Natural …, 2019"],"snippet":"… The results are summarized in Table 2.
The czTenTen12 corpus was evaluated with Sketch Engine thesaurus and also with word vectors compiled by FastText. We have also included prebuild model from Common Crawl. Table …","url":["http://raslan2019.nlp-consulting.net/proceedings/raslan19.pdf#page=145","https://nlp.fi.muni.cz/raslan/raslan19.pdf#page=145"]} +{"year":"2019","title":"Evaluation of State Of Art Open-source ASR Engines with Local Inferencing","authors":["B Rizk"],"snippet":"Page 1. Institute Of Information Systems (iisys) Hof University in exchange for Media Engineering and Technology Faculty German University in Cairo Evaluation of State Of Art Open-source ASR Engines with Local …","url":["https://www.researchgate.net/profile/Basem_Rizk/publication/335524542_Evaluation_of_State_Of_Art_Open-source_ASR_Engines_with_Local_Inferencing/links/5d6aa4ae299bf1808d5c87dd/Evaluation-of-State-Of-Art-Open-source-ASR-Engines-with-Local-Inferencing.pdf"]} +{"year":"2019","title":"Evaluation of vector embedding models in clustering of text documents","authors":["T Walkowiak, M Gniewkowski"],"snippet":"… The second group of sources of word2vec models for Polish are web pages of word embedding tools like fastText, ELMo and BERT. They were trained on Polish Common Crawl and Wikipedia. However, the BERT …","url":["https://acl-bg.org/proceedings/2019/RANLP%202019/pdf/RANLP149.pdf"]} +{"year":"2019","title":"Event-Argument Linking in Hindi for Information Extraction in Disaster Domain","authors":["SK Sahoo, S Saha, A Ekbal, P Bhattacharyya…"],"snippet":"… 3.3 Word Embedding For word embedding (WE) of each word, we use pre-trained fastText [5] word vectors. These embeddings were trained on Hindi Common Crawl and Wikipedia dataset. The size of the word embedding used in our experiments is 300 …","url":["https://www.iitp.ac.in/~ai-nlp-ml/papers/Sovan_CICLing_2019.pdf"]} +{"year":"2019","title":"Evolution of the PAN Lab on Digital Text Forensics","authors":["P Rosso, M Potthast, B Stein, E Stamatatos, F Rangel… - … Retrieval Evaluation in a …, 2019"],"snippet":"… The static web search environment is comprised of the web search engine ChatNoir, which indexes the ClueWeb 2009, the ClueWeb 2012, and (as of 2017) the CommonCrawl, delivering search results in milliseconds while …","url":["https://link.springer.com/chapter/10.1007/978-3-030-22948-1_19"]} +{"year":"2019","title":"Example-Driven Question Answering","authors":["D Wang - 2019"],"snippet":"Page 1. August 19, 2019 DRAFT Example-Driven Question Answering Di Wang August 2019 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Thesis Committee: Eric Nyberg (chair) (Carnegie Mellon …","url":["http://www.cs.cmu.edu/~diw1/thesis.pdf"]} +{"year":"2019","title":"Explicit Discourse Argument Extraction for German","authors":["P Bourgonje, M Stede - International Conference on Text, Speech, and …, 2019"],"snippet":"… The word embeddings are trained on Common Crawl and Wikipedia [6]. We generated the part-of-speech embeddings from the TIGER corpus [4]. We use a CNN with four fully connected layers. Training this on all classes from Table 1 results in an accuracy of 94.52 …","url":["https://link.springer.com/chapter/10.1007/978-3-030-27947-9_3"]} +{"year":"2019","title":"Exploitation vs. 
exploration—computational temporal and semantic analysis explains semantic verbal fluency impairment in Alzheimer's disease","authors":["J Tröger, N Linz, A König, P Robert, J Alexandersson… - Neuropsychologia, 2019"],"snippet":"… For deriving semantic metrics, the semantic distance between produced words was calculated based on a fastText (Joulin et al., 2016) neural word embedding, pre-trained on the French Common Crawl and Wikipedia corpora (Grave et al., 2018; Linz et al., 2017) …","url":["https://www.sciencedirect.com/science/article/pii/S0028393218305116"]} +{"year":"2019","title":"Exploiting EuroVoc's Hierarchical Structure for Classifying Legal Documents","authors":["E Filtz, S Kirrane, A Polleres, G Wohlgenannt - … \" On the Move to Meaningful Internet …, 2019","G Wohlgenannt - On the Move to Meaningful Internet Systems: OTM …"],"snippet":"… First, we tested large-scale pre-trained language models trained with general-purpose text corpora such as GoogleNews and the CommonCrawl, but as expected both performed badly on the legal dataset, for example the Common …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=hm21DwAAQBAJ&oi=fnd&pg=PA164&dq=commoncrawl&ots=pdUzWNZpkR&sig=nBv58MNJuj5jkkROfHpNIYLbyTs","https://link.springer.com/chapter/10.1007/978-3-030-33246-4_10"]} +{"year":"2019","title":"Exploiting knowledge graphs for entity-centric prediction","authors":["S Jiang - 2018"],"snippet":"Page 1. © 2018 Shan Jiang Page 2. EXPLOITING KNOWLEDGE GRAPHS FOR ENTITY-CENTRIC PREDICTION BY SHAN JIANG DISSERTATION Submitted in partial fulfillment of the requirements for the degree of Doctor of …","url":["https://www.ideals.illinois.edu/bitstream/handle/2142/102463/JIANG-DISSERTATION-2018.pdf?sequence=1"]} +{"year":"2019","title":"Exploiting Temporal Relationships in Video Moment Localization with Natural Language","authors":["S Zhang, J Su, J Luo - arXiv preprint arXiv:1908.03846, 2019"],"snippet":"… extracted from VGG [24] fc7 layer, optical flow features are extracted from the penultimate layer [27] and the 300-d Glove feature [21] pretrained on Common Crawl (42 billion tokens) are used as the word embedding. The segment …","url":["https://arxiv.org/pdf/1908.03846"]} +{"year":"2019","title":"Exploiting the Hierarchical Structure of a Thesaurus for Document Classification","authors":["E Filtz, S Kirrane, A Polleres, G Wohlgenannt"],"snippet":"… First, we tested large-scale pre-trained language models trained with general-purpose text corpora such as GoogleNews and the CommonCrawl, but as expected both performed badly on the legal dataset, for example the …","url":["https://aic.ai.wu.ac.at/~polleres/publications/filt-etal-2019COOPIS.pdf"]} +{"year":"2019","title":"Explore FREDDY","authors":["M Günther, M Thiele, W Lehner, Z Yanakiev - BTW 2019, 2019"],"snippet":"… The configuration of the search function is defined in the sidebar (Figure 3b) just as in the query view. 4 Screencast on our FREDDY website https://wwwdb.inf.tu-dresden.de/research-projects/freddy/ 5 https://dblp.uni …","url":["https://dl.gi.de/bitstream/handle/20.500.12116/21558/E08-1.pdf?sequence=1&isAllowed=y"]} +{"year":"2019","title":"Explore FREDDY: Fast Word Embeddings in Database Systems","authors":["M Günther, Z Yanakiev, M Thiele, W Lehner"],"snippet":"… The configuration of the search function is defined in the sidebar (Figure 3b) just as in the query view.
4 Screencast on our FREDDY website https://wwwdb.inf.tu-dresden.de/research-projects/freddy/ 5 https://dblp.uni …","url":["https://btw.informatik.uni-rostock.de/download/tagungsband/E08-1.pdf"]} +{"year":"2019","title":"Exploring Numeracy in Word Embeddings","authors":["A Naik, A Ravichander, C Rose, E Hovy - Proceedings of the 57th Conference of the …, 2019"],"snippet":"… FastText (Bojanowski et al., 2017): Extended Skipgram model representing words as character n-grams to incorporate sub-word information. We evaluate Wikipedia and Common Crawl variants. 3.1 Retrained Word Vectors …","url":["https://www.aclweb.org/anthology/P19-1329"]} +{"year":"2019","title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer","authors":["C Raffel, N Shazeer, A Roberts, K Lee, S Narang… - arXiv preprint arXiv …, 2019"],"snippet":"… unsupervised pre-training for NLP is particularly attractive because unlabeled text data is available en masse thanks to the Internet – for example, the Common Crawl project2 produces about 20TB of text data extracted from web pages each month …","url":["https://arxiv.org/pdf/1910.10683"]} +{"year":"2019","title":"Extending Cross-Domain Knowledge Bases with Long Tail Entities using Web Table Data","authors":["Y Oulabi, C Bizer - genre, 2019"],"snippet":"… In a second experiment, we apply the system to a large corpus of web tables extracted from the Common Crawl. This experiment allows us to get an overall impression of the potential of web tables for augmenting knowledge bases with long tail entities …","url":["https://www.uni-mannheim.de/media/Einrichtungen/dws/Files_Research/Web-based_Systems/pub/OulabiBizer-LongTailEntities-EDBT2019.pdf"]} +{"year":"2019","title":"Extracting and Analyzing Context Information in User-Support Conversations on Twitter","authors":["D Martens, W Maalej - arXiv preprint arXiv:1907.13395, 2019"],"snippet":"… As the list of marketing names also includes common words (eg, 'five', 'go', or 'plus'), we used the natural language processing library spaCy [33] to remove words that appear in the vocabulary of the included …","url":["https://arxiv.org/pdf/1907.13395"]} +{"year":"2019","title":"Extracting Novel Facts from Tables for Knowledge Graph Completion (Extended version)","authors":["B Kruit, P Boncz, J Urbani - arXiv preprint arXiv:1907.00083, 2019"],"snippet":"… The first one is the T2D dataset [23], which contains a subset of the WDC Web Tables Corpus – a set of tables extracted from the CommonCrawl web scrape6. We use the latest available version of this dataset (v2, released 2017/02). In …","url":["https://arxiv.org/pdf/1907.00083"]} +{"year":"2019","title":"Extracting Novel Facts from Tables for Knowledge Graph Completion","authors":["B Kruit, P Boncz, J Urbani - International Semantic Web Conference, 2019"],"snippet":"… The first one is the T2D dataset [25], which contains a subset of the WDC Web Tables Corpus – a set of tables extracted from the CommonCrawl web scrape 2 . We use the latest available version of this dataset (v2, released 2017/02) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30793-6_21"]} +{"year":"2019","title":"Facebook AI's WAT19 Myanmar-English Translation Task Submission","authors":["PJ Chen, J Shen, M Le, V Chaudhary, A El-Kishky… - arXiv preprint arXiv …, 2019"],"snippet":"… For Myanmar language, we take five snapshots of the Commoncrawl dataset and combine them with the raw data from Buck et al.
(2014) … The Myanmar monolingual data we collect from Commoncrawl contains text in both Unicode and Zawgyi encodings …","url":["https://arxiv.org/pdf/1910.06848"]} +{"year":"2019","title":"Facebook FAIR's WMT19 News Translation Task Submission","authors":["N Ng, K Yee, A Baevski, M Ott, M Auli, S Edunov - arXiv preprint arXiv:1907.06616, 2019"],"snippet":"… We train two language models LI and LN on Newscrawl and Commoncrawl respectively, then score every sentence s in Commoncrawl by HI(s)−HN(s). We select a cutoff of 0.01, and use all sentences that score higher than …","url":["https://arxiv.org/pdf/1907.06616"]} +{"year":"2019","title":"Facilitating access to health web pages with different language complexity levels","authors":["M Alfano, B Lenzitti, D Taibi, M Helfert - 2019"],"snippet":"… The Web Data Commons (WDC) (Meusel, 2014) contains all Microformat, Microdata and RDFa data extracted from the open repository of web crawl data named Common Crawl (CC)16 … 15 http://webdatacommons.org/ 16 http://commoncrawl.org …","url":["http://doras.dcu.ie/23104/1/ICT4AWE_2019_30_CR.pdf"]} +{"year":"2019","title":"Fast and Accurate Network Embeddings via Very Sparse Random Projection","authors":["H Chen, SF Sultan, Y Tian, M Chen, S Skiena - arXiv preprint arXiv:1908.11512, 2019"],"snippet":"… WWW-200K and WWW-10K [11]: these graphs are derived from the Web graph provided by Common Crawl, where the nodes are hostnames and the edges are the hyperlinks between these websites. For simplicity, we treat this graph as an undirected graph …","url":["https://arxiv.org/pdf/1908.11512"]} +{"year":"2019","title":"Faster Neural Network Training with Data Echoing","authors":["D Choi, A Passos, CJ Shallue, GE Dahl - arXiv preprint arXiv:1907.05550, 2019"],"snippet":"… 2http://commoncrawl.org/2017/07/june-2017-crawl-archive-now-available/ 3Each time a training example is read from disk, it counts as a fresh example. 420k steps for LM1B, 60k for Common Crawl, 110k for ImageNet, 150k for CIFAR-10, and 30k for COCO …","url":["https://arxiv.org/pdf/1907.05550"]} +{"year":"2019","title":"FastSV: A Distributed-Memory Connected Component Algorithm with Fast Convergence","authors":["Y Zhang, A Azad, Z Hu - arXiv preprint arXiv:1910.05971, 2019"],"snippet":"Page 1. FastSV: A Distributed-Memory Connected Component Algorithm with Fast Convergence Yongzhe Zhang ∗ Ariful Azad † Zhenjiang Hu ‡ Abstract This paper presents a new distributed-memory algorithm called FastSV …","url":["https://arxiv.org/pdf/1910.05971"]} +{"year":"2019","title":"FastText-Based Intent Detection for Inflected Languages","authors":["K Balodis, D Deksne - Information, 2019"],"snippet":"… For the word embeddings released by Facebook, we used the ones trained on Wikipedia (https://fasttext.cc/docs/en/pretrained-vectors.html) because the ones trained on Common Crawl (https://fasttext.cc/docs/en/crawl-vectors.html) showed inferior results in our tests …","url":["https://www.mdpi.com/2078-2489/10/5/161/pdf"]} +{"year":"2019","title":"Feature Engineering for Text Representation","authors":["D Sarkar - Text Analytics with Python, 2019"],"snippet":"In the previous chapters, we saw how to understand, process, and wrangle text data.
However, all machine learning or deep learning models are limited because they cannot understand text data directly...","url":["https://link.springer.com/chapter/10.1007/978-1-4842-4354-1_4"]} +{"year":"2019","title":"Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels","authors":["L Lange, MA Hedderich, D Klakow - arXiv preprint arXiv:1910.06061, 2019"],"snippet":"… clustering. While the Brown clustering was trained on the relatively small Europarl corpus, k-Means clustering seems to benefit from the word embeddings trained on documents from the much larger common crawl. 7 Analysis …","url":["https://arxiv.org/pdf/1910.06061"]} +{"year":"2019","title":"Feature2Vec: Distributional semantic modelling of human property knowledge","authors":["S Derby, P Miller, B Devereux - arXiv preprint arXiv:1908.11439, 2019"],"snippet":"… For our experiments, we make use of the pretrained GloVe embeddings (Pennington et al., 2014) provided in the Spacy1 package trained on the Common Crawl2. The GloVe model includes 685,000 tokens … 1https://spacy …","url":["https://arxiv.org/pdf/1908.11439"]} +{"year":"2019","title":"Feeling Anxious? Perceiving Anxiety in Tweets using Machine Learning","authors":["D Gruda, S Hasan - Computers in Human Behavior, 2019"],"snippet":"… tweets. Words-to-vectors mapping is based on the deep neural network learning GloVe (Pennington, Socher, & Manning, 2014) embedding space built from the Common Crawl Web Data (42 Billion tokens, 1.9M vocab). The …","url":["https://www.sciencedirect.com/science/article/pii/S0747563219301608"]} +{"year":"2019","title":"FIESTA: Fast IdEntification of State-of-The-Art models using adaptive bandit algorithms","authors":["HB Moss, A Moore, DS Leslie, P Rayson - arXiv preprint arXiv:1906.12230, 2019"],"snippet":"… optimiser settings and the same regularisation. All words are lower cased and we use the same Glove common crawl 840B token 300 dimension word embedding (Pennington et al., 2014). We use variational (Gal and Ghahramani …","url":["https://arxiv.org/pdf/1906.12230"]} +{"year":"2019","title":"Figurative Usage Detection of Symptom Words to Improve Personal Health Mention Detection","authors":["A Iyer, A Joshi, S Karimi, R Sparks, C Paris - arXiv preprint arXiv:1906.05466, 2019"],"snippet":"… The first four are a random initialisation, and three pre-trained embeddings. The pretrained embeddings are: (a) word2vec (Mikolov et al., 2013); (b) GloVe (trained on Common Crawl) (Pennington et al., 2014); and, (c) Numberbatch (Speer et al., 2017) …","url":["https://arxiv.org/pdf/1906.05466"]} +{"year":"2019","title":"Finding Generalizable Evidence by Learning to Convince Q&A Models","authors":["E Perez, S Karamcheti, R Fergus, J Weston, D Kiela… - arXiv preprint arXiv …, 2019"],"snippet":"… fastText We define a function BoWFT that computes the average bag-of-words representation of some text using fastText embeddings (Joulin et al., 2017). We use 300-dimensional fastText word vectors pretrained on Common Crawl …","url":["https://arxiv.org/pdf/1909.05863"]} +{"year":"2019","title":"Findings of the First Shared Task on Machine Translation Robustness","authors":["X Li, P Michel, A Anastasopoulos, Y Belinkov… - arXiv preprint arXiv …, 2019"],"snippet":"… To explore effective approaches to leverage abundant out-of-domain parallel data. • To explore novel approaches to leverage abundant monolingual data on the Web (eg, tweets, Reddit comments, commoncrawl, etc.).
• To …","url":["https://arxiv.org/pdf/1906.11943"]} +{"year":"2019","title":"Findings of the WMT 2019 Shared Task on Parallel Corpus Filtering for Low-Resource Conditions","authors":["P Koehn, F Guzmán, V Chaudhary, J Pino - Proceedings of the Fourth Conference on …, 2019"],"snippet":"… Corpus Sentences Words Wikipedia Sinhala 155,946 4,695,602 Nepali 92,296 2,804,439 English 67,796,935 1,985,175,324 CommonCrawl Sinhala 5,178,491 110,270,445 Nepali 3,562,373 102,988,609 English 380,409,891 8,894,266,960 …","url":["https://www.aclweb.org/anthology/W19-5404"]} +{"year":"2019","title":"FlauBERT: Unsupervised Language Model Pre-training for French","authors":["H Le, L Vial, J Frej, V Segonne, M Coavoux… - arXiv preprint arXiv …, 2019"],"snippet":"… Common Crawl).11 The data were collected from three main sources: (1) monolingual data for French provided in WMT19 shared tasks (Li et al., 2019, 4 sub-corpora); (2) French text corpora offered in the OPUS collection …","url":["https://arxiv.org/pdf/1912.05372"]} +{"year":"2019","title":"Frame Augmented Alternating Attention Network for Video Question Answering","authors":["W Zhang, S Tang, Y Cao, S Pu, F Wu, Y Zhuang - IEEE Transactions on Multimedia, 2019"],"snippet":"…","url":["https://ieeexplore.ieee.org/abstract/document/8811730/"]} +{"year":"2019","title":"Frequency, acceptability, and selection: A case study of clause-embedding","authors":["AS White, K Rawlins"],"snippet":"Page 1. Frequency, acceptability, and selection: A case study of clause-embedding Aaron Steven White University of Rochester aaron.white@rochester.edu Kyle Rawlins Johns Hopkins University kgr@jhu.edu Abstract We investigate …","url":["https://ling.auf.net/lingbuzz/004596/current.pdf"]} +{"year":"2019","title":"From Legal to Technical Concept: Towards an Automated Classification of German Political Twitter Postings as Criminal Offenses","authors":["F Zufall, T Horsmann, T Zesch"],"snippet":"… We use a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) for classification.30 We use the 300-dimensional German pre-trained word embeddings provided by Grave et al. (2018), which are trained on the German common crawl …","url":["https://www.researchgate.net/profile/Frederike_Zufall/publication/331475806_From_Legal_to_Technical_Concept_Towards_an_Automated_Classification_of_German_Political_Twitter_Postings_as_Criminal_Offenses/links/5ccbe9b0a6fdcc4719838905/From-Legal-to-Technical-Concept-Towards-an-Automated-Classification-of-German-Political-Twitter-Postings-as-Criminal-Offenses.pdf"]} +{"year":"2019","title":"Frontiers in pattern recognition and artificial intelligence","authors":["B Marleah, N Nicola, SC Yee - 2019"],"snippet":""} +{"year":"2019","title":"Frowning Frodo, Wincing Leia, and a Seriously Great Friendship: Learning to Classify Emotional Relationships of Fictional Characters","authors":["E Kim, R Klinger - arXiv preprint arXiv:1903.12453, 2019"],"snippet":"… We obtain word vectors for the embedding layer from GloVe (pre-trained on Common Crawl, d = 300, Pennington et al., 2014) and initialize out-of-vocabulary terms with zeros (including the position indicators).
4 Experiments Experimental Setting …","url":["https://arxiv.org/pdf/1903.12453"]} +{"year":"2019","title":"Fusing Vector Space Models for Domain-Specific Applications","authors":["L Rettig, J Audiffren, P Cudré-Mauroux - arXiv preprint arXiv:1909.02307, 2019"],"snippet":"… Despite the convenience they bring, using such readily available, pre-trained models is often suboptimal in vertical applications [2], [3]; as these models are pre-trained on large, non-specific sources (eg, Wikipedia and the Common …","url":["https://arxiv.org/pdf/1909.02307"]} +{"year":"2019","title":"Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study","authors":["JA Balazs, Y Matsuo - arXiv preprint arXiv:1904.05584, 2019"],"snippet":"Page 1. Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study Jorge A. Balazs and Yutaka Matsuo Graduate School of Engineering The University of Tokyo {jorge, matsuo}@weblab.t.u-tokyo.ac.jp Abstract …","url":["https://arxiv.org/pdf/1904.05584"]} +{"year":"2019","title":"General Purpose Vector Representation for Swedish Documents: An application of Neural Language Models","authors":["S Hedström - 2019"],"snippet":"Page 1. General Purpose Vector Representation for Swedish Documents An application of Neural Language Models Simon Hedström Master's Thesis in Engineering Physics, Department of Physics, Umeå University, 2019 Page …","url":["https://umu.diva-portal.org/smash/get/diva2:1323994/FULLTEXT01.pdf"]} +{"year":"2019","title":"Generalizable prediction of academic performance from short texts on social media","authors":["I Smirnov - arXiv preprint arXiv:1912.00463, 2019"],"snippet":"… We obtained significantly better results with a model that used word-embeddings (see Methods). We also find that embeddings trained on the VK corpus outperform models trained on the Wikipedia and Common Crawl corpora (Table 1). 3 Page 4 …","url":["https://arxiv.org/pdf/1912.00463"]} +{"year":"2019","title":"Generalizing Question Answering System with Pre-trained Language Model Fine-tuning","authors":["D Su, Y Xu, GI Winata, P Xu, H Kim, Z Liu, P Fung - … of the 2nd Workshop on Machine …, 2019"],"snippet":"… (2009)) and Common Crawl (Buck et al., 1https://github.com/mrqa/MRQA-SharedTask-2019 2014) for pre-training … 2014. N-gram counts and language models from the common crawl. In LREC, volume 2, page 4. Citeseer …","url":["https://mrqa.github.io/assets/papers/63_Paper.pdf"]} +{"year":"2019","title":"Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model","authors":["P Vijayaraghavan, D Roy - arXiv preprint arXiv:1909.07873, 2019"],"snippet":"… These paraphrase datasets together contains text from various sources: Common Crawl, CzEng1.6, Europarl, News Commentary, Quora questions, and Twitter trending topic tweets. We do not use all the data for our pretraining …","url":["https://arxiv.org/pdf/1909.07873"]} +{"year":"2019","title":"Generating composite SQL queries from natural language","authors":["M De Groote - 2018"],"snippet":"… of the questions. We decided to use the Common Crawl embedding that is trained on 42 billion tokens, consists of a vocabulary of 1.9 million tokens and embeds these tokens in the 300-dimensional vector space5.
All the words …","url":["https://lib.ugent.be/fulltxt/RUG01/002/494/903/RUG01-002494903_2018_0001_AC.pdf"]} +{"year":"2019","title":"Generating Language-Independent Neural Sentence Embeddings for Natural Language Classification Tasks","authors":["S Erhardt"],"snippet":"… [Rud17] At the time this thesis was written, there are Word Embeddings for more than 150 languages, trained on Common Crawl1 and Wikipedia, available. [Rud17] 1An open repository of web crawl data that can be …","url":["https://www.social.in.tum.de/fileadmin/w00bwc/www/Gerhard_Hagerer/thesis.pdf"]} +{"year":"2019","title":"Generic Web Content Extraction with Open-Source Software","authors":["A Barbaresi"],"snippet":"… Because of the vastly increasing variety of corpora, text types and use cases, it becomes more and more difficult to assess the usefulness and appropriateness of certain web texts 1https://commoncrawl.org for given research objectives …","url":["https://corpora.linguistik.uni-erlangen.de/data/konvens/proceedings/papers/kaleidoskop/camera_ready_barbaresi.pdf"]} +{"year":"2019","title":"Geo-spatial text-mining from Twitter–a feature space analysis with a view toward building classification in urban regions","authors":["M Häberle, M Werner, XX Zhu - European Journal of Remote Sensing, 2019"],"snippet":"…","url":["https://www.tandfonline.com/doi/full/10.1080/22797254.2019.1586451"]} +{"year":"2019","title":"Ghmerti at SemEval-2019 Task 6: A Deep Word-and Character-based Approach to Offensive Language Identification","authors":["E Doostmohammadi, H Sameti, A Saffar - … of the 13th International Workshop on …, 2019"],"snippet":"… The indices include 256 of the most common characters, plus 0 for padding and 1 for unknown characters. 2. xw which is the embeddings of the words in the input tweet based on FastText's 600B token common crawl model (Mikolov et al., 2018) …","url":["https://www.aclweb.org/anthology/S19-2110"]} +{"year":"2019","title":"GLOSS: Generative Latent Optimization of Sentence Representations","authors":["SP Singh, A Fan, M Auli - arXiv preprint arXiv:1907.06385, 2019"],"snippet":"… representations. This could be as simple as using a bag-of-words averaging of Glove (Pennington et al., 2014) word embeddings trained on a corpus such as CommonCrawl, which we refer to as Glove-BoW. Methods such …","url":["https://arxiv.org/pdf/1907.06385"]} +{"year":"2019","title":"GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding","authors":["Z Zhu, S Xu, M Qu, J Tang - arXiv preprint arXiv:1903.00757, 2019"],"snippet":"Page 1. GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding Zhaocheng Zhu Mila - Québec AI Institute Université de Montréal zhaocheng.zhu@umontreal.ca Shizhen Xu Tsinghua University xsz12@mails.tsinghua.edu.cn …","url":["https://arxiv.org/pdf/1903.00757"]} +{"year":"2019","title":"Green AI","authors":["R Schwartz, J Dodge, NA Smith, O Etzioni - arXiv preprint arXiv:1907.10597, 2019"],"snippet":"… For example, the June 2019 Common Crawl contains 242 TB of uncompressed data,12 so even simple filtering to extract usable text is difficult … 11https://opensource.google.com/projects/open-images-dataset 12http://commoncrawl.org/2019/07 …","url":["https://arxiv.org/pdf/1907.10597"]} +{"year":"2019","title":"Grounded Response Generation Task at DSTC7","authors":["M Galley, C Brockett, X Gao, J Gao, B Dolan"],"snippet":"… Turn 4 still pretty incredible , but quite a bit different that 10,000 meters .
Table 1: Sample of the DSTC7 Sentence Generation data, which combines Reddit data (Turns 1-4) along with documents (extracted from Common Crawl) discussed in the conversations …","url":["http://workshop.colips.org/dstc7/papers/DSTC7_Task_2_overview_paper.pdf"]} +{"year":"2019","title":"Happy Together: Learning and Understanding Appraisal From Natural Language","authors":["A Rajendran, C Zhang, M Abdul-Mageed"],"snippet":"… language models (ULMFiT). Exploiting Simple GloVe Embeddings For the embedding layer, we obtain the 300-dimensional embedding vector for tokens using GloVe's Common Crawl pre-trained model [13]. GloVe embeddings …","url":["https://mageed.sites.olt.ubc.ca/files/2019/01/AffCon_aaai2019_happyDB.pdf"]} +{"year":"2019","title":"HARE: a Flexible Highlighting Annotator for Ranking and Exploration","authors":["D Newman-Griffis, E Fosler-Lussier - arXiv preprint arXiv:1908.11302, 2019"],"snippet":"… ated three commonly used benchmark embedding sets: word2vec skipgram (Mikolov et al., 2013) using GoogleNews,6 FastText skipgram with subword information on WikiNews,7 and GloVe (Pennington et al., 2014) on 840 …","url":["https://arxiv.org/pdf/1908.11302"]} +{"year":"2019","title":"HATEMINER at SemEval-2019 Task 5: Hate speech detection against Immigrants and Women in Twitter using a Multinomial Naive Bayes Classifier","authors":["N Chakravartula - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… Word Embeddings: Glove840B - common crawl, GloveTwitter27B - twitter crawl (Pennington et al., 2014) and fasttext - common crawl (Mikolov et al., 2018) pre-trained word embeddings are used to analyze their impact on the classification …","url":["https://www.aclweb.org/anthology/S19-2071"]} +{"year":"2019","title":"Health Suggestions: moving beyond the beta version","authors":["PMP dos Santos - 2019"],"snippet":"Page 1. FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO Health Suggestions: Moving Beyond the Beta Version Paulo Miguel Pereira dos Santos Master in Informatics and Computing Engineering Supervisor …","url":["https://repositorio-aberto.up.pt/bitstream/10216/121948/2/347008.2.pdf"]} +{"year":"2019","title":"Hierarchical Meta-Embeddings for Code-Switching Named Entity Recognition","authors":["GI Winata, Z Lin, J Shin, Z Liu, P Fung - arXiv preprint arXiv:1909.08504, 2019"],"snippet":"… We use FastText word embeddings trained from Common Crawl and Wikipedia (Grave et al., 2018) for English (es), Spanish (es), including four Romance languages: Catalan (ca), Portuguese (pt), French (fr), Italian …","url":["https://arxiv.org/pdf/1909.08504"]} +{"year":"2019","title":"High Quality ELMo Embeddings for Seven Less-Resourced Languages","authors":["M Ulčar, M Robnik-Šikonja - arXiv preprint arXiv:1911.10049, 2019"],"snippet":"… They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings (Ginter et al., 2017), which is a combination of Wikipedia dump and common crawl …","url":["https://arxiv.org/pdf/1911.10049"]} +{"year":"2019","title":"Hitachi at MRP 2019: Unified Encoder-to-Biaffine Network for Cross-Framework Meaning Representation Parsing","authors":["Y Koreeda, G Morio, T Morishita, H Ozaki, K Yanai - arXiv preprint arXiv:1910.01299, 2019"],"snippet":"… Named entity label Named entity (NE) recognition is applied to the input text (see Section 7.1).
GloVe We use 300-dimensional GloVe (Pennington et al., 2014) pretrained on Common Crawl2 which are kept fixed during the training …","url":["https://arxiv.org/pdf/1910.01299"]} +{"year":"2019","title":"HMM, Is This Ethical? Predicting the Ethics of Reddit Life Protips","authors":["M Coots, P Lu, L Wang"],"snippet":"… large corpus of text. GloVe representations have been trained on several large datasets that are publicly available, including corpuses from Wikipedia, Gigaword, Twitter, and Common Crawl [4]. 3. Task Definition Our problem is …","url":["https://madisoncoots.com/files/ethics.pdf"]} +{"year":"2019","title":"How Decoding Strategies Affect the Verifiability of Generated Text","authors":["L Massarelli, F Petroni, A Piktus, M Ott, T Rocktäschel… - arXiv preprint arXiv …, 2019"],"snippet":"… consisting of roughly 3 Billion Words; (iv) CC-NEWS, a de-duplicated subset of the English portion of the CommonCrawl news dataset (Nagel, 2016; Bakhtin et al., 2019; Liu et al., 2019a), which totals around 16 Billion words …","url":["https://arxiv.org/pdf/1911.03587"]} +{"year":"2019","title":"How to Ask Better Questions? A Large-Scale Multi-Domain Dataset for Rewriting Ill-Formed Questions","authors":["Z Chu, M Chen, J Chen, M Wang, K Gimpel, M Faruqui… - arXiv preprint arXiv …, 2019"],"snippet":"… and En↔Fr. The English-German translation models are trained on WMT datasets, including News Commentary 13, Europarl v7, and Common Crawl, and evaluated on newstest2013 for early stopping. On the newstest2013 …","url":["https://arxiv.org/pdf/1911.09247"]} +{"year":"2019","title":"How Well Do Embedding Models Capture Non-compositionality? A View from Multiword Expressions","authors":["N Nandakumar, T Baldwin, B Salehi - Proceedings of the 3rd Workshop on Evaluating …, 2019"],"snippet":"… It tokenises text at the character level. fastText We used the 300-dimensional fastText model pre-trained on Common Crawl and Wikipedia using CBOW (fastTextpre), as well as one trained over the same Wikipedia corpus4 using skip-gram (fastText) …","url":["https://www.aclweb.org/anthology/W19-2004"]} +{"year":"2019","title":"Hybrid Rule-Based Model for Phishing URLs Detection","authors":["KS Adewole, AG Akintola, SA Salihu, N Faruk… - International Conference for …, 2019","N Faruk, RG Jimoh - … International Conference, iCETiC 2019, London, UK …, 2019"],"snippet":"… 1. From this figure, data collected from different servers such as Yahoo, Alexa, Common Crawl, PhishTank and OpenPhish are preprocessed in order to extract meaningful features that can be used for categorizing phishing websites from legitimate ones …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=QF6mDwAAQBAJ&oi=fnd&pg=PA119&dq=commoncrawl&ots=T7vreYeKah&sig=sO3M90XucnzXO7OeF6horBncwb4","https://link.springer.com/chapter/10.1007/978-3-030-23943-5_9"]} +{"year":"2019","title":"Hybrid Words Representation for Airlines Sentiment Analysis","authors":["U Naseem, SK Khan, I Razzak, IA Hameed"],"snippet":"… GloVe uses ratios of co-occurrence probabilities. It is favourable to concatenate ELMo embeddings with traditional word embeddings.
In this work, we have used pre-trained GloVe embedding (trained on 840 billion token from common crawl) of 300 dimensions …","url":["https://www.researchgate.net/profile/Ibrahim_Hameed/publication/336579383_Hybrid_Words_Representation_for_Airlines_Sentiment_Analysis/links/5da6e53892851caa1ba6f8c6/Hybrid-Words-Representation-for-Airlines-Sentiment-Analysis.pdf"]} +{"year":"2019","title":"Hyper: Distributed Cloud Processing for Large-Scale Deep Learning Tasks","authors":["D Buniatyan - arXiv preprint arXiv:1910.07172, 2019"],"snippet":"… [4] MinIO high performance object storage server compatible with Amazon S3 API. https://github.com/minio/minio, 2018. [Online; accessed 31-May-2019]. [5] Common Crawl Dataset. https://commoncrawl.org, 2019. [Online; accessed 31-May-2019] …","url":["https://arxiv.org/pdf/1910.07172"]} +{"year":"2019","title":"Hyperparameter Tuning for Deep Learning in Natural Language Processing","authors":["A Aghaebrahimian, M Cieliebak - 2019"],"snippet":"… on the Common Crawl, one on 42 and the other on 840 billion tokens), FastText (Bojanowski et al., 2016), dependency based (Levy and Goldberg, 2014), and ELMo (Peters et al., 2018). As shown in Table 1, the Glove …","url":["http://ceur-ws.org/Vol-2458/paper5.pdf"]} +{"year":"2019","title":"Identification Of Bot Accounts In Twitter Using 2D CNNs On User-generated Contents","authors":["M Polignano, MG de Pinto, P Lops, G Semeraro - 2019"],"snippet":"… FastTextEmb)8: 300 dimensionality vectors, composed by a vocabulary of 2 million words and n-grams of the words, case sensitive and obtained from 600 billion of tokens trained on data crawled from generic Internet web pages by Common Crawl nonprofit organization; …","url":["https://www.researchgate.net/profile/Marco_Polignano/publication/334636395_Identification_Of_Bot_Accounts_In_Twitter_Using_2D_CNNs_On_User-generated_Contents/links/5d373c10a6fdcc370a59e892/Identification-Of-Bot-Accounts-In-Twitter-Using-2D-CNNs-On-User-generated-Contents.pdf"]} +{"year":"2019","title":"Identification of Good and Bad News on Twitter","authors":["P Aggarwal, A Aker"],"snippet":"… We use tf-idf representation for each vocabulary term. 5.2.3 Embeddings Finally, we also use fasttext based embedding (Mikolov et al., 2018) vectors which are trained on common crawl having 600 billion tokens. 5.3 Classifiers …","url":["https://www.researchgate.net/profile/Ahmet_Aker3/publication/334825190_Identification_of_Good_and_Bad_News_on_Twitter/links/5d42c34992851cd04697548a/Identification-of-Good-and-Bad-News-on-Twitter.pdf"]} +{"year":"2019","title":"Identifying and Addressing Structural Inequalities in the Representativeness of Geographic Technologies","authors":["IL Johnson - 2019"],"snippet":"… knowledge graphs (Wikipedia and Google [289]), word embeddings (Wikipedia, Twitter, and Common Crawl in GloVe embeddings [238]), object detection (Instagram hashtags and Facebook [292])—and adding …","url":["http://search.proquest.com/openview/dccae6679751f41f283b33f555947aa8/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2019","title":"Identifying transfer models for machine learning tasks","authors":["P Watson, B Bhattacharjee, NC CODELLA… - US Patent App. 15/982,622, 2019"],"snippet":"Publication number US20190354850A1 …","url":["https://patents.google.com/patent/US20190354850A1/en"]} +{"year":"2019","title":"Idiap Abstract Text Summarization System for German Text Summarization Task","authors":["S Parida, P Motlicek - 2019"],"snippet":"… The experiments performed over 1http://opennmt.net/OpenNMT-py/Summarization.html 2https://www.swisstext.org/ 3http://commoncrawl.org/ these datasets are described in the Section 4 (denoted as S1 experimental …","url":["http://ceur-ws.org/Vol-2458/paper9.pdf"]} +{"year":"2019","title":"IIT Varanasi at HASOC 2019: Hate Speech and Offensive Content Identification in Indo-European Languages","authors":["A Mishra, S Pal - Proceedings of the 11th annual meeting of the Forum …"],"snippet":"… embedding. One of the pretrained glove embeddings is based on the common crawl which represents each word in the dimension of 300, and the other one is based on Twitter data which represents each word in the dimension of 200 …","url":["http://irlab.daiict.ac.in/~Parth/T3-22.pdf"]} +{"year":"2019","title":"IIT-BHU at CIQ 2019: Classification of Insincere Questions","authors":["A Mishra, S Pal"],"snippet":"… Different versions of glove pre-trained embedding exist; however, we use embedding trained of dimension 300 on common crawl using 840B tokens and 2.2M vocabulary3. We generated random embedding of dimension 300 for out of vocabulary words …","url":["http://irlab.daiict.ac.in/~Parth/T5-4.pdf"]} +{"year":"2019","title":"Impact of Debiasing Word Embeddings on Information Retrieval","authors":["E Gerritse - 2019"],"snippet":"… Bolukbasi et al. [1] show that there is a high correlation in bias in Word2Vec trained on Google News and Glove trained on the common crawl, so we still cannot infer whether the method or the dataset is more important for creating the bias …","url":["http://www.emmagerritse.com/pdfs/FDIA_2019_paper.pdf"]} +{"year":"2019","title":"Improved Quality Estimation of Machine Translation with Pre-trained Language Representation","authors":["G Miao, H Di, J Xu, Z Yang, Y Chen, K Ouchi - CCF International Conference on …, 2019"],"snippet":"… The former is mainly obtained from the open news datasets of the WMT17 and WMT18 MT evaluation tasks, including five data sets: Europarl v7, Europarl v12, Europarl v13, Common Crawl corpus, and Rapid corpus of EU press releases …","url":["https://link.springer.com/chapter/10.1007/978-3-030-32233-5_32"]} +{"year":"2019","title":"Improving Conditioning in Context-Aware Sequence to Sequence Models","authors":["X Wang, J Weston, M Auli, Y Jernite - arXiv preprint arXiv:1911.09728, 2019"],"snippet":"… 2019) for LFQA. The dataset consists of 272,000 complex questions and answer pairs, along with supporting documents created by gathering and concatenating passages from CommonCrawl web pages which are relevant to the question …","url":["https://arxiv.org/pdf/1911.09728"]} +{"year":"2019","title":"Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data","authors":["W Zhao, L Wang, K Shen, R Jia, J Liu - arXiv preprint arXiv:1903.00138, 2019"],"snippet":"… We do not use reranking when evaluating the CoNLL-2014 data sets.
But we rerank the top 12 hypotheses using the language model trained on Common Crawl (Junczys-Dowmunt and Grundkiewicz, 2016) for …","url":["https://arxiv.org/pdf/1903.00138"]} +{"year":"2019","title":"Improving Implicit Stance Classification in Tweets Using Word and Sentence Embeddings","authors":["R Schaefer, M Stede - Joint German/Austrian Conference on Artificial …, 2019"],"snippet":"… combinations. 4.2 fastText Embeddings. We use pre-trained 300-dimensional fastText [11] word vectors that have been trained on Wikipedia and Common Crawl data. For training, an extension of the CBOW model has been used …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30179-8_26"]} +{"year":"2019","title":"Improving Named Entity Recognition with Commonsense Knowledge Pre-training","authors":["G Dekhili, NT Le, F Sadat - Pacific Rim Knowledge Acquisition Workshop, 2019"],"snippet":"… which is the concatenation of ConceptNet PPMI embeddings with Word2Vec embeddings trained on 100 billion words of Google News using skip-grams with negative sampling [14] and GloVe 1.2 embeddings trained on 840 billion words of the Common Crawl [16] …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30639-7_2"]} +{"year":"2019","title":"Improving Neural Machine Translation of Subtitles with Finetuning","authors":["S Reinsperger - 2019"],"snippet":"… 3 Results: 3.1 Parallel Corpora, 3.1.1 Europarl, 3.1.2 Common Crawl, 3.1.3 News Commentary, 3.1.4 Subtitles …","url":["http://www.simonrsp.com/masterthesis.pdf"]} +{"year":"2019","title":"Improving Neural Machine Translation Robustness via Data Augmentation: Beyond Back Translation","authors":["Z Li, L Specia - arXiv preprint arXiv:1910.03009, 2019"],"snippet":"… 3.1 Corpora We used all parallel corpora from the WMT19 Robustness Task on Fr↔En. For out-of-domain training, we used the WMT15 Fr↔En News Translation Task data, including Europarl v7, Common Crawl, UN, News Commentary v10, and Gigaword Corpora …","url":["https://arxiv.org/pdf/1910.03009"]} +{"year":"2019","title":"Improving Neural Machine Translation with Pre-trained Representation","authors":["R Weng, H Yu, S Huang, W Luo, J Chen - arXiv preprint arXiv:1908.07688, 2019"],"snippet":"… We use newstest2015 (NST15) as our validation set, and newstest2016 (NST16) as test sets 4. We use 40 million monolingual sentences from WMT-16 Common Crawl data-set … We use 5 million monolingual sentences …","url":["https://arxiv.org/pdf/1908.07688"]} +{"year":"2019","title":"Improving orienteering-based tourist trip planning with social sensing","authors":["F Persia, G Pilato, M Ge, P Bolzoni, D D'Auria… - Future Generation …, 2019"],"snippet":"… This is a popular technique in machine learning for uncovering subsymbolic meanings, such as word analogies.
We utilized a pre-trained word vector encoding for Italian provided by fastText [32], which was trained on Common Crawl and Wikipedia …","url":["https://www.sciencedirect.com/science/article/pii/S0167739X19303929"]} +{"year":"2019","title":"Improving Quality Estimation of Machine Translation by Using Pre-trained Language Representation","authors":["G Miao, H Di, J Xu, Z Yang, Y Chen, K Ouchi - China Conference on Machine …, 2019","Y Chen, K Ouchi - Machine Translation: 15th China Conference, CCMT …, 2019"],"snippet":"… Metrics We first train the bilingual expert model [9] with large-scale parallel corpus released for the WMT17/WMT18 News Machine Translation Task, which mainly consists of five data sets, including Europarl v7, Europarl v12 …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=WuK_DwAAQBAJ&oi=fnd&pg=PA11&dq=commoncrawl&ots=XTi4UL5q8i&sig=lCeqF4TBBuqQrg4EE0rN09FZeVs","https://link.springer.com/chapter/10.1007/978-981-15-1721-1_2"]} +{"year":"2019","title":"Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader","authors":["W Xiong, M Yu, S Chang, X Guo, WY Wang - arXiv preprint arXiv:1905.07098, 2019"],"snippet":"… Page 6. A Implementation Details Throughout our experiments, we use the 300-dimension GloVe embeddings trained on the Common Crawl corpus. The hidden dimension of LSTM and the dimension of entity embeddings are both 100 …","url":["https://arxiv.org/pdf/1905.07098"]} +{"year":"2019","title":"In-call virtual assistant","authors":["R Raanani, R Levy, MY Breakstone - US Patent App. 16/165,566, 2019"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["https://patentimages.storage.googleapis.com/b2/cd/2c/a7fa39e3002b4f/US20190057698A1.pdf"]} +{"year":"2019","title":"Incendiary News Detection","authors":["EB Coban, E Filatova - 2019"],"snippet":"… features. We run classification experiments for unigrams, and combination of uniand bi-grams. 10https://www.nltk.org/ 11http://scikit-learn.org 12https://github.com/ otuncelli/turkish-stemmer-python 13http://commoncrawl.org/ For …","url":["https://pdfs.semanticscholar.org/8c78/f9da879fc5936ef84dc7128db691d7042fef.pdf"]} +{"year":"2019","title":"Incorporating Domain Knowledge into Natural Language Inference on Clinical Texts","authors":["M Lu, Y Fang, F Yan, M Li - IEEE Access, 2019"],"snippet":"… two domain-specific corpus: • GloVe[CC]: GloVe embeddings [21], trained on Common Crawl. • fastText[BioASQ]: fastText embeddings [22], trained on PubMed abstracts from the BioASQ challenge [23]. • fastText[MIMIC-III]: fastText …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/08701433.pdf"]} +{"year":"2019","title":"Incorporating Syntactic Knowledge in Neural Quality Estimation for Machine Translation","authors":["N Ye, Y Wang, D Cai - China Conference on Machine Translation, 2019"],"snippet":"… One is the large-scale bilingual dataset for training the feature extraction module. 
It comes from the parallel corpus of WMT machine translation task, including Europarl v7, Common Crawl corpus, News Commentary v11 and so on …","url":["https://link.springer.com/chapter/10.1007/978-981-15-1721-1_3"]} +{"year":"2019","title":"Inducing Relational Knowledge from BERT","authors":["Z Bouraoui, J Camacho-Collados, S Schockaert - arXiv preprint arXiv:1911.12753, 2019"],"snippet":"… As static word embeddings for the baselines, we will use the Skip-gram word vectors that were pre-trained from the 100B words Google News data set6 (SG-GN) and GloVe word vectors which were pre-trained from the …","url":["https://arxiv.org/pdf/1911.12753"]} +{"year":"2019","title":"Inducing Schema.org Markup from Natural Language Context","authors":["GK Shahi, D Nandini, S Kumari - Kalpa Publications in Computing, 2019"],"snippet":"… extension, in 2012 another data hub called Web Data Commons [5] came up with structured data extracted from the Common Crawl … 5http://commoncrawl.org/ 6http://webdatacommons.org/ 7The WARC file format …","url":["https://easychair.org/publications/download/DXGr"]} +{"year":"2019","title":"Inferring Concept Hierarchies from Text Corpora via Hyperbolic Embeddings","authors":["M Le, S Roller, L Papaxanthos, D Kiela, M Nickel - arXiv preprint arXiv:1902.00913, 2019"],"snippet":"Page 1. Inferring Concept Hierarchies from Text Corpora via Hyperbolic Embeddings Matt Le1 and Stephen Roller1 and Laetitia Papaxanthos2 Douwe Kiela1 and Maximilian Nickel1 1Facebook AI Research, New York …","url":["https://arxiv.org/pdf/1902.00913"]} +{"year":"2019","title":"Information extraction","authors":["S Razniewski"],"snippet":"… 8 Page 9. Taxi [Panchenko et al., 2016] 1. Crawl domain-specific text corpora in addition to WP, Commoncrawl 2. Candidate hypernymy extraction 1. Via substrings • "biomedical science" isA "science" • "microbiology" isA "biology" • "toast with bacon" isA "toast" …","url":["https://www.mpi-inf.mpg.de/fileadmin/inf/d5/teaching/ws19-20_ie/5_Taxonomy_induction_coreference_disambiguation.pdf"]} +{"year":"2019","title":"InriaFBK Drawing Attention to Offensive Language at Germeval2019","authors":["M Corazza, S Menini, E Cabrio, S Tonelli, S Villata…"],"snippet":"… This is the main reason why we chose to use FastText embeddings (Bojanowski et al., 2016), pretrained on Common Crawl and Wikipedia 3. 4.3 Recurrent model We develop a simple recurrent neural network model and use it for all subtasks …","url":["https://corpora.linguistik.uni-erlangen.de/data/konvens/proceedings/papers/germeval/Germeval_Task_2_2019_paper_1.INRIA.pdf"]} +{"year":"2019","title":"Integrating Grammatical Features into CNN Model for Emotion Classification","authors":["AC Le - 2018 5th NAFOSTED Conference on Information and …, 2018"],"snippet":"… a sentence s … In this study we used the vector set GloVe [16], it is pretrained word vectors for Common Crawl (glove.42B.300d) with 300 dimensions for word embeddings to use for English data. For Vietnamese emotion …","url":["https://ieeexplore.ieee.org/abstract/document/8606875/"]} +{"year":"2019","title":"Integrating UMLS for Early Detection of Signs of Anorexia","authors":["FM Plaza-del-Arco, P López-Úbeda, MC Díaz-Galiano… - 2019"],"snippet":"… Specifically, we use Page 6. the available pre-trained statistical models for English “en core web md” which version is 1.2.0.
It is composed of 685k keys, 20k unique vectors (300 dimensions) and it was trained on OntoNotes …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2019/paper_76.pdf"]} +{"year":"2019","title":"Integrating word embeddings and document topics with deep learning in a video classification framework","authors":["Z Kastrati, AS Imran, A Kurti - Pattern Recognition Letters, 2019"],"snippet":"… GloVe contains word embeddings for a vocabulary of 400K words trained on 42 billion words from Wikipedia pages and newswire, and fastText includes word embeddings for a vocabulary of 2 million words trained on 600 billion tokens from Common Crawl …","url":["https://www.sciencedirect.com/science/article/pii/S0167865519302326"]} +{"year":"2019","title":"Intelligent sentiment analysis approach using edge computing‐based deep learning technique","authors":["H Sankar, V Subramaniyaswamy, V Vijayakumar… - Software: Practice and Experience"],"snippet":"… Word2Vec, 300d, 3 Million, 100 Billion. Common Crawl, 300d, 42 Billion, 1.9 Million. Common Crawl, 300d, 840 Billion, 2.2 Million. The main drawback of unsupervised word embedding learning is that it does not hold the sentiment …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2687"]} +{"year":"2019","title":"Interactive Language Learning by Question Answering","authors":["X Yuan, MA Cote, J Fu, Z Lin, C Pal, Y Bengio… - arXiv preprint arXiv …, 2019"],"snippet":"Page 1. Interactive Language Learning by Question Answering Xingdi Yuan♥∗ Marc-Alexandre Côté♥∗ Jie Fu♣♠ Zhouhan Lin♦♠ Christopher Pal♣♠ Yoshua Bengio♦♠ Adam Trischler♥ ♥Microsoft Research, Montréal ♣Polytechnique …","url":["https://arxiv.org/pdf/1908.10909"]} +{"year":"2019","title":"Interactive Machine Comprehension with Information Seeking Agents","authors":["X Yuan, J Fu, MA Cote, Y Tay, C Pal, A Trischler - arXiv preprint arXiv:1908.10449, 2019"],"snippet":"… Word embeddings are initialized by the 300-dimension fastText (Mikolov et al. 2018) vectors trained on Common Crawl (600B tokens), and are fixed during training. Character embeddings are initialized by 200-dimension random vectors …","url":["https://arxiv.org/pdf/1908.10449"]} +{"year":"2019","title":"Internet of Things Anomaly Detection using Multivariate Analysis","authors":["S Ezekiel, AA Alshehri, L Pearlstein, XW Wu, A Lutz - The 3rd ICICPE 2019 Conference …"],"snippet":"… Our model uses the GloVe (Pennington et al., 2014) 300-dimensional vectors trained on the Common Crawl corpus with 42B tokens as word level features, as this resulted in the best performance in preliminary experiments …","url":["http://icicpe.org/wp-content/uploads/2019/12/ICICPE-2019-vol.31.pdf#page=90"]} +{"year":"2019","title":"Iot-based call assistant device","authors":["R Raanani, R Levy, MY Breakstone - US Patent App. 16/168,663, 2019"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["https://patentimages.storage.googleapis.com/c3/a1/97/799532a8db7406/US20190057079A1.pdf"]} +{"year":"2019","title":"Iterative Keyword Optimization","authors":["A Elyashar, M Reuben, R Puzis"],"snippet":"… The model was trained on Common Crawl 4 and Wikipedia 5 using the fastText library 6. 
We used Euclidean as the distance measure … 4 http://commoncrawl.org/ 5 https://www.wikipedia.org/ 6 https://fasttext …","url":["http://sbp-brims.org/2019/proceedings/papers/working_papers/Elyashar.pdf"]} +{"year":"2019","title":"JHU 2019 Robustness Task System Description","authors":["M Post, K Duh - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… the best million lines each of CommonCrawl, Gigaword, and the UN corpus; and • the MTNT training data. Data sizes are indicated in Table 1. dataset segments words Europarl 2.0m 50.2m News Commentary 200k 4.4m …","url":["https://www.aclweb.org/anthology/W19-5366"]} +{"year":"2019","title":"Johns Hopkins University Submission for WMT News Translation Task","authors":["K Marchisio, YK Lal, P Koehn - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… sampled bitext (x2). ParaCrawl1 and Common Crawl2 are filtered similarly, and added to form the training set for the final models. We … Crawl. ParaCrawl and Common Crawl were combined into a single corpus before filtering …","url":["https://www.aclweb.org/anthology/W19-5329"]} +{"year":"2019","title":"Joint Training for Neural Machine Translation","authors":["Y Cheng"],"snippet":"Page 1. Springer Theses: Recognizing Outstanding Ph.D. Research. Yong Cheng. Joint Training for Neural Machine Translation. Page 2. Springer Theses: Recognizing Outstanding Ph.D. Research Page 3. Aims and Scope The …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=KIOrDwAAQBAJ&oi=fnd&pg=PR5&dq=commoncrawl&ots=vy1Stpb4X-&sig=1d6kjXbtaE3McDjxvY7O9-JJQOk"]} +{"year":"2019","title":"Jointly Learning to Align and Translate with Transformer Models","authors":["S Garg, S Peitz, U Nallasamy, M Paulik - arXiv preprint arXiv:1909.02074, 2019","SGSPU Nallasamy, M Paulik"],"snippet":"… by Vilar et al. (2006). We use all available bilingual data (Europarl v7, Common Crawl corpus, News Commentary v13 and Rapid corpus of EU press releases) excluding the ParaCrawl corpus. We remove sentences longer …","url":["https://arxiv.org/pdf/1909.02074","https://www.researchgate.net/profile/Stephan_Peitz/publication/336996532_Jointly_Learning_to_Align_and_Translate_with_Transformer_Models/links/5ec41124458515626cb813b1/Jointly-Learning-to-Align-and-Translate-with-Transformer-Models.pdf"]} +{"year":"2019","title":"JParaCrawl: A Large Scale Web-Based English-Japanese Parallel Corpus","authors":["M Morishita, J Suzuki, M Nagata - arXiv preprint arXiv:1911.10668, 2019"],"snippet":"… To select the candidate domains, we first identified the language of all the Common Crawl text data by CLD26 and counted how much … Since the crawled data stored on Common Crawl may not contain the entire website or might …","url":["https://arxiv.org/pdf/1911.10668"]} +{"year":"2019","title":"KaWAT: A Word Analogy Task Dataset for Indonesian","authors":["K Kurniawan - arXiv preprint arXiv:1906.09912, 2019"],"snippet":"… We used fastText pretrained embeddings introduced in (Bojanowski et al., 2017) and (Grave et al., 2018), which have been trained on Indonesian Wikipedia and Indonesian Wikipedia plus Common Crawl data respectively. We …","url":["https://arxiv.org/pdf/1906.09912"]} +{"year":"2019","title":"Keyphrase Extraction from Scholarly Articles as Sequence Labeling using Contextualized Embeddings","authors":["D Sahrawat, D Mahata, M Kulkarni, H Zhang… - arXiv preprint arXiv …, 2019"],"snippet":"… and OpenAI GPT-2 (small, medium).
As a baseline, we also use 300 dimensional fixed embeddings from Glove2, Word2Vec3, and FastText4 (common-crawl, wiki-news). We also compare the proposed architecture against …","url":["https://arxiv.org/pdf/1910.08840"]} +{"year":"2019","title":"KiloGrams: Very Large N-Grams for Malware Classification","authors":["E Raff, W Fleming, R Zak, H Anderson, B Finlayson… - arXiv preprint arXiv …, 2019"],"snippet":"… Figure 1: Balanced Accuracy results (y-axis) on the Public PDF dataset as we increase the n-gram size (x-axis, log-scale), and alter the hashing stride s. Using a hashing-stride retains more …","url":["https://arxiv.org/pdf/1908.00200"]} +{"year":"2019","title":"KIT's Submission to the IWSLT 2019 Shared Task on Text Translation","authors":["F Schneider, A Waibel"],"snippet":"… We made use of all allowed data, which is broken down in table 1. The allowed parallel data from WMT consists of Commoncrawl, CzEng (which makes up the vast majority of the parallel training data), Europarl, news commentary and paracrawl …","url":["https://zenodo.eu/record/3525496/files/IWSLT2019_paper_30.pdf"]} +{"year":"2019","title":"Knowledge empowered prominent aspect extraction from product reviews","authors":["Z Luo, S Huang, KQ Zhu - Information Processing & Management, 2019"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0306457318305193"]} +{"year":"2019","title":"Knowledge Graph-Driven Conversational Agents","authors":["J Bockhorst, D Conathan, G Fung"],"snippet":"… We use a CNN with max pooling and pretrained Glove embeddings trained on the Common Crawl 840B dataset [6] [7]. By applying our CNN classifier as a straightforward 1-of-k document classification task, we are able to achieve …","url":["https://kr2ml.github.io/2019/papers/KR2ML_2019_paper_42.pdf"]} +{"year":"2019","title":"Knowledge-based Conversational Search","authors":["S Vakulenko - arXiv preprint arXiv:1912.06859, 2019"],"snippet":"Page 1. arXiv:1912.06859v1 [cs.IR] 14 Dec 2019 Page 2. Page 3. Knowledge-based Conversational Search DISSERTATION submitted in partial fulfillment of the requirements for the degree of Doktorin der Technischen Wissenschaften by Svitlana Vakulenko, MSc …","url":["https://arxiv.org/pdf/1912.06859"]} +{"year":"2019","title":"Kyoto University participation to the WMT 2019 news shared task","authors":["F Cromieres, S Kurohashi - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… Page 2. 164 3 Data preprocessing 3.1 Data used For bilingual data, we used the provided corpora: europarl (≈ 1.7M sentence pairs), common crawl (≈ 620k sentence pairs) and newscommentary (≈ 255k sentence pairs). We did not use the paracrawl corpus …","url":["https://www.aclweb.org/anthology/W19-5312"]} +{"year":"2019","title":"Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation","authors":["D Loureiro, A Jorge - arXiv preprint arXiv:1906.10007, 2019"],"snippet":"… tokens in the sentence. We choose fastText (Bojanowski et al., 2017) embeddings (pretrained on CommonCrawl), which are biased towards morphology, and avoid Out-of-Vocabulary issues as explained in §2.1.
We use fastText …","url":["https://arxiv.org/pdf/1906.10007"]} +{"year":"2019","title":"Language Models are Unsupervised Multitask Learners","authors":["A Radford, J Wu, R Child, D Luan, D Amodei…"],"snippet":"… A promising source of diverse and nearly unlimited text is web scrapes such as Common Crawl … Trinh & Le (2018) used Common Crawl in their work on commonsense reasoning but noted a large amount of documents “whose content are mostly unintelligible” …","url":["https://www.techbooky.com/wp-content/uploads/2019/02/Better-Language-Models-and-Their-Implications.pdf"]} +{"year":"2019","title":"Language Models with Pre-Trained (GloVe) Word Embeddings","authors":["L Rokach, B Shapira, V Makarenkov"],"snippet":"… Despite the huge size of the Common Crawl corpus, some words may not exist with the embeddings, so we set these words to random vectors, and use the same embeddings consistently if we encounter the same unseen word again in the text …","url":["https://deepai.org/publication/language-models-with-pre-trained-glove-word-embeddings"]} +{"year":"2019","title":"Large Memory Layers with Product Keys","authors":["G Lample, A Sablayrolles, MA Ranzato, L Denoyer… - arXiv preprint arXiv …, 2019","MA Ranzato, L Denoyer, H Jégou"],"snippet":"… Dataset: ▶ Extracted from the public Common Crawl. ▶ 40 million English news articles in training set, 5000 in validation and test set each. ▶ Did not shuffle sentences, allowing the model to learn …","url":["https://arxiv.org/pdf/1907.05242","https://pdfs.semanticscholar.org/3a54/100803474df3b98e54a1693010d12c9718b5.pdf"]} +{"year":"2019","title":"Large Scale Linguistic Processing of Tweets to Understand Social Interactions among Speakers of Less Resourced Languages: The Basque Case","authors":["J Fernandez de Landa, R Agerri, I Alegria - Information, 2019"],"snippet":"… resourced languages such as Basque. However, FastText provides pre-trained models for many languages, including Basque [33] by using the common crawl data (http://commoncrawl.org). The Basque model they distribute …","url":["https://www.mdpi.com/2078-2489/10/6/212/pdf"]} +{"year":"2019","title":"Last-Mile TLS Interception: Analysis and Observation of the Non-Public HTTPS Ecosystem","authors":["XC de Carnavalet - 2019"],"snippet":"… Last-Mile TLS Interception: Analysis and Observation of the Non-Public HTTPS Ecosystem. Xavier de Carné de Carnavalet. A thesis in The Concordia Institute for Information Systems Engineering Presented …","url":["http://users.encs.concordia.ca/~mmannan/student-resources/Thesis-PhD-Carnavalet-2019.pdf"]} +{"year":"2019","title":"Latent Question Interpretation Through Parameter Adaptation","authors":["T Parshakova, F Rameau, A Serdega, I Kweon, DS Kim - IEEE/ACM Transactions on …, 2019"],"snippet":"… A. Implementation Details: For the sake of reproducibility, we provide the technical details related to the implementation of our approach.
First of all, the initial word embeddings are initialized with GloVe embeddings, which …","url":["https://www.researchgate.net/profile/Francois_Rameau/publication/334633405_Latent_Question_Interpretation_Through_Parameter_Adaptation/links/5d37e05ca6fdcc370a5a3a43/Latent-Question-Interpretation-Through-Parameter-Adaptation.pdf"]} +{"year":"2019","title":"Laying the foundations for benchmarking open data automatically: a method for surveying data portals from the whole web","authors":["A Sheffer Correa, F Soares Correa Da Silva - 20th Annual International Conference …, 2019"],"snippet":"… KEYWORDS: Open Data, Common Crawl, CKAN, Socrata, ArcGIS, OpenDataSoft … Common Crawl conducts crawls once a month and persists all the content in Web Archive (WARC) file format to allow multibillion web page archives with hundreds of terabytes in size …","url":["https://dl.acm.org/citation.cfm?id=3325257"]} +{"year":"2019","title":"LCEval: Learned Composite Metric for Caption Evaluation","authors":["N Sharif, L White, M Bennamoun, W Liu, SAA Shah"],"snippet":"… Table 1: The details of pre-trained embeddings used in our experiments. Name Source Dimensions Corpus Corpus Size Vocabulary Size GloVe 840B 300d [40] 300 Common Crawl 8.40E+11 2.20E+06 Word2vec Google 300d [34] …","url":["https://www.researchgate.net/profile/Naeha_Sharif2/publication/334760575_LCEval_Learned_Composite_Metric_for_Caption_Evaluation/links/5d429677a6fdcc370a715269/LCEval-Learned-Composite-Metric-for-Caption-Evaluation.pdf"]} +{"year":"2019","title":"Learning as the Unsupervised Alignment of Conceptual Systems","authors":["BD Roads, BC Love - arXiv preprint arXiv:1906.09012, 2019"],"snippet":"… We found that alignment correlations positively correlated with mapping accuracy across a variety of scenarios (Figure 3A-C). The three conceptual systems were derived from a Common Crawl text corpus (Pennington et …","url":["https://arxiv.org/pdf/1906.09012"]} +{"year":"2019","title":"Learning from Personal Longitudinal Dialog Data","authors":["C Welch, V Pérez-Rosas, JK Kummerfeld, R Mihalcea…"],"snippet":"… Message Embeddings: We also obtain word vector representations for each message using the GloVe Common Crawl pre-trained model. We chose this word embedding over other off-the-shelf options because the Common …","url":["https://sentic.net/personal-longitudinal-dialog-data.pdf"]} +{"year":"2019","title":"Learning multilingual topics through aspect extraction from monolingual texts","authors":["J Huber, M Spiliopoulou - Proceedings of the Fifth International Workshop on …, 2019"],"snippet":"… Xu et al., 2018). It was trained on the CommonCrawl corpus, a general-purpose text corpus that includes text from several billion web pages; the GloVe embeddings were trained on 840 billion tokens.
The GloVe set includes …","url":["http://www.aclweb.org/anthology/W19-0313"]} +{"year":"2019","title":"Learning Outside the Box: Discourse-level Features Improve Metaphor Identification","authors":["J Mu, H Yannakoudakis, E Shutova - arXiv preprint arXiv:1904.02246, 2019"],"snippet":"… To learn representations, we use several widely-used embedding methods: GloVe: We use 300-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) trained on the Common Crawl corpus as representations of a lemma and its arguments …","url":["https://arxiv.org/pdf/1904.02246"]} +{"year":"2019","title":"Learning Relational Fractals for Deep Knowledge Graph Embedding in Online Social Networks","authors":["J Zhang, L Tan, X Tao, D Wang, JJC Ying, X Wang - International Conference on Web …, 2019"],"snippet":"… Our twitter dataset was live streamed from a twitter API account and contains a maximum of 1675882 nodes and 160799842 links. The Google dataset was obtained from the repositories of common crawl and was sentilyzed from the stripped down WET file contents …","url":["https://link.springer.com/chapter/10.1007/978-3-030-34223-4_42"]} +{"year":"2019","title":"Learning to Generate Personalized Product Descriptions","authors":["G Elad, I Guy, K Radinsky, S Novgorodov, B Kimelfeld - 2019"],"snippet":"… For the title representation, we used fastText word embeddings pre-trained on Common Crawl and Wikipedia [25, 33], weighted based on each word's TF-IDF score [4]. In addition, we included as features the participant's demo …","url":["http://www.kiraradinsky.com/files/Learning_to_Generate_Personalized_Product_Descriptions.pdf"]} +{"year":"2019","title":"Learning to Speak and Act in a Fantasy Text Adventure Game","authors":["J Urbanek, A Fan, S Karamcheti, S Jain, S Humeau… - arXiv preprint arXiv …, 2019","JUA Fan, SKSJS Humeau, EDT Rocktäschel…"],"snippet":"… Learning to Speak and Act in a Fantasy Text Adventure Game. Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, Jason …","url":["https://arxiv.org/pdf/1903.03094","https://research.fb.com/wp-content/uploads/2019/11/Learning-to-Speak-and-Act-in-a-Fantasy-Text-Adventure-Game.pdf"]} +{"year":"2019","title":"Learning Word Ratings for Empathy and Distress from Document-Level User Responses","authors":["J Sedoc, S Buechel, Y Nachmany, A Buffone, L Ungar - arXiv preprint arXiv …, 2019"],"snippet":"… (2013) using 10-fold cross-validation. For word embeddings we used off-the-shelf Fasttext subword embeddings (Mikolov et al., 2018). The embeddings are trained with subword information on Common Crawl (600B tokens) …","url":["https://arxiv.org/pdf/1912.01079"]} +{"year":"2019","title":"Leveraging Distributional and Relational Semantics for Knowledge Extraction from Textual Corpora","authors":["G ROSSIELLO, G SEMERARO, M DI CIANO - 2019"],"snippet":"…","url":["https://www.researchgate.net/profile/Gaetano_Rossiello/publication/333448156_Leveraging_Distributional_and_Relational_Semantics_for_Knowledge_Extraction_from_Textual_Corpora/links/5cee4fcca6fdcc18c8e9913b/Leveraging-Distributional-and-Relational-Semantics-for-Knowledge-Extraction-from-Textual-Corpora.pdf"]} +{"year":"2019","title":"Leveraging End-to-End Speech Recognition with Neural Architecture Search","authors":["A Baruwa, M Abisiga, I Gbadegesin, A Fakunle - arXiv preprint arXiv:1912.05946, 2019"],"snippet":"… We train a 3-gram, 5-gram and a 7-gram language model on common crawl.
The relative performances are summarised in tables 1 and 2. Decoding is done by beam-searching for the output y that maximizes φ(c) given by …","url":["https://arxiv.org/pdf/1912.05946"]} +{"year":"2019","title":"Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text","authors":["O Feyisetan, T Diethe, T Drake - arXiv preprint arXiv:1910.08917, 2019"],"snippet":"… Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text. Oluwaseyi Feyisetan, Amazon, sey@amazon.com; Tom Diethe, Amazon, tdiethe@amazon.co.uk; Thomas Drake, Amazon, draket@amazon.com …","url":["https://arxiv.org/pdf/1910.08917"]} +{"year":"2019","title":"Leveraging Pretrained Image Classifiers for Language-Based Segmentation","authors":["D Golub, R Martín-Martín, A El-Kishky, S Savarese - arXiv preprint arXiv:1911.00830, 2019"],"snippet":"… With Word2Vec we first embed the target labels l and the labels in the set of possible proxy labels in a shared vector space using 300-dimensional GloVe embeddings [29] trained on the Common Crawl 840B word corpus. For labels that contain multiple words …","url":["https://arxiv.org/pdf/1911.00830"]} +{"year":"2019","title":"Leveraging Unpaired Out-of-Domain Data for Image Captioning","authors":["X Chen, M Zhang, Z Wang, L Zuo, B Li, Y Yang - Pattern Recognition Letters, 2018"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0167865518309358"]} +{"year":"2019","title":"Leveraging Web Semantic Knowledge in Word Representation Learning","authors":["H Liu, L Fang, JG Lou, Z Li - 2019"],"snippet":"… We extract a large collection of semantic lists from the Common Crawl data using the patterns defined in Table 1 and filter out entries that do not exist in the vocabulary of the training data … http://dumps.wikimedia.org/enwiki/ http://commoncrawl.org/ …","url":["https://www.aaai.org/Papers/AAAI/2019/AAAI-LiuHaoyan.142.pdf"]} +{"year":"2019","title":"Limsi-multisem at the ijcai semdeep-5 wic challenge: Context representations for word usage similarity estimation","authors":["AG Soler, M Apidianaki, A Allauzen - Proceedings of the 5th Workshop on Semantic …, 2019"],"snippet":"… Dimensionality reduction is applied to a weighted average of the vectors of words in a sentence. Weighting is based on word frequency in Common Crawl. We use SIF in combination with 300-d GloVe vectors trained …","url":["https://www.aclweb.org/anthology/W19-5802"]} +{"year":"2019","title":"Lingua Custodia at WMT'19: Attempts to Control Terminology","authors":["F Burlot - arXiv preprint arXiv:1907.04618, 2019"],"snippet":"… to the decoder. 2 Baseline: The training parallel data provided for the task consisted of nearly 10M sentences, including Europarl (Koehn, 2005), Common-crawl, Newscommentary and Bicleaner07. The former was …","url":["https://arxiv.org/pdf/1907.04618"]} +{"year":"2019","title":"Linked Open Data Validity--A Technical Report from ISWS 2018","authors":["TA Ghor, E Agrawal, M Alam, O Alqawasmeh… - arXiv preprint arXiv …, 2019"],"snippet":"… Linked Open Data Validity: A Technical Report from ISWS 2018. April 1, 2019, Bertinoro, Italy.
Authors and Main Editors: Mehwish Alam, Semantic Technology Lab, ISTC-CNR …","url":["https://arxiv.org/pdf/1903.12554"]} +{"year":"2019","title":"Linking artificial and human neural representations of language","authors":["J Gauthier, R Levy - arXiv preprint arXiv:1910.01244, 2019"],"snippet":"… contrasts between the 384 sentences tested. We use publicly available GloVe vectors computed on Common Crawl, available in the spaCy toolkit as en_vectors_web_lg. 3 Results: We first present the performance of …","url":["https://arxiv.org/pdf/1910.01244"]} +{"year":"2019","title":"LINSPECTOR: Multilingual Probing Tasks for Word Representations","authors":["GG Şahin, C Vania, I Kuznetsov, I Gurevych - arXiv preprint arXiv:1903.09442, 2019"],"snippet":"… LINSPECTOR: Multilingual Probing Tasks for Word Representations. Gözde Gül Şahin (UKP Lab / TU Darmstadt), Clara Vania (ILCC / University of Edinburgh), Ilia Kuznetsov (UKP Lab / TU Darmstadt), Iryna Gurevych (UKP Lab / TU Darmstadt) …","url":["https://arxiv.org/pdf/1903.09442"]} +{"year":"2019","title":"LIUM's Contributions to the WMT2019 News Translation Task: Data and Systems for German-French Language Pairs","authors":["F Bougares, J Wottawa, A Baillot, L Barrault, A Bardet - … 2: Shared Task Papers, Day 1 …, 2019"],"snippet":"… As it can be seen from tables 1 and 2, the effect of the cleaning step is more pronounced for the noisy parallel corpora (ie ParaCrawl and Common Crawl) … #lines #token FR #token DE europarl-v7 1.7M 45.9M 40.9 …","url":["https://www.aclweb.org/anthology/W19-5307"]} +{"year":"2019","title":"Local bow-tie structure of the web","authors":["Y Fujita, Y Kichikawa, Y Fujiwara, W Souma, H Iyetomi - Applied Network Science, 2019"],"snippet":"… This fact implies the absence of self-similarity between the page level and host/domain levels. Meusel et al. (2014, 2015) investigated the publicly accessible crawl of the web gathered by the Common Crawl Foundation in 2012 (CC12) (Meusel et al. 2014; 2015) …","url":["https://link.springer.com/article/10.1007/s41109-019-0127-2"]} +{"year":"2019","title":"Logical Layout Analysis using Deep Learning","authors":["A Zulfiqar, A Ul-Hasan, F Shafait"],"snippet":"… of the text zones. GloVe provides 300 dimensional vectors, one vector for each word. We have used the one trained on common crawl having 840 billion tokens and vectors for a total of 2.2 million words. Since we also want …","url":["https://tukl.seecs.nust.edu.pk/members/projects/conference/Logical-Layout-Analysis-using-Deep-Learning.pdf"]} +{"year":"2019","title":"Longitudinal Analysis of Misuse of Bitcoin","authors":["K Eldefrawy, A Gehani, A Matton"],"snippet":"… its labels). Seed data was used from previously published onion data sets, references to onions in a large collection of DNS resolver logs, and an open repository of (non-onion) web crawl data, called the Common Crawl. The …","url":["http://www.csl.sri.com/users/gehani/papers/ACNS-2019.Bitcoin_Study.pdf"]} +{"year":"2019","title":"Look Who's Talking: Inferring Speaker Attributes from Personal Longitudinal Dialog","authors":["C Welch, V Pérez-Rosas, JK Kummerfeld, R Mihalcea - arXiv preprint arXiv …, 2019"],"snippet":"… The word embedding inputs to the context encoder are 300 dimensional. Word Embeddings: We obtain word vector representations for each message using the GloVe Common Crawl pre-trained model [12].
We …","url":["https://arxiv.org/pdf/1904.11610"]} +{"year":"2019","title":"Low Resource Sequence Tagging with Weak Labels","authors":["E Simpson, J Pfeiffer, I Gurevych"],"snippet":"… For FAMULUS, we use 300-dimensional German fastText embeddings (Grave et al. 2018), and for NER and PICO we use 300-dimensional English GloVe embeddings trained on 840 billion tokens from Common Crawl. To …","url":["https://public.ukp.informatik.tu-darmstadt.de/UKP_Webpage/publications/2020/2020_AAAI_SE_LowResourceSequence.pdf"]} +{"year":"2019","title":"Low Supervision, Low Corpus size, Low Similarity! Challenges in cross-lingual alignment of word embeddings: An exploration of the limitations of cross-lingual word …","authors":["A Dyer - 2019"],"snippet":"… Low Supervision, Low Corpus size, Low Similarity! Challenges in cross-lingual alignment of word embeddings: An exploration of the limitations of cross-lingual word embedding alignment in truly low resource scenarios. Andrew Dyer …","url":["http://www.diva-portal.org/smash/get/diva2:1365879/FULLTEXT01.pdf"]} +{"year":"2019","title":"LSTM for Dialogue Breakdown Detection: Exploration of Different Model Types and Word Embeddings","authors":["M Hendriksen, A Leeuwenberg, MF Moens"],"snippet":"… The words are uncased. GloVe Common Crawl … The results presented in Table 2 allow us to conclude that GloVe Common Crawl demonstrates the best performance, GloVe Twitter being the second best, and word2vec Google News the worst …","url":["http://workshop.colips.org/wochat/@iwsds2019/documents/dbdc4-mariya-hendriksen-etal.pdf"]} +{"year":"2019","title":"LTL-UDE at SemEval-2019 Task 6: BERT and Two-Vote Classification for Categorizing Offensiveness","authors":["P Aggarwal, T Horsmann, M Wojatzki, T Zesch - … of the 13th International Workshop on …, 2019"],"snippet":"… word representations. The resulting posting vector is re-scaled into the range zero to one. We use the pre-trained embeddings provided by Mikolov et al. (2018), which are trained on the common crawl corpus. Classifiers: We …","url":["https://www.aclweb.org/anthology/S19-2121"]} +{"year":"2019","title":"ltl.uni-due at SemEval-2019 Task 5: Simple but Effective Lexico-Semantic Features for Detecting Hate Speech in Twitter","authors":["H Zhang, M Wojatzki, T Horsmann, T Zesch - … of the 13th International Workshop on …, 2019"],"snippet":"… of LSTMs and CNNs (LSTM + CNN). We initialize all setups with the 300-dimensional word embeddings provided by Mikolov et al. (2018), which were trained on the common crawl corpus. Furthermore, in all setups, we use …","url":["https://www.aclweb.org/anthology/S19-2078"]} +{"year":"2019","title":"Machine Reading of Clinical Notes for Automated ICD Coding","authors":["M Morisio, S Malacrino"],"snippet":"… Master degree course in Computer Engineering. Master Degree Thesis: Machine Reading of Clinical Notes for Automated ICD Coding. Supervisor: Prof.
Maurizio Morisio. Candidate: Stefano Malacrinò. Internship tutors …","url":["https://webthesis.biblio.polito.it/10958/1/tesi.pdf"]} +{"year":"2019","title":"Machine Translation of Restaurant Reviews: New Corpus for Domain Adaptation and Robustness","authors":["A Bérard, I Calapodescu, M Dymetman, C Roux… - arXiv preprint arXiv …, 2019"],"snippet":"… data, we built a new training corpus named UGC (User Generated Content), closer to our domain, by combining: Multi UN, OpenSubtitles, Wikipedia, Books, Tatoeba, TED talks, ParaCrawl and Gourmet (See Table 3) …","url":["https://arxiv.org/pdf/1910.14589"]} +{"year":"2019","title":"Mapping languages and demographics with georeferenced corpora","authors":["J Dunn, B Adams - 2019"],"snippet":"… To answer this question, we collect and analyze two large global-scale datasets: web-crawled data from the Common Crawl (16.65 billion words) and social media data from Twitter (4.14 billion words). This paper evaluates demographic-type informa …","url":["https://ir.canterbury.ac.nz/bitstream/handle/10092/17132/GeoComputation_19.pdf?sequence=2"]} +{"year":"2019","title":"Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yorùbá and Twi","authors":["JO Alabi, K Amponsah-Kaakyire, DI Adelani… - arXiv preprint arXiv …, 2019"],"snippet":"… The resource par excellence is Wikipedia, an online encyclopedia currently available in 307 languages. Other initiatives such as Common Crawl or the Jehovah's Witnesses site are also repositories for multilingual …","url":["https://arxiv.org/pdf/1912.02481"]} +{"year":"2019","title":"Massively multilingual transfer for NER","authors":["A Rahimi, Y Li, T Cohn - Proceedings of the 57th Conference of the Association …, 2019"],"snippet":"… Massively Multilingual Transfer for NER …","url":["https://www.aclweb.org/anthology/P19-1015"]} +{"year":"2019","title":"MASTER UNIVERSITARIO EN INGENIERÍA DE TELECOMUNICACION","authors":["DB SANCHEZ - 2019"],"snippet":"… MÁSTER UNIVERSITARIO EN INGENIERÍA DE TELECOMUNICACIÓN. TRABAJO FIN DE MÁSTER: DESIGN AND DEVELOPMENT OF A HATE SPEECH DETECTOR IN SOCIAL NETWORKS BASED ON DEEP LEARNING TECHNOLOGIES …","url":["http://oa.upm.es/55618/1/TESIS_MASTER_DIEGO_BENITO_SANCHEZ_2019.pdf"]} +{"year":"2019","title":"Measuring Gender Bias in Word Embeddings across Domains and Discovering New Gender Bias Word Categories","authors":["K Chaloner, A Maldonado - Proceedings of the First Workshop on Gender Bias in …, 2019"],"snippet":"… WEAT's authors applied these tests to the publicly-available GloVe embeddings trained on the English-language “Common Crawl” corpus (Pennington et al., 2014) as well as the Skip-Gram (word2vec) embeddings …","url":["https://www.aclweb.org/anthology/W19-3804"]} +{"year":"2019","title":"Medical Word Embeddings for Spanish: Development and Evaluation","authors":["F Soares, M Villegas, A Gonzalez-Agirre, M Krallinger… - Proceedings of the 2nd …, 2019"],"snippet":"… makes available Word2Vec models pre-trained on about 100 billion words from Google News corpus in English.
Regarding other languages, on the FastText website one can download pre-trained embeddings for 157 lan …","url":["https://www.aclweb.org/anthology/W19-1916"]} +{"year":"2019","title":"Meemi: Finding the Middle Ground in Cross-lingual Word Embeddings","authors":["Y Doval, J Camacho-Collados, L Espinosa-Anke… - arXiv preprint arXiv …, 2019"],"snippet":"… the WaCky project [23], containing 2 and 0.8 billion words, respectively. For Finnish and Russian, we use their corresponding Common Crawl monolingual corpora from the Machine Translation of News Shared Task 2016, composed of …","url":["https://arxiv.org/pdf/1910.07221"]} +{"year":"2019","title":"Membership Inference Attacks on Sequence-to-Sequence Models","authors":["S Hisamoto, M Post, K Duh - arXiv preprint arXiv:1904.05506, 2019"],"snippet":"… For example, e_i^{(d)} with d = l_1 and i = 1 might refer to the first sentence in the Europarl subcorpus, while e_i^{(d)} with d = l_2 and i = 1 might refer to the first sentence in the CommonCrawl subcorpus … CommonCrawl 5,000 5,000 2,389,123 2,379,123 N/A …","url":["https://arxiv.org/pdf/1904.05506"]} +{"year":"2019","title":"Metaphor Interpretation Using Word Embeddings","authors":["K Bar, N Dershowitz, L Dankin"],"snippet":"… relatively large corpus. Specifically, we use DepCC, a dependency-parsed “web-scale corpus” based on CommonCrawl. There are 365 million documents in the corpus, comprising about 252B tokens. Among other preprocessing …","url":["https://pdfs.semanticscholar.org/2033/a3f7b8b53ea277a811ac450139422793b08b.pdf"]} +{"year":"2019","title":"Methods and apparatus for detection of malicious documents using machine learning","authors":["JD Saxe, R HARANG - US Patent App. 16/257,749, 2019"],"snippet":"… decision tree, etc.). The memory 120 includes one or more datasets 112 (e.g., a VirusTotal dataset and/or a Common Crawl dataset, as described in further detail below) and one or more training models 124. The malware detection …","url":["https://patentimages.storage.googleapis.com/fa/f8/d7/5843fb31e01d95/US20190236273A1.pdf"]} +{"year":"2019","title":"Microsoft Research Asia's Systems for WMT19","authors":["Y Xia, X Tan, F Tian, F Gao, W Chen, Y Fan, L Gong…"],"snippet":"… Dataset: We concatenate “Europarl v9”, “News Commentary v14”, “Common Crawl corpus” and “Document-split Rapid corpus” as the basic bilingual … We merge the “commoncrawl”, “europarl-v7” and part of “de-fr.bicleaner07” …","url":["http://www.statmt.org/wmt19/pdf/WMT0048.pdf"]} +{"year":"2019","title":"MIDAS: A Dialog Act Annotation Scheme for Open Domain Human Machine Spoken Conversations","authors":["D Yu, Z Yu - arXiv preprint arXiv:1908.10023, 2019"],"snippet":"… An example can be seen in the last USER2 utterance in Table 1. Word embeddings are pre-trained with fastText (Mikolov et al., 2018) using Common Crawl.
We evaluate the segmentation model on 2K human-labeled utterances of collected data …","url":["https://arxiv.org/pdf/1908.10023"]} +{"year":"2019","title":"Mining Discourse Markers for Unsupervised Sentence Representation Learning","authors":["D Sileo, T Van-De-Cruys, C Pradel, P Muller - arXiv preprint arXiv:1903.11850, 2019"],"snippet":"… We use sentences from the Depcc corpus (Panchenko et al., 2017), which consists of English texts harvested from commoncrawl web data … Word embeddings are fixed GloVe embeddings with 300 dimensions, trained …","url":["https://arxiv.org/pdf/1903.11850"]} +{"year":"2019","title":"Mix-review: Alleviate Forgetting in the Pretrain-Finetune Framework for Neural Language Generation Models","authors":["T He, J Liu, K Cho, M Ott, B Liu, J Glass, F Peng - arXiv preprint arXiv:1910.07117, 2019"],"snippet":"… For pre-training, we use the large-scale CCNEWS data (Bakhtin et al., 2019) which is a de-duplicated subset of the English portion of the CommonCrawl news data-set. The dataset contains news articles published worldwide …","url":["https://arxiv.org/pdf/1910.07117"]} +{"year":"2019","title":"MLT-DFKI at CLEF eHealth 2019: Multi-label Classification of ICD-10 Codes with BERT","authors":["S Amin, G Neumann, K Dunfield, A Vechkaeva… - CLEF (Working Notes), 2019"],"snippet":"… have stronger linguistic signals to classify the classes where German models make mistakes [1]. The baseline proved to be a strong one, with the highest precision of all and outperforming HAN and CNN models, for both German …","url":["https://www.researchgate.net/profile/Saadullah_Amin2/publication/335681972_MLT-DFKI_at_CLEF_eHealth_2019_Multi-label_Classification_of_ICD-10_Codes_with_BERT/links/5d742a00299bf1cb809043cd/MLT-DFKI-at-CLEF-eHealth-2019-Multi-label-Classification-of-ICD-10-Codes-with-BERT.pdf"]} +{"year":"2019","title":"Mono- and Cross-lingual Semantic Word Similarity for Urdu Language","authors":["G Fatima - 2019"],"snippet":"… Mono- and Cross-lingual Semantic Word Similarity for Urdu Language. By Ghazeefa Fatima, CIIT/FA17-RCS-016/LHR. MS Thesis in Computer Science, COMSATS University Islamabad, Lahore Campus …","url":["http://dspace.cuilahore.edu.pk/xmlui/bitstream/handle/123456789/1571/Thesis.pdf?sequence=1"]} +{"year":"2019","title":"MoRTy: Unsupervised Learning of Task-specialized Word Embeddings by Autoencoding","authors":["N Rethmeier, B Plank - Proceedings of the 4th Workshop on Representation …, 2019"],"snippet":"… Hence, we demonstrate the method's application for single-task, multi-task, small, medium and web-scale (common crawl) corpus-size settings (Section 4). Learning to scale-up by pretraining on more (un-)labeled data is both: (a) not always possible in low-resource …","url":["https://www.aclweb.org/anthology/W19-4307"]} +{"year":"2019","title":"Multi-class Document Classification Using Improved Word Embeddings","authors":["BA Rabut, AC Fajardo, RP Medina - Proceedings of the 2nd International Conference …, 2019"],"snippet":"… Common crawl)[7].
The pre-trained word embedding vectors serve as input in the classification algorithm for evaluation and prediction …","url":["https://dl.acm.org/citation.cfm?id=3366661"]} +{"year":"2019","title":"Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering","authors":["L Zhou, K Small - arXiv preprint arXiv:1911.06192, 2019"],"snippet":"… For experiments with GloVe embeddings, we use GloVe embeddings pre-trained on Common Crawl dataset. The dimension of GloVe embeddings is 300, and the dimension of character-level embeddings is 100, such that Dw = 400 …","url":["https://arxiv.org/pdf/1911.06192"]} +{"year":"2019","title":"Multi-Granular Text Encoding for Self-Explaining Categorization","authors":["Z Wang, Y Zhang, M Yu, W Zhang, L Pan, L Song, K Xu… - arXiv preprint arXiv …, 2019"],"snippet":"… for each set. Hyperparameters: We use the 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus (Pennington et al., 2014), and set the hidden size as 100 for node embeddings. We apply dropout …","url":["https://arxiv.org/pdf/1907.08532"]} +{"year":"2019","title":"Multi-Hop Paragraph Retrieval for Open-Domain Question Answering","authors":["Y Feldman, R El-Yaniv - arXiv preprint arXiv:1906.06606, 2019"],"snippet":"… Multi-Hop Paragraph Retrieval for Open-Domain Question Answering. Yair Feldman and Ran El-Yaniv, Department of Computer Science, Technion – Israel Institute of Technology, Haifa, Israel. {yairf11, rani}@cs.technion.ac.il …","url":["https://arxiv.org/pdf/1906.06606"]} +{"year":"2019","title":"Multi-Resolution Models for Learning Multilevel Abstract Representation with Application to Information Retrieval","authors":["T Cakaloglu - 2019"],"snippet":"… MULTI-RESOLUTION MODELS FOR LEARNING MULTILEVEL ABSTRACT REPRESENTATION WITH APPLICATION TO INFORMATION RETRIEVAL. A Dissertation Submitted to the Graduate School, University of Arkansas at Little Rock …","url":["http://search.proquest.com/openview/4bce4201a6d742c4c771e08b17dec0cb/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2019","title":"Multi-Team: A Multi-attention, Multi-decoder Approach to Morphological Analysis.","authors":["A Üstün, R van der Goot, G Bouma, G van Noord"],"snippet":"… 2018). For FastText, two sets of pre-trained embeddings are available: one is trained only on Wikipedia (Bojanowski et al., 2017), whereas the newer versions are also trained on CommonCrawl (Grave et al., 2018). Whenever …","url":["http://www.robvandergoot.com/doc/sigmorphon2019.pdf"]} +{"year":"2019","title":"Multilingual Culture-Independent Word Analogy Datasets","authors":["M Ulčar, M Robnik-Šikonja - arXiv preprint arXiv:1911.10038, 2019"],"snippet":"… language is shown in Table 6. Table 6: Percentage of constructed analogy pairs covered by the first 200,000 word vectors from common crawl fastText embeddings. Language Coverage (%) Croatian 81.67 English 97.05 …","url":["https://arxiv.org/pdf/1911.10038"]} +{"year":"2019","title":"Multilingual Fake News Detection with Satire","authors":["G Guibon, L Ermakova, H Seffih, A Firsov…"],"snippet":"… Detection of Deception. Non-verbal communication (2014), https://nvc.uvt.nl/pdf/7.pdf 6. Bevendorff, J., Stein, B., Hagen, M., Potthast, M.: Elastic chatnoir: Search engine for the clueweb and the common crawl.
In: Pasi, G., Piwowarski …","url":["https://www.researchgate.net/profile/Guillaume_Le_Noe-Bienvenu/publication/332803834_Multilingual_Fake_News_Detection_with_Satire_on_Vaccination_Topic/links/5d24917a458515c11c1f8724/Multilingual-Fake-News-Detection-with-Satire-on-Vaccination-Topic.pdf"]} +{"year":"2019","title":"Multilingual is not enough: BERT for Finnish","authors":["A Virtanen, J Kanerva, R Ilo, J Luoma, J Luotolahti… - arXiv preprint arXiv …, 2019"],"snippet":"… Second, we selected texts from the Common Crawl project by running a map-reduce language detection job on the plain text material from Common Crawl. These sources were supplemented with plain text extracted …","url":["https://arxiv.org/pdf/1912.07076"]} +{"year":"2019","title":"Multilingual Sentence-Level Bias Detection in Wikipedia","authors":["D Aleksandrova, F Lareau, PA Ménard"],"snippet":"… Same BOW n-gram size and BOW size and value type as SGD. Available for 157 languages, pretrained on Common Crawl and Wikipedia (Grave et al., 2018) https://fasttext.cc/docs/en/crawl-vectors.html Version 0.21.2 of the sklearn toolkit …","url":["https://www.researchgate.net/profile/Desislava_Aleksandrova/publication/334612399_Multilingual_Sentence-Level_Bias_Detection_in_Wikipedia/links/5d5bd0c392851c37636bfdf2/Multilingual-Sentence-Level-Bias-Detection-in-Wikipedia.pdf"]} +{"year":"2019","title":"Multimodal deep networks for text and image-based document classification","authors":["N Audebert, C Herold, K Slimani, C Vidal - APIA"],"snippet":"… For both methods, we use the SpaCy small English model [33] to perform the tokenization and punctuation removal. Individual word embeddings are then inferred using FastText [29] pretrained on the Common Crawl dataset …","url":["https://www.irit.fr/pfia2019/wp-content/uploads/2019/07/Actes_CH_PFIA2019.pdf#page=14"]} +{"year":"2019","title":"Multimodal Machine Translation with Embedding Prediction","authors":["T Hirasawa, H Yamagishi, Y Matsumura, M Komachi - arXiv preprint arXiv …, 2019"],"snippet":"… model. “+ pretrained” models are initialized with pretrained embeddings … These word embeddings are trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300. The embedding …","url":["https://arxiv.org/pdf/1904.00639"]} +{"year":"2019","title":"Multimodal Sentiment Analysis Using Deep Learning","authors":["R Sharma, N Le Tan, F Sadat - 2018 17th IEEE International Conference on Machine …, 2018"],"snippet":"… For the CNN model we used pre-trained word embeddings (GloVe 840B.300d). This is a 300-dimensional word embedding trained on 840 billion tokens from the common crawl dataset. The maximum sequence length is 200 …","url":["https://ieeexplore.ieee.org/abstract/document/8614265/"]} +{"year":"2019","title":"Named entity recognition for Polish","authors":["M Marcińczuk, A Wawer - Poznan Studies in Contemporary Linguistics, 2019"],"snippet":"Abstract: In this article we discuss the current state-of-the-art for named entity recognition for Polish. We present publicly available resources and open-source tools for named entity recognition.
The overview includes various …","url":["https://www.degruyter.com/view/j/psicl.2019.55.issue-2/psicl-2019-0010/psicl-2019-0010.xml"]} +{"year":"2019","title":"Named Entity Recognition for Social Media Text","authors":["Y Zhang - 2019"],"snippet":"… We use two different pre-trained word embeddings based on Common Crawl data, which contains 840 billion tokens and 2.2 million vocabulary and Twitter data which contains 2 billion tweets, 27 billion tokens, and 1.2 million vocabulary …","url":["https://uu.diva-portal.org/smash/get/diva2:1366031/FULLTEXT01.pdf"]} +{"year":"2019","title":"Named Entity Recognition Using Gazetteer of Hierarchical Entities","authors":["M Štravs, J Zupančič - … Conference on Industrial, Engineering and Other …, 2019"],"snippet":"… To summarize, the proposed entity recognition method was tested using two languages (Slovenian and English), six different distance measures, and two different vector embeddings from Wikipedia (Wiki WV) and Common Crawl (CC WV) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-22999-3_65"]} +{"year":"2019","title":"Named-entity recognition in Czech historical texts: Using a CNN-BiLSTM neural network model","authors":["H Hubková - 2019"],"snippet":"… We also tried to work with published pretrained word embeddings of contemporary Czech words provided by fastText. These were trained on more than 178 million tokens from Wikipedia and 13 billion tokens based on common crawl (Grave et al., 2018) …","url":["http://www.diva-portal.org/smash/get/diva2:1325355/FULLTEXT01.pdf"]} +{"year":"2019","title":"Natural Language Processing for Book Recommender Systems","authors":["H Alharthi - 2019"],"snippet":"… Natural Language Processing for Book Recommender Systems, by Haifa Alharthi. Thesis submitted in partial fulfillment of the requirements for the PhD degree in Computer Science, School of Electrical Engineering and Computer Science, Faculty of Engineering …","url":["https://www.ruor.uottawa.ca/bitstream/10393/39134/1/Alharthi_Haifa_2019_thesis.pdf"]} +{"year":"2019","title":"Natural language processing using context-specific word vectors","authors":["B McCann, C Xiong, R Socher - US Patent App. 15/982,841, 2018"],"snippet":"… in the second language. In some examples, training of an MT-LSTM of the encoder 310 uses fixed 300-dimensional word vectors, such as the CommonCrawl-840B GloVe model for English word vectors. These word vectors …","url":["https://patentimages.storage.googleapis.com/49/87/1a/0d4e316e8e4194/US20180373682A1.pdf"]} +{"year":"2019","title":"Naver Labs Europe's Systems for the WMT19 Machine Translation Robustness Task","authors":["A Bérard, I Calapodescu, C Roux - arXiv preprint arXiv:1907.06488, 2019"],"snippet":"… 3.1 Pre-processing, CommonCrawl filtering: We first spent efforts on filtering and cleaning the WMT data (in particular CommonCrawl) … We filtered CommonCrawl as follows: we trained a baseline FR→EN model on WMT without …","url":["https://arxiv.org/pdf/1907.06488"]} +{"year":"2019","title":"Nested Variational Autoencoder for Topic Modeling on Microtexts with Word Vectors","authors":["T Trinh, T Quan, T Mai - arXiv preprint arXiv:1905.00195, 2019"],"snippet":"…
Nested Variational Autoencoder for Topic Modeling on Microtexts with Word Vectors. Trung Trinh · Tho Quan · Trung Mai …","url":["https://arxiv.org/pdf/1905.00195"]} +{"year":"2019","title":"NeuMorph: Neural Morphological Tagging for Low-Resource Languages—An Experimental Study for Indic Languages","authors":["A Chakrabarty, A Chaturvedi, U Garain - ACM Transactions on Asian and Low …, 2019"],"snippet":"… NeuMorph: Neural Morphological Tagging for Low-Resource Languages—An Experimental Study for Indic Languages. ABHISEK CHAKRABARTY, AKSHAY CHATURVEDI, and UTPAL GARAIN, Indian Statistical Institute, India …","url":["https://dl.acm.org/citation.cfm?id=3342354"]} +{"year":"2019","title":"Neural Conversation Recommendation with Online Interaction Modeling","authors":["X Zeng, J Li, L Wang, KF Wong"],"snippet":"… Neural Conversation Recommendation with Online Interaction Modeling. Xingshan Zeng, Jing Li, Lu Wang, Kam-Fai Wong. The Chinese University of Hong Kong, Hong Kong, China; MoE Key Laboratory …","url":["https://www.ccs.neu.edu/home/luwang/papers/EMNLP2019_zeng_li_wang_wong.pdf"]} +{"year":"2019","title":"Neural Facet Detection on Medical Resources","authors":["T Steffek - 2019"],"snippet":"… Neural Facet Detection on Medical Resources. Thomas Steffek, April 2, 2019. Beuth Hochschule für Technik, Fachbereich VI - Informatik und Medien, Database Systems and Text-based Information Systems (DATEXIS). Bachelor's thesis …","url":["https://prof.beuth-hochschule.de/fileadmin/prof/aloeser/Bachelorarbeit_Thomas-Steffek_with-title-page-1.1.pdf"]} +{"year":"2019","title":"Neural Feature Extraction for Contextual Emotion Detection","authors":["E Mohammadi, H Amini, L Kosseim"],"snippet":"… pretrained word embeddings. As the first word embedder, we chose GloVe (Pennington et al., 2014), which is pretrained on 840B tokens of web data from Common Crawl, and provides 300d vectors as word embeddings. As our sec …","url":["https://www.researchgate.net/profile/Hessam_Amini/publication/335704122_Neural_Feature_Extraction_for_Contextual_Emotion_Detection/links/5d76d6764585151ee4ab0908/Neural-Feature-Extraction-for-Contextual-Emotion-Detection.pdf"]} +{"year":"2019","title":"Neural Grammatical Error Correction by Simulating the Human Learner and the Human Proofreader","authors":["F Gaim, JW Chung, JC Park - 한국정보과학회 학술발표논문집, 2018"],"snippet":"… For this and the contrastive learning, we use a large 5-gram language model trained on the Common Crawl data [8]. Training and Decoding: To effectively handle out-of-vocabulary words, we use sub-word level tokenization and …","url":["http://www.dbpia.co.kr/Journal/ArticleDetail/NODE07613671"]} +{"year":"2019","title":"Neural network learning engine","authors":["CM Ormerod - US Patent App.
16/286,566, 2019"],"snippet":"… skill and not to limit the invention to any one embodiment, commercial word embedding tools can include Google News word embedding, which has been trained on an extensive corpus of news items, and/or GloVe word …","url":["https://patentimages.storage.googleapis.com/94/fd/43/d4a3cbb7706fec/US20190266234A1.pdf"]} +{"year":"2019","title":"Neural network-based approaches for biomedical relation classification: A review","authors":["Y Zhang, H Lin, Z Yang, J Wang, Y Sun, B Xu, Z Zhao - Journal of Biomedical …, 2019"],"snippet":"… Word2vec, Google news, https://code.google.com/archive/p/word2vec. GloVe, Wikipedia, Gigaword, Common Crawl, Twitter, https://nlp.stanford.edu/projects/glove. fastText, Wikipedia, UMBC corpus, news corpus …","url":["https://www.sciencedirect.com/science/article/pii/S1532046419302138"]} +{"year":"2019","title":"Neural NLP models under low-supervision scenarios","authors":["Y Zhang - 2019"],"snippet":"… The Dissertation Committee for Ye Zhang certifies that this is the approved version of the following dissertation: Neural NLP Models Under Low-supervision Scenarios. Committee: Matthew A Lease, Supervisor …","url":["https://repositories.lib.utexas.edu/bitstream/handle/2152/75032/ZHANG-DISSERTATION-2019.pdf?sequence=1"]} +{"year":"2019","title":"Neural Text Style Transfer via Denoising and Reranking","authors":["J Lee, Z Xie, C Wang, M Drach, D Jurafsky, AY Ng - … of the Workshop on Methods for …, 2019"],"snippet":"… 3. Fluency: The post-transfer sentence should remain grammatical and fluent. We use the average log probability of the sentence post-transfer with respect to a language model trained on CommonCrawl as our measure of fluency …","url":["https://www.aclweb.org/anthology/W19-2309"]} +{"year":"2019","title":"NLNDE: The Neither-Language-Nor-Domain-Experts' Way of Spanish Medical Document De-Identification","authors":["L Lange, H Adel, J Strötgen - 2019"],"snippet":"… S2 (FLAIR+fastText): In contrast to all other runs, the second run uses only domain-independent embeddings, i.e., embeddings that have been trained on standard narrative and news data from Common Crawl and Wikipedia …","url":["http://ceur-ws.org/Vol-2421/MEDDOCAN_paper_5.pdf"]} +{"year":"2019","title":"NLP@UIOWA at SemEval-2019 Task 6: Classifying the Crass using Multi-windowed CNNs","authors":["J Rusert, P Srinivasan - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… Word embeddings for Non-Out of Vocabulary (OOV) words are obtained from Glove (Pennington et al., 2014) which has been trained on Twitter data. Experiments were also conducted with Glove common crawl data, but no visible improvement was found …","url":["https://www.aclweb.org/anthology/S19-2125"]} +{"year":"2019","title":"Noisy Parallel Corpus Filtering through Projected Word Embeddings","authors":["M Kurfalı, R Östling - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… Larger monolingual corpora based on Wikipedia and common crawl data were also provided. To train our model, we use all the parallel data available for the English-Sinhala and EnglishNepali pairs (summarized …","url":["https://www.aclweb.org/anthology/W19-5438"]} +{"year":"2019","title":"NRC Parallel Corpus Filtering System for WMT 2019","authors":["G Bernier-Colborne, C Lo - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… embedding models. Common Crawl data was not used to train the bilingual word embeddings.
2.2 … representation layer. We used XLM to train a model using almost all the available data, except for the monolingual English Common Crawl data. This …","url":["https://www.aclweb.org/anthology/W19-5434"]} +{"year":"2019","title":"Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes","authors":["J Cao, M Tanana, ZE Imel, E Poitras, DC Atkins…"],"snippet":"… Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes. Jie Cao†, Michael Tanana‡, Zac E. Imel‡, Eric Poitras‡, David C. Atkins♦, Vivek Srikumar†. †School of Computing, University of Utah …","url":["https://svivek.com/research/publications/cao2019observing.pdf"]} +{"year":"2019","title":"Observing LOD Using Equivalent Set Graphs: It Is Mostly Flat and Sparsely Linked","authors":["L Asprino, W Beek, P Ciancarini, F van Harmelen… - International Semantic Web …, 2019"],"snippet":"… The two largest available crawls of LOD available today are WebDataCommons and LOD-a-lot. WebDataCommons [12] consists of ∼31B triples that have been extracted from the CommonCrawl datasets (November 2018 version) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30793-6_4"]} +{"year":"2019","title":"Observing the LOD Cloud using Equivalent Set Graphs: the LOD Cloud is mostly flat and sparsely linked","authors":["L Asprino, W Beek, P Ciancarini, F van Harmelen…"],"snippet":"… The two largest available crawls of LOD available today are WebDataCommons and LOD-a-lot. WebDataCommons [12] consists of ∼31B triples that have been extracted from the CommonCrawl datasets (November 2018 version) …","url":["https://www.cs.vu.nl/~frankh/postscript/ISWC2019-LODanalytics.pdf"]} +{"year":"2019","title":"OECD Analytical Database on Individual Multinationals and their Affiliates (ADIMA)","authors":["G Pilgrim, N Ahmad, D Doyle - 2019"],"snippet":"… Secondly, information from MNE webpages is used from an open source 'copy of the internet' generated via web crawling from the Common Crawl. This process develops a graph of the links between companies, from …","url":["https://www.gtap.agecon.purdue.edu/resources/download/9310.docx"]} +{"year":"2019","title":"Offensive Language and Hate Speech Detection for Danish","authors":["GI Sigurbergsson, L Derczynski - arXiv preprint arXiv:1908.04531, 2019"],"snippet":"… sample of text. Pre-trained Embeddings: The pre-trained FastText [24] embeddings are trained on data from the Common Crawl project and Wikipedia, in 157 languages (including English and Danish). FastText also provides …","url":["https://arxiv.org/pdf/1908.04531"]} +{"year":"2019","title":"On extracting data from tables that are encoded using HTML","authors":["JC Roldán, P Jiménez, R Corchuelo - Knowledge-Based Systems, 2019"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S095070511930509X"]} +{"year":"2019","title":"On Implementing the Binary Interpolative Coding Algorithm","authors":["GE PIBIRI - 2019"],"snippet":"… Table 4. Decoding time measured in average nanoseconds spent per decoded integer, for the run-aware implementation (ra) and for the not run-aware implementation. • CCNews is an English subset of the freely available news from CommonCrawl, consisting of …","url":["http://pages.di.unipi.it/pibiri/papers/BIC.pdf"]} +{"year":"2019","title":"On Measuring and Mitigating Biased Inferences of Word Embeddings","authors":["S Dev, T Li, J Phillips, V Srikumar - arXiv preprint arXiv:1908.09369, 2019"],"snippet":"…
On Measuring and Mitigating Biased Inferences of Word Embeddings. Sunipa Dev, Tao Li, Jeff Phillips, Vivek Srikumar. School of Computing, University of Utah. Abstract: Word …","url":["https://arxiv.org/pdf/1908.09369"]} +{"year":"2019","title":"On Measuring Social Biases in Sentence Encoders","authors":["C May, A Wang, S Bordia, SR Bowman, R Rudinger - arXiv preprint arXiv:1903.10561, 2019"],"snippet":"… On Measuring Social Biases in Sentence Encoders. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger. Johns Hopkins University, New York University. {cjmay,rudinger}@jhu.edu {alexwang,sb6416,bowman}@nyu.edu …","url":["https://arxiv.org/pdf/1903.10561"]} +{"year":"2019","title":"On Optimally Partitioning Variable-Byte Codes","authors":["GE Pibiri, R Venturini - IEEE Transactions on Knowledge and Data …, 2019"],"snippet":"…","url":["https://ieeexplore.ieee.org/abstract/document/8691421/"]} +{"year":"2019","title":"On relevance of enriching word embeddings in solving Natural Language Inference problem","authors":["T Wesołowski"],"snippet":"… Jagiellonian University, Faculty of Mathematics and Computer Science, Theoretical Computer Science, Stationary Studies. Index number: 1079621. Tomasz Wesołowski: On relevance of enriching word embeddings in solving Natural Language Inference problem …","url":["http://algo.edu.pl/OnRelevanceOfWordEmbeddings.pdf"]} +{"year":"2019","title":"On Slicing Sorted Integer Sequences","authors":["GE Pibiri - arXiv preprint arXiv:1907.01032, 2019"],"snippet":"… 2009. • CCNews is a dataset of news freely available from CommonCrawl: http://commoncrawl.org/2016/10/news-dataset-available. Precisely, the dataset consists of the news that appeared from 09/01/16 to 30/03/18. Identifiers …","url":["https://arxiv.org/pdf/1907.01032"]} +{"year":"2019","title":"On the Effect of Low-Frequency Terms on Neural-IR Models","authors":["S Hofstätter, N Rekabsaz, C Eickhoff, A Hanbury - arXiv preprint arXiv:1904.12683, 2019"],"snippet":"… collection. The details of the resulting … Provided in the form of evaluation tuples: top1000.dev.tsv. 42B lower-cased (CommonCrawl) from: https://nlp.stanford.edu/projects/glove/ Table 1: Left: Details of the vocabularies. Right …","url":["https://arxiv.org/pdf/1904.12683"]} +{"year":"2019","title":"On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning","authors":["Y Doval, J Camacho-Collados, L Espinosa-Anke… - arXiv preprint arXiv …, 2019"],"snippet":"… google.com/site/rmyeid/projects/polyglot The sources of the web-corpora are: UMBC (Han et al., 2013), 1-billion (Cardellino, 2016), itWaC and sdeWaC (Baroni et al., 2009), Hamshahri (AleAhmad et al., 2009), and Common Crawl downloaded from http://www …","url":["https://arxiv.org/pdf/1908.07742"]} +{"year":"2019","title":"On Using Machine Learning to Identify Knowledge in API Reference Documentation","authors":["D Fucci, A Mollaalizadehbahnemiri, W Maalej - arXiv preprint arXiv:1907.09807, 2019"],"snippet":"… For the deep learning classifiers in our benchmark, we train GloVe [19] embeddings based on four large corpora, summarized in Table 3.
The Common Crawl (CC) is a pre-trained embedding downloaded in …","url":["https://arxiv.org/pdf/1907.09807"]} +{"year":"2019","title":"On Using SpecAugment for End-to-End Speech Translation","authors":["P Bahar, A Zeyer, R Schlüter, H Ney"],"snippet":"… For MT training, we use the TED, and the OpenSubtitles2018 corpora, as well as the data provided by the WMT 2018 evaluation (Europarl, ParaCrawl, CommonCrawl, News Commentary, and Rapid), a total of 65M lines of parallel sentences …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1122/Bahar-IWSLT-2019.pdf"]} +{"year":"2019","title":"One Epoch Is All You Need","authors":["A Komatsuzaki - arXiv preprint arXiv:1906.06669, 2019"],"snippet":"… Trinh & Le (2018) pointed out that CommonCrawl contains a large portion of corrupt samples, which makes it unsuitable for the training. The proportion of the corrupt samples in CommonCrawl is substantially higher than 50 …","url":["https://arxiv.org/pdf/1906.06669"]} +{"year":"2019","title":"Online Parallel Data Extraction with Neural Machine Translation","authors":["D Ruiter - 2019"],"snippet":"… Universität des Saarlandes, Master's Thesis: Online Parallel Data Extraction with Neural Machine Translation, submitted in fulfillment of the degree requirements of the MSc in Language Science and Technology at Saarland University …","url":["https://www.clubs-project.eu/assets/publications/other/MSc_Thesis_Ruiter.pdf"]} +{"year":"2019","title":"Ontological Traceability using Natural Language Processing","authors":["R Benitez - 2019"],"snippet":"… Ontological Traceability using Natural Language Processing. A master thesis presented by Edder de la Rosa Benitez, submitted to the Department of Organization and Information in partial fulfillment of the …","url":["https://dspace.library.uu.nl/bitstream/handle/1874/383214/Master_Thesis_E_De_la_Rosa.pdf?sequence=2"]} +{"year":"2019","title":"OpenCeres: When Open Information Extraction Meets the Semi-Structured Web","authors":["C Lockard, P Shiralkar, XL Dong"],"snippet":"… 5.1 Experimental Setup, Datasets: Our primary dataset is the augmented SWDE corpus described in Section 4. In addition, we used the set of 315 movie websites (comprising 433,000 webpages) found in Common …","url":["http://lunadong.com/publication/openCeres_naacl.pdf"]} +{"year":"2019","title":"OPTIMIZE THE LEARNING RATE OF NEURAL ARCHITECTURE IN MYANMAR STEMMER","authors":["Y Oo, KM Soe"],"snippet":"… Word vectors pre-trained on large text corpora have been released in [10] \"Learning Word Vectors for 157 Languages\", trained on 3 billion words from Wikipedia and Common Crawl using Continuous bag-of-words (CBOW), 300-dimension …","url":["https://www.academia.edu/download/61248451/120191117-8847-1ko3nhm.pdf"]} +{"year":"2019","title":"Optimizer Comparison with Dropout for Neural Sequence Labeling in Myanmar Stemmer","authors":["O Yadanar, KM Soe - 2019 IEEE International Conference on Industry 4.0 …, 2019"],"snippet":"… Parameter initialization: It has used Learning Word Vectors for 157 Languages that trained on 3 billion words from Wikipedia and Common Crawl using CBOW 300-dimension (E. Grave, P. Bojanowski, P. Gupta, A. Joulin, T. Mikolov, 2018) for both word and character …","url":["https://ieeexplore.ieee.org/abstract/document/8784850/"]} +{"year":"2019","title":"Optimizing Social Media Data Using Genetic Algorithm","authors":["S Das, AK Kolya, D Das - Metaheuristic Approaches to Portfolio Optimization, 2019"],"snippet":"…
Chapter 6. DOI: 10.4018/978-1-5225-8103-1.ch006. ABSTRACT: Twitter-based …","url":["https://www.igi-global.com/chapter/optimizing-social-media-data-using-genetic-algorithm/233176"]} +{"year":"2019","title":"Overview of the CLEF eHealth Evaluation Lab 2019","authors":["E Kanoulas, D Li, L Azzopardi, R Spijker, G Zuccon… - Experimental IR Meets …"],"snippet":"… More specifically, for the Abstract and Title Screening subtask the PubMed Document Identifiers (PMIDs) of potentially relevant … http://commoncrawl.org/ (last accessed on 28 May 2019) … It consists of web pages acquired from the CommonCrawl …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=LqGsDwAAQBAJ&oi=fnd&pg=PA322&dq=commoncrawl&ots=8duC39Wv1R&sig=Tt9HYmgrR17eWjcTJPZvsij9B5g"]} +{"year":"2019","title":"P-SIF: Document Embeddings Using Partition Averaging","authors":["V Gupta, A Saw, P Nokhiz, P Netrapalli, P Rai…"],"snippet":"… evaluation. We use the PARAGRAM-SL999 (PSL) as word embeddings, obtained by training on the PPDB dataset. We use the fixed weighting parameter a value of 10^{-3}, and the word frequencies p(w) are estimated from the commoncrawl dataset …","url":["https://vgupta123.github.io/docs/AAAI-GuptaV.3656.pdf"]} +{"year":"2019","title":"P2L: Predicting Transfer Learning for Images and Semantic Relations","authors":["B Bhattacharjee, N Codella, JR Kender, S Huo… - arXiv preprint arXiv …, 2019"],"snippet":"… We use the CC-DBP [12] dataset: the text of Common Crawl and the semantic relations schema and training data from DBpedia [1]. DBpedia is a knowledge graph extracted from the infoboxes from Wikipedia … 4.3.2 Validation on Common Crawl - DBpedia …","url":["https://arxiv.org/pdf/1908.07630"]} +{"year":"2019","title":"PaDAWaNS","authors":["TLM Brands"],"snippet":"… PaDAWaNS: Proactive Domain Abuse Warning and Notification System, by TLM Brands, to obtain the degree of Master of Science at the Delft University of Technology, to be defended publicly on Tuesday January 15, 2019 at 11:00 AM …","url":["https://www.sidnlabs.nl/downloads/theses/thesis_brands_padawans.pdf"]} +{"year":"2019","title":"Parallel External Memory Wavelet Tree and Wavelet Matrix Construction","authors":["J Ellert, F Kurpicz - International Symposium on String Processing and …, 2019"],"snippet":"… CC \\((\\sigma =242)\\) contains websites (without HTML tags) that have been crawled by the Common Crawl corpus (http://commoncrawl.org), and Wiki \\((\\sigma =213)\\) are recent Wikipedia dumps containing XML files that …","url":["https://link.springer.com/chapter/10.1007/978-3-030-32686-9_28"]} +{"year":"2019","title":"Paraphrase-Sense-Tagged Sentences","authors":["A Cocos, C Callison-Burch, S Chen, D Khashabi… - Transactions, 2019"],"snippet":"…","url":["http://callison-burch.github.io/publications.html"]} +{"year":"2019","title":"PDRCNN: Precise Phishing Detection with Recurrent Convolutional Neural Networks","authors":["W Wang, F Zhang, X Luo, S Zhang - Security and Communication Networks, 2019"],"snippet":"… This method first encodes the URL string using the one-hot encoding method, and then inputs each encoded character vector into the LSTM neurons for training and testing.
The method achieved an accuracy of 0.935 on the …","url":["http://downloads.hindawi.com/journals/scn/2019/2595794.pdf"]} +{"year":"2019","title":"Peer Review and the Production of Scholarly Knowledge: Automated Textual Analysis of Manuscripts Revised for Publication in Administrative Science Quarterly","authors":["D Strang, F Dokshin - The Production of Managerial Knowledge and …, 2019"],"snippet":"… numbers, and filter out “stop words.” Stop words are the most common words in the English language (eg, “the,” “not,” “a”). 2 Next, for each word in the pre-processed sentences, we generate word vectors from a GloVe model …","url":["https://www.emeraldinsight.com/doi/abs/10.1108/S0733-558X20190000059006"]} +{"year":"2019","title":"PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization","authors":["J Zhang, Y Zhao, M Saleh, PJ Liu - arXiv preprint arXiv:1912.08777, 2019"],"snippet":"… T5 (Raffel et al., 2019) generalized the text-to-text framework to a variety of NLP tasks and showed the advantage of scaling up model size (to 11 billion parameters) and pre-training corpus, introducing C4, a massive text corpus …","url":["https://arxiv.org/pdf/1912.08777"]} +{"year":"2019","title":"People represent mental states in terms of rationality, social impact, and valence: Validating the 3d Mind Model","authors":["MA Thornton, D Tamir"],"snippet":"Page 1. Running head: MENTAL STATE DIMENSIONS 1 People represent mental states in terms of rationality, social impact, and valence: Validating the 3d Mind Model Mark A. Thornton* and Diana I. Tamir Department of …","url":["https://psyarxiv.com/akhpq/download?format=pdf"]} +{"year":"2019","title":"PhishFry–A Proactive Approach to Classify Phishing Sites using SCIKIT Learn","authors":["D Brites, M Wei"],"snippet":"… [Online]. Available: http://5000best.com/websites/. [Accessed 2019]. [26] OpenPhish, \"OpenPhish,\" 2019. [Online]. Available: https://openphish.com/. [27] Amazon Web Services, \"Common Crawl,\" Amazon, 2019. [Online] …","url":["https://www.shsu.edu/mxw032/publication/19gc-bw.pdf"]} +{"year":"2019","title":"Phishing Detection Based on Machine Learning and Feature Selection Methods","authors":["M Almseidin, AMA Zuraiq, M Al-kasassbeh, N Alnidami - International Journal of …, 2019"],"snippet":"… Phishing webpages are collected from Phish-Tank and Open-Phish, while legitimate web-pages are collected from Alexa and Common Crawl. These web-pages are downloaded on two distinct sessions, from January to May 2015 and through May to June 2017 …","url":["https://onlinejour.journals.publicknowledgeproject.org/index.php/i-jim/article/download/11411/6259"]} +{"year":"2019","title":"Phishing URL Detection Via Capsule-Based Neural Network","authors":["Y Huang, J Qin, W Wen - 2019 IEEE 13th International Conference on Anti …, 2019"],"snippet":"… [27] VirusTotal, https://www.virustotal.com/ [28] Common Crawl, https://commoncrawl.org/ [29] J. Ma, LK Saul, S. Savage, and GM Voelker, “Beyond blacklists: learning to detect malicious web sites from suspicious …","url":["https://ieeexplore.ieee.org/abstract/document/8925000/"]} +{"year":"2019","title":"Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages","authors":["Y Kim, P Petrov, P Petrushkov, S Khadivi, H Ney - arXiv preprint arXiv:1909.09524, 2019"],"snippet":"Page 1.
Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages Yunsu Kim1∗ Petre Petrov1,2∗ Pavel Petrushkov2 Shahram Khadivi2 Hermann Ney1 1RWTH Aachen University, Aachen …","url":["https://arxiv.org/pdf/1909.09524"]} +{"year":"2019","title":"PKUSE at SemEval-2019 Task 3: Emotion Detection with Emotion-Oriented Neural Attention Network","authors":["L Ma, L Zhang, W Ye, W Hu - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… Table 1: Datasets for Semeval-2019 Task 3. 4.2 Experiments The model is implemented using Keras 2.0 (Chollet et al., 2017). We experiment with Stanford's GloVe 300 dimensional word embeddings trained on 840 billion words from Common Crawl …","url":["https://www.aclweb.org/anthology/S19-2049"]} +{"year":"2019","title":"PLAGO: A SYSTEM FOR PLAGIARISM DETECTION AND INTERVENTION IN MASSIVE COURSES","authors":["CT Guida - 2019"],"snippet":"… Web Crawl: Used for queuing and monitoring of importing web pages from the CommonCrawl.org public dataset (described in 3.5.2). • Admin Options … pages. Common Crawl is a non-profit organization which offers a public …","url":["https://smartech.gatech.edu/bitstream/handle/1853/61787/GUIDA-THESIS-2019.pdf?sequence=1&isAllowed=y"]} +{"year":"2019","title":"Poetry: Identification, Entity Recognition, and Retrieval","authors":["IV Foley, J John - 2019"],"snippet":"Page 1. University of Massachusetts Amherst ScholarWorks@UMass Amherst Doctoral Dissertations Dissertations and Theses 2019 Poetry: Identification, Entity Recognition, and Retrieval John J. Foley IV Follow this and additional …","url":["https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=2628&context=dissertations_2"]} +{"year":"2019","title":"Populating Web Scale Knowledge Graphs using Distantly Supervised Relation Extraction and Validation","authors":["A Gliozzo, MR Glass, S Dash, M Canim - arXiv preprint arXiv:1908.08104, 2019"],"snippet":"… Also, a web-scale experiment conducted to extend DBPedia with knowledge from Common Crawl shows that our system is not only scalable but also does not require any adaptation cost, while yielding substantial accuracy gain. 1 Introduction …","url":["https://arxiv.org/pdf/1908.08104"]} +{"year":"2019","title":"Precise Detection of Content Reuse in the Web","authors":["C Ardi, J Heidemann - ACM SIGCOMM Computer Communication Review, 2019"],"snippet":"… We verify our algorithm and its choices with controlled experiments over three web datasets: Common Crawl (2009/10), GeoCities (1990s–2000s), and a phishing corpus (2014) … In the Common Crawl dataset of 40.5×109 chunks, we set the threshold to 105 …","url":["https://dl.acm.org/citation.cfm?id=3336940"]} +{"year":"2019","title":"Predicting ConceptNet Path Quality Using Crowdsourced Assessments of Naturalness","authors":["Y Zhou, S Schockaert, JA Shah - arXiv preprint arXiv:1902.07831, 2019"],"snippet":"… The number in parenthesis after each feature name indicates the dimension of that feature. Vertex embedding (300) This feature is taken directly from the 300dimensional GloVe (25) embedding, pre-trained on the Common Crawl2 dataset with 840 billion tokens …","url":["https://arxiv.org/pdf/1902.07831"]} +{"year":"2019","title":"Predicting Word Concreteness and Imagery","authors":["J Charbonnier, C Wartena - Proceedings of the 13th International Conference on …, 2019"],"snippet":"… The other two version (also available with and without subword information) with 2 million word vectors trained on the Common Crawl with 600B tokens. 
In our experiments we used the version trained on Common Crawl without …","url":["https://www.aclweb.org/anthology/W19-0415"]} +{"year":"2019","title":"Probing Contextualized Sentence Representations with Visual Awareness","authors":["Z Zhang, R Wang, K Chen, M Utiyama, E Sumita… - arXiv preprint arXiv …, 2019"],"snippet":"… We used newsdev2016 as the dev set and newstest2016 as the test set. 2) For the EN-DE translation task, 4.43M bilingual sentence pairs of the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7 …","url":["https://arxiv.org/pdf/1911.02971"]} +{"year":"2019","title":"Product Classification Using Microdata Annotations","authors":["Z Zhang, M Paramita - International Semantic Web Conference, 2019"],"snippet":"… dimension of the continuous vector representation of each word. In this work, we use the GloVe word embedding vectors pre-trained on the Common Crawl corpus 3 with 300 dimensions. Since we are dealing with content from e …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30793-6_41"]} +{"year":"2019","title":"Provision and Usage of Provenance Data in the WebIsALOD Knowledge Graph","authors":["S Hertling, H Paulheim - CEUR Workshop Proceedings, 2018"],"snippet":"… As described in [6], the Copyright c 2018 for this paper by its authors. Copying permitted for private and academic purposes. 1 https://commoncrawl.org 2 NP stands for noun phrase. 3 https://www.w3.org/TR/skos-reference/ Page 2. isa:concept/_Gmail …","url":["http://ceur-ws.org/Vol-2317/article-06.pdf"]} +{"year":"2019","title":"PT-CoDE: Pre-trained Context-Dependent Encoder for Utterance-level Emotion Recognition","authors":["W Jiao, MR Lyu, I King - arXiv preprint arXiv:1910.08916, 2019"],"snippet":"… Here, we utilize the 300-dimensional pre-trained GloVe word vectors1 (Pennington et al., 2014) trained over 840B Common Crawl to initialize the word embedding layer. Those words that cannot be found in the GloVe …","url":["https://arxiv.org/pdf/1910.08916"]} +{"year":"2019","title":"QE BERT: Bilingual BERT using Multi-task Learning for Neural Quality Estimation","authors":["H Kim, JH Lim, HK Kim, SH Na - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… We used parallel data provided for the WMT19 news machine translation task6 to pre-train QE BERT. The English-Russian parallel data set consisted of the ParaCrawl corpus, Common Crawl corpus, News Commentary corpus, and Yandex …","url":["https://www.aclweb.org/anthology/W19-5407"]} +{"year":"2019","title":"QED: A Fact Verification and Evidence Support System","authors":["J Luken - 2019"],"snippet":"… embedding layers, as described below. 4.3.2 Embedding We use GloVe word embeddings (Pennington et al., 2014) with 300 dimensions pretrained using CommonCrawl to get a vector representation of the evidence sentence. We","url":["https://etd.ohiolink.edu/!etd.send_file?accession=osu1555074124008897&disposition=inline"]} +{"year":"2019","title":"Quantifying the Semantic Core of Gender Systems","authors":["DBPH Wallach"],"snippet":"… 4The FASTTEXT word embeddings were trained using Common Crawl and Wikipedia data, using CBOW with po- sition weights, with character n-grams of length 5. 
For more information, see http://fasttext.cc/docs/en …","url":["https://openreview.net/pdf?id=ByxcApoPwS"]} +{"year":"2019","title":"QuAVONet: Answering Questions on the SQuAD Dataset with QANet and Answer Verifier","authors":["J Cervantes"],"snippet":"… 5.2 Implementation Details For the word embeddings, I used the starter code's 300-dimensional GloVe vectors trained on the CommonCrawl dataset [6]. These embeddings remained unchanged and were not trained for any of my models …","url":["https://pdfs.semanticscholar.org/f71e/5c6cdd9e06068625eb82b3d9647823e80503.pdf"]} +{"year":"2019","title":"Quick and (maybe not so) Easy Detection of Anorexia in Social Media Posts","authors":["E Mohammadi, H Amini, L Kosseim - 2019"],"snippet":"… As shown in Figure 1, these token vectors are then fed to the hidden layer. Two different pretrained word embeddings were experimented with. The first word embedder was the 300d version of GloVe [26] that was pretrained …","url":["https://www.researchgate.net/profile/Hessam_Amini/publication/334848955_Quick_and_maybe_not_so_Easy_Detection_of_Anorexia_in_Social_Media_Posts/links/5d434b9992851cd04699c9ce/Quick-and-maybe-not-so-Easy-Detection-of-Anorexia-in-Social-Media-Posts.pdf"]} +{"year":"2019","title":"Quotient Hash Tables-Efficiently Detecting Duplicates in Streaming Data","authors":["R Géraud, M Lombard-Platet, D Naccache - arXiv preprint arXiv:1901.04358, 2019"],"snippet":"Page 1. arXiv:1901.04358v1 [cs.DS] 14 Jan 2019 Quotient Hash Tables - Efficiently Detecting Duplicates in Streaming Data Rémi Géraud, Marius Lombard-Platet, and David Naccache, Département d'informatique …","url":["https://arxiv.org/pdf/1901.04358"]} +{"year":"2019","title":"Racial bias in legal language","authors":["D Rice, JH Rhodes, T Nteta - Research & Politics, 2019"],"snippet":"Although racial bias in the law is widely recognized, it remains unclear how these biases are entrenched in the language of the law, judicial opinions. In th...","url":["https://journals.sagepub.com/doi/pdf/10.1177/2053168019848930"]} +{"year":"2019","title":"Random Projection in Deep Neural Networks","authors":["PI Wójcik - arXiv preprint arXiv:1812.09489, 2018"],"snippet":"Page 1. Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie (AGH University of Science and Technology, Krakow), Faculty of Informatics, Electronics and Telecommunications, Department of Informatics. Doctoral dissertation: Applications of the random projection method in deep …","url":["https://arxiv.org/pdf/1812.09489"]} +{"year":"2019","title":"Real or Fake? Learning to Discriminate Machine from Human Generated Text","authors":["A Bakhtin, S Gross, M Ott, Y Deng, MA Ranzato… - arXiv preprint arXiv …, 2019"],"snippet":"… CCNews: We collect a de-duplicated subset of the English portion of the CommonCrawl news dataset [Nagel, 2016], which totals around 16 Billion words … Sebastian Nagel. Cc-news. http://web.archive.org/save/http …","url":["https://arxiv.org/pdf/1906.03351"]} +{"year":"2019","title":"Real-time Claim Detection from News Articles and Retrieval of Semantically-Similar Factchecks","authors":["B Adler, G Boscaini-Gilroy - arXiv preprint arXiv:1907.02030, 2019"],"snippet":"… new problem. Many unsupervised text embeddings are trained on the CommonCrawl dataset of approx. 840 billion tokens. This … dataset.
Supervised datasets are unlikely ever … (CommonCrawl can be found at http://commoncrawl.org/)","url":["https://arxiv.org/pdf/1907.02030"]} +{"year":"2019","title":"Real-time event detection using recurrent neural network in social sensors","authors":["VQ Nguyen, TN Anh, HJ Yang - International Journal of Distributed Sensor Networks, 2019"],"snippet":"We proposed an approach for temporal event detection using deep learning and multi-embedding on a set of text data from social media. First, a convolutional neural network augmented with multiple w...","url":["https://journals.sagepub.com/doi/pdf/10.1177/1550147719856492"]} +{"year":"2019","title":"Real-world Conversational AI for Hotel Bookings","authors":["B Li, N Jiang, J Sham, H Shi, H Fazal - arXiv preprint arXiv:1908.10001, 2019"],"snippet":"… We compare the following models: 1) Averaged GloVe + feedforward: We use 100-dimensional, trainable GloVe embeddings [17] trained on Common Crawl, and produce sentence embeddings for each of the two inputs by averaging across all tokens …","url":["https://arxiv.org/pdf/1908.10001"]} +{"year":"2019","title":"Recommendation System with Aspect-Based Sentiment Analysis","authors":["Q Du, D Zhu, W Duan"],"snippet":"… The word vectors model we use is the \"en_core_web_lg\" model in spaCy. The model contains English multi-task CNN trained on OntoNotes 5 [3], with GloVe [8] vectors trained on Common Crawl. It provides 300-dimensional …","url":["http://rafaelsilva.com/wp-content/uploads/2018/12/014-Aspect-based-sentiment-analysis.pdf"]} +{"year":"2019","title":"Refining Word Representations by Manifold Learning","authors":["C Yonghe, H Lin, L Yang, Y Diao, S Zhang, F Xiaochao"],"snippet":"… judgment. This is exemplified by the WS353 [Finkelstein et al., 2001] word similarity ground truth in Figure 1. Based on the Common Crawl corpus (42B), the GloVe model is used to train 300-dimensional word vectors. The similarity …","url":["https://www.ijcai.org/proceedings/2019/0749.pdf"]} +{"year":"2019","title":"Regressing Word and Sentence Embeddings for Regularization of Neural Machine Translation","authors":["IJ Unanue, EZ Borzeshi, M Piccardi - arXiv preprint arXiv:1909.13466, 2019"],"snippet":"… De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task. The training set contains over 5M sentence pairs collected from the Europarl, CommonCrawl and Newscommentary parallel corpora …","url":["https://arxiv.org/pdf/1909.13466"]} +{"year":"2019","title":"Rel4KC: A Reinforcement Learning Agent for Knowledge Graph Completion and Validation","authors":["X Lin, P Subasic, H Yin - 2019"],"snippet":"… The fact triples extracted from free text (Common Crawl) are then fed to the trained RL agent to determine their trustworthiness. If passing the validation, a triple is entered into target KG … The free text used in this study is Common Crawl corpus …","url":["http://www.cse.msu.edu/~zhaoxi35/DRL4KDD/1.pdf"]} +{"year":"2019","title":"Repositioning privacy concerns: Web servers controlling URL metadata","authors":["R Ferreira, RL Aguiar - Journal of Information Security and Applications, 2019"],"snippet":"… on empirical observation of web browsers and HTTP server implementations, and while some implementations allow longer URLs (eg, 100.000 octets) this value remains a reasonable assumption for practical purposes.
Our …","url":["https://www.sciencedirect.com/science/article/pii/S2214212618302588"]} +{"year":"2019","title":"Representation Learning for Question Classification via Topic Sparse Autoencoder and Entity Embedding","authors":["D Li, J Zhang, P Li - IEEE Big Data, 2018"],"snippet":"… WordNet 2. The embeddings of entity-related information are also trained with skip-gram. The word embeddings are initialized with the 300 dimensional pretrained vectors 3 from the Common Crawl of 840 billion tokens and 2.2 …","url":["http://research.baidu.com/Public/uploads/5c1c9ab3069f4.pdf"]} +{"year":"2019","title":"Representing Overlaps in Sequence Labeling Tasks with a Novel Tagging Scheme: bigappy-unicrossy","authors":["G Berk, B Erden, T Güngör"],"snippet":"… a language-independent system based on the bidirectional LSTM-CRF model provided by [7]. Similar to Deep-BGT system [2], we make use of the pretrained word embeddings provided by fastText [6]. The word embeddings …","url":["https://www.cmpe.boun.edu.tr/~gungort/papers/Representing%20Overlaps%20in%20Sequence%20Labeling%20Tasks%20with%20a%20Novel%20Tagging%20Scheme%20-%20bigappy-unicrossy.pdf"]} +{"year":"2019","title":"Review and Visualization of Facebook's FastText Pretrained Word Vector Model","authors":["JC Young, A Rusli - … International Conference on Engineering, Science, and …, 2019"],"snippet":"… Machine Learning (ML). Currently, FastText provides pretrained Word2Vec model for 157 language that trained on Common Crawl and Wikipedia (Bahasa Indonesia is one from the provided model) [15]. In its Word2Vec model …","url":["https://ieeexplore.ieee.org/abstract/document/8863015/"]} +{"year":"2019","title":"RIPPED: Recursive Intent Propagation using Pretrained Embedding Distances","authors":["M Ball - 2019"],"snippet":"… GloVe (Pennington et al., 2014) is a word embedding model trained on data from the Common Crawl corpus6. GloVe is a log-bilinear regression model that incorporates both local context windows and global matrix …","url":["https://cs.brown.edu/research/pubs/theses/ugrad/2019/ball.michael.pdf"]} +{"year":"2019","title":"RNN Embeddings for Identifying Difficult to Understand Medical Words","authors":["H Pylieva, A Chernodub, N Grabar, T Hamon - … of the 18th BioNLP Workshop and …, 2019"],"snippet":"… improve classification accuracy for our specific problem. We note that FastText word embeddings trained on Wikipedia and Common Crawl5 texts have an important part of words from our dataset. According to our analysis, the …","url":["https://www.aclweb.org/anthology/W19-5011"]} +{"year":"2019","title":"RoBERTa: A Robustly Optimized BERT Pretraining Approach","authors":["Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy… - arXiv preprint arXiv …, 2019"],"snippet":"… (16GB). • CC-NEWS, which we collected from the En- glish portion of the CommonCrawl News dataset (Nagel, 2016) … STORIES, a dataset introduced in Trinh and Le (2018) containing a subset of CommonCrawl data filtered …","url":["https://arxiv.org/pdf/1907.11692"]} +{"year":"2019","title":"Robust Argument Unit Recognition and Classification","authors":["D Trautmann, J Daxenberger, C Stab, H Schütze… - arXiv preprint arXiv …, 2019"],"snippet":"… 2http://commoncrawl.org/2016/02/ february-2016-crawl-archive-now-available/ 3https://www.elastic.co/products/ elasticsearch the topic. 
Each document was checked for its corresponding WARC file at the Common Crawl In …","url":["https://arxiv.org/pdf/1904.09688"]} +{"year":"2019","title":"Robust Named Entity Recognition with Truecasing Pretraining","authors":["S Mayhew, N Gupta, D Roth - arXiv preprint arXiv:1912.07095, 2019"],"snippet":"… and Kauchak (2011) and used in Susanto, Chieu, and Lu (2016), and a specially preprocessed large dataset from English Common Crawl (CC).1 … 1commoncrawl.org 2In a naming clash, the moses script is called …","url":["https://arxiv.org/pdf/1912.07095"]} +{"year":"2019","title":"SACABench: Benchmarking Suffix Array Construction","authors":["J Bahne, N Bertram, M Böcker, J Bode, J Fischer… - International Symposium on …, 2019"],"snippet":"… We removed every character but A, C, G, and T. CommonCrawl (\\(\\sigma =242,\\mathrm {avg\\_lcp}=3,995, \\mathrm {max\\_lcp}=605,632\\)), which is a crawl of the web done by the CommonCrawl Corpus (http://commoncrawl.org) without any HTML tags …","url":["https://link.springer.com/chapter/10.1007/978-3-030-32686-9_29"]} +{"year":"2019","title":"Samsung and University of Edinburgh's System for the IWSLT 2019","authors":["J Wetesko, M Chochowski, P Przybysz, P Williams… - 2019"],"snippet":"… CommonCrawl and NewsCrawl corpora we used the approach de- scribed in [5]. Two RNN language models were constructed using Marian toolkit: in-domain trained with MUST-C corpus and out-of-domain created using …","url":["https://www.zora.uzh.ch/id/eprint/176328/1/IWSLT2019_paper_34.pdf"]} +{"year":"2019","title":"Satellite System Graph: Towards the Efficiency Up-Boundary of Graph-Based Approximate Nearest Neighbor Search","authors":["C Fu, C Wang, D Cai - arXiv preprint arXiv:1907.06146, 2019"],"snippet":"Page 1. Satellite System Graph: Towards the Efficiency Up-Boundary of Graph-Based Approximate Nearest Neighbor Search Cong Fu, Changxu Wang, Deng Cai ∗ The State Key Lab of CAD&CG, College of Computer Science …","url":["https://arxiv.org/pdf/1907.06146"]} +{"year":"2019","title":"SberQuAD--Russian Reading Comprehension Dataset: Description and Analysis","authors":["P Efimov, L Boytsov, P Braslavski - arXiv preprint arXiv:1912.09723, 2019"],"snippet":"… We tokenized text using spaCy16. To initialize the embedding layer for BiDAF, DocQA, DrQA, and R-Net we use Russian case-sensitive fastText embeddings trained on Common Crawl and Wikipedia17. This initialization is used for both questions and paragraphs …","url":["https://arxiv.org/pdf/1912.09723"]} +{"year":"2019","title":"SC-UPB at the VarDial 2019 Evaluation Campaign: Moldavian vs. Romanian Cross-Dialect Topic Identification","authors":["C Onose, DC Cercel, S Trausan-Matu - Proceedings of the Sixth Workshop on NLP …, 2019"],"snippet":"… (2018), Nordic Language Processing Laboratory (NLPL) word embedding repository (Kutuzov et al., 2017) and Common Crawl (CC) word vectors (Grave et al., 2018). The relevant details for each word vector representation model can be viewed in Table 2 …","url":["https://www.aclweb.org/anthology/W19-1418"]} +{"year":"2019","title":"Scalable Cross-Lingual Transfer of Neural Sentence Embeddings","authors":["H Aldarmaki, M Diab - arXiv preprint arXiv:1904.05542, 2019"],"snippet":"… We used WMT'12 Common Crawl data for crosslingual alignment, and WMT'12 test sets for evaluations. 
We used the augmented SNLI data described in (Dasgupta et al., 2018) and their translations for training the mono-lingual and joint InferSent models …","url":["https://arxiv.org/pdf/1904.05542"]} +{"year":"2019","title":"SECNLP: A Survey of Embeddings in Clinical Natural Language Processing","authors":["K KS, S Sangeetha - arXiv preprint arXiv:1903.01039, 2019","KK Subramanyam, S Sivanesan - Journal of Biomedical Informatics, 2019"],"snippet":"","url":["https://arxiv.org/pdf/1903.01039","https://www.sciencedirect.com/science/article/pii/S1532046419302436"]} +{"year":"2019","title":"Security In Plain TXT","authors":["A Portier, H Carter, C Lever"],"snippet":"… These seed domains are compiled from a combination of sources, including the Alexa top 1 million, the TLD zone files for COM, NAME, NET, ORG, and BIZ, sites captured by the Common Crawl project, multiple public domain …","url":["http://www.henrycarter.org/papers/plaintxt19.pdf"]} +{"year":"2019","title":"Security Posture Based Incident Forecasting","authors":["D Mulugeta - 2019"],"snippet":"Security Posture Based Incident Forecasting. A Thesis Submitted to the Faculty of Drexel University by Dagmawi Mulugeta in partial fulfillment of the requirements for the degree of Master of Science, June 2019 …","url":["http://search.proquest.com/openview/a6f070655e6045b93b595adc3b0965ae/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2019","title":"See-Through-Text Grouping for Referring Image Segmentation","authors":["DJ Chen, S Jia, YC Lo, HT Chen, TL Liu - … of the IEEE International Conference on …, 2019"],"snippet":"… The representation st is visual-attended and its goodness is linked to the predicted segmentation map Pt−1. The GloVe model in our implementation is pre-trained on Common Crawl in 840B tokens. Following …","url":["http://openaccess.thecvf.com/content_ICCV_2019/papers/Chen_See-Through-Text_Grouping_for_Referring_Image_Segmentation_ICCV_2019_paper.pdf"]} +{"year":"2019","title":"Semantic Characteristics of Schizophrenic Speech","authors":["K Bar, V Zilberstein, I Ziv, H Baram, N Dershowitz… - arXiv preprint arXiv …, 2019"],"snippet":"… Specifically, we used Hebrew pretrained vectors provided by fastText (Grave et al., 2018), which were created from Wikipedia, as well as from other content extracted from the web with Common Crawl. Overall, 97% of the words in our corpus exist in fastText …","url":["https://arxiv.org/pdf/1904.07953"]} +{"year":"2019","title":"Semantic similarity measure for Thai language","authors":["P Wongchaisuwat"],"snippet":"… In this paper, pre-trained word vectors from fastText [10] and Thai2vec [1] corpus are used to compute the similarity between given words. The facebook research distributed the word vector trained on a common crawl and Wikipedia using the fastText model …","url":["https://saki.siit.tu.ac.th/isai-nlp2018/uploads_final/5__a25c56af02784c266f98ef0378499ff1/iSAI-NLP2018_0005_final.pdf"]} +{"year":"2019","title":"Semantic Textual Similarity Measures for Case-Based Retrieval of Argument Graphs","authors":["M Lenz, S Ollinger, P Sahitaj, R Bergmann - International Conference on Case-Based …, 2019"],"snippet":"… Word2vec GoogleNews 3 vectors are trained on the Google News dataset on about 100B tokens. GloVe 4 is trained on the Common Crawl dataset on 840B tokens.
fastText 5 vectors are trained on Wikipedia and Common Crawl …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29249-2_15"]} +{"year":"2019","title":"Semi-supervised machine learning with word embedding for classification in price statistics","authors":["H Martindale, E Rowland, T Flower - 16th Meeting of the Ottawa Group on Price …, 2019"],"snippet":"Page 1. Office for National Statistics 1 Semi-supervised machine learning with word embedding for classification: April 2019 26/04/2019 Semi-supervised machine learning with word embedding for classification in price statistics …","url":["https://eventos.fgv.br/sites/eventos.fgv.br/files/arquivos/u161/semi-supervised_ml_for_price_stats-ottawa_group.pdf"]} +{"year":"2019","title":"Semi-supervised Neural Machine Translation via Marginal Distribution Estimation","authors":["Y Wang, Y Xia, L Zhao, J Bian, T Qin, E Chen, TY Liu - IEEE/ACM Transactions on …, 2019"],"snippet":"Page 1. 2329-9290 (c) 2019 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/8732422/"]} +{"year":"2019","title":"SENPAI: Supporting Exploratory Text Analysis through Semantic & Syntactic Pattern Inspection","authors":["M Samory, T Mitra - 2019"],"snippet":"… lemmatization, so as to remove surface form variations which do not alter the meaning of a word, eg the lemma for both “moved” and “moves” is “move.” Then, we encode lemmas with the corresponding 300-dimensional word …","url":["http://people.cs.vt.edu/tmitra/public/papers/icwsm19-SENPAI.pdf"]} +{"year":"2019","title":"Sense disambiguation for Punjabi language using supervised machine learning techniques","authors":["VP Singh, P Kumar - Sādhanā, 2019"],"snippet":"… The character n-grams of length 5 have been applied to words in window of size 5 with 10 negative samples [10]. It has been trained on the Punjabi Wikipedia and the raw web data fetched by common crawl method. 6 Working of WSD System for Punjabi language …","url":["https://link.springer.com/article/10.1007/s12046-019-1206-x"]} +{"year":"2019","title":"Sentence and Word Weighting for Neural Machine Translation Domain Adaptation","authors":["PP Chen"],"snippet":"Page 1. Sentence and Word Weighting for Neural Machine Translation Domain Adaptation Pinzhen (Patrick) Chen Undergraduate Dissertation Artificial Intelligence and Software Engineering School of Informatics The …","url":["https://project-archive.inf.ed.ac.uk/ug4/20191530/ug4_proj.pdf"]} +{"year":"2019","title":"Sentence Classification and Information Retrieval for Petroleum Engineering","authors":["TF Ferraz, GABA Ferreira, FG Cozman, I Santos"],"snippet":"… Accordingly, we used a word embedding representation in order to represent the words as vectors and then be able to define and compute distances between terms. We used a pre-trained embedding model called ”Common Crawl” [Pennington et al. 
2014] …","url":["http://www.bracis2019.ufba.br/Camera_Ready/199118_1.pdf"]} +{"year":"2019","title":"Sentence Mover's Similarity: Automatic Evaluation for Multi-Sentence Texts","authors":["E Clark, A Celikyilmaz, NA Smith"],"snippet":"… We obtain GloVe embeddings, which are type-based, 300-dimensional embeddings trained on Common Crawl,9 using spaCy,10 while the ELMo em- beddings are character-based, 1,024-dimensional, contextual …","url":["https://homes.cs.washington.edu/~nasmith/papers/clark+celikyilmaz+smith.acl19.pdf"]} +{"year":"2019","title":"Sentence-Level Content Planning and Style Specification for Neural Text Generation","authors":["X Hua, L Wang - arXiv preprint arXiv:1909.00734, 2019"],"snippet":"… Statistics are shown in Table 1. Input Keyphrases and Label Construction. To obtain the input keyphrase candidates and their sentence-level selection labels, we first construct queries to retrieve passages from Wikipedia and news articles collected from commoncrawl …","url":["https://arxiv.org/pdf/1909.00734"]} +{"year":"2019","title":"Sentiment Analysis","authors":["D Sarkar - Text Analytics with Python, 2019"],"snippet":"In this chapter, we cover one of the most interesting and widely used aspects pertaining to natural language processing (NLP), text analytics, and machine learning. The problem at hand is sentiment...","url":["https://link.springer.com/chapter/10.1007/978-1-4842-4354-1_9"]} +{"year":"2019","title":"Separate Chaining Meets Compact Hashing","authors":["D Köppl - arXiv preprint arXiv:1905.00163, 2019"],"snippet":"Page 1. Separate Chaining Meets Compact Hashing Dominik Köppl Department of Informatics, Kyushu University, Japan Society for Promotion of Science Abstract While separate chaining is a common strategy for resolving …","url":["https://arxiv.org/pdf/1905.00163"]} +{"year":"2019","title":"Sequence Labeling to Detect Stuttering Events in Read Speech","authors":["S Alharbi, M Hasan, AJH Simons, S Brumfitt, P Green - Computer Speech & …, 2019"],"snippet":"… In the present study, we used a pre-trained GloVe model to generate word embeddings for each utterance. This model was trained on the Common Crawl (CC) corpus (1.9 M vocab) Pennington et al. (2014). 6. Automatic Speech Recognition System …","url":["https://www.sciencedirect.com/science/article/pii/S0885230819302967"]} +{"year":"2019","title":"Sequence Time Expression Recognition in the Spanish Clinical Narrative","authors":["A Ruiz-de-la-Cuadra, JL López-Cuadrado… - 2019 IEEE 32nd …, 2019"],"snippet":"… embedding (Table 1). Name Training Words Size Resource Glo200Ve Non-zero entries [37] 840 B 300 Common Crawl Spanish Billion Word [38] Word2Vec [39] 1.5 B 300 Sensem, Ancora Corpus, OPUS Project, etc. 
EVEX Word2Vec …","url":["https://ieeexplore.ieee.org/abstract/document/8787434/"]} +{"year":"2019","title":"Sequence-to-sequence Pre-training with Data Augmentation for Sentence Rewriting","authors":["Y Zhang, T Ge, F Wei, M Zhou, X Sun - arXiv preprint arXiv:1909.06002, 2019"],"snippet":"… Specifically, for a correct sentence, a back translation model trained with the public GEC data first generates 10 best outputs; then a 5-gram language model (Junczys-Dowmunt and Grundkiewicz, 2016) trained on Common …","url":["https://arxiv.org/pdf/1909.06002"]} +{"year":"2019","title":"Sequential Attention-based Network for Noetic End-to-End Response Selection","authors":["Q Chen, W Wang - arXiv preprint arXiv:1901.02609, 2019"],"snippet":"… Embedding Training corpus #Words glove.6B.300d Wikipedia + Gigaword 0.4M glove.840B.300d Common Crawl 2.2M glove.twitter.27B.200d Twitter 1.2M … 1.0M crawl-300d-2M.vec Common Crawl 2.0M word2vec.300d Linux manual pages 0.3M …","url":["https://arxiv.org/pdf/1901.02609"]} +{"year":"2019","title":"Sequential Matching Model for End-to-end Multi-turn Response Selection","authors":["Q Chen, W Wang - ICASSP 2019-2019 IEEE International Conference on …, 2019"],"snippet":"… Results on the Ubuntu development set are shown in Table 3. We can see that word2vec embedding trained on the training dataset achieves better results than Fasttext [23] embedding trained on the unlabeled corpus …","url":["https://ieeexplore.ieee.org/abstract/document/8682538/"]} +{"year":"2019","title":"Sequential transfer learning in NLP for text summarization","authors":["P Fecht"],"snippet":"… With W and ˜W, the model generates two sets of word vectors which are supposed to perform equally if X is symmetric [64]. The GloVe model has been trained on varying sized datasets from one up to 42 billion (Common Crawl) tokens of data …","url":["https://www.inovex.de/fileadmin/files/Fachartikel_Publikationen/Theses/sequential-transfer-learning-in-nlp-for-text-summarization-pascal-fecht-2019.pdf"]} +{"year":"2019","title":"Should John Be More Likely A Physician Than Lisa: Bias-Performance Trade-Off for Gendered Pronoun Resolution","authors":["S Goel, J Li, H Zheng"],"snippet":"… the female gendered words. For our case, we are using the pre-trained GloVe (these contain 840B tokens and are trained on the Common Crawl corpus) embeddings to get the hard-debiased embeddings. To obtain these …","url":["https://shivankgoel.github.io/notes/ds/Gendered_Pronoun_Resolution.pdf"]} +{"year":"2019","title":"Similarity Driven Approximation for Text Analytics","authors":["G Hu, Y Zhang, S Rigo, TD Nguyen - arXiv preprint arXiv:1910.07144, 2019"],"snippet":"… For example, the Google Books Ngram data set contains 2.2 TB of data [1], and the Common Crawl corpus contains petabytes of data [2]. Processing such large text data sets can be computationally expensive, especially if it involves sophisticated algorithms …","url":["https://arxiv.org/pdf/1910.07144"]} +{"year":"2019","title":"Situating Sentence Embedders with Nearest Neighbor Overlap","authors":["LH Lin, NA Smith - arXiv preprint arXiv:1909.10724, 2019"],"snippet":"… GloVe average 100 Wikipedia 2014 + Gigaword 5 (6B tokens, uncased) 300 Wikipedia 2014 + Gigaword 5 (6B tokens, uncased) 300 Common Crawl (840B tokens, cased) FastText average 300 Wikipedia + UMBC + statmt.org …","url":["https://arxiv.org/pdf/1909.10724"]} +{"year":"2019","title":"Six dimensions describe action understanding: the ACT-FASTaxonomy","authors":["MA Thornton, D Tamir, PS Hall - PsyArXiv.
June, 2019"],"snippet":"… different algorithm. For the present purposes, we used a pre-trained version of GloVe based on the Common Crawl: a set of 840 billion tokens generated by scraping the entire web. For model comparison, we derived an …","url":["https://psyarxiv.com/gt6bw/download/?format=pdf"]} +{"year":"2019","title":"SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization","authors":["H Jiang, P He, W Chen, X Liu, J Gao, T Zhao - arXiv preprint arXiv:1911.03437, 2019"],"snippet":"… For example, the well-known “Common Crawl project” is producing text data extracted from web pages at a rate of about 20TB per month. The resulting extremely large text corpus allows us to train extremely large neural network-based general language models …","url":["https://arxiv.org/pdf/1911.03437"]} +{"year":"2019","title":"Social Relation Extraction from Chatbot Conversations: A Shortest Dependency Path Approach","authors":["M Glas - SKILL 2019-Studierendenkonferenz Informatik, 2019"],"snippet":"… The dictionary used here, 5 https://github.com/zalandoresearch/flair 6 https://spacy.io/ 7 http://commoncrawl.org/ 8 https://catalog.ldc.upenn.edu/LDC2013T19 Page 8. 8 Markus Glas Fig. 3: Example of a dependency path within a sentence containing two entities …","url":["https://dl.gi.de/bitstream/handle/20.500.12116/28989/SKILL2019-01.pdf?sequence=1"]} +{"year":"2019","title":"Social Sensing for Improving the User Experience in Orienteering","authors":["F Persia, S Helmer, S Pugacs, G Pilato - 2019 IEEE 13th International Conference on …, 2019"],"snippet":"… ing. In particular, we have used the spaCy “en core web md” language model, which is an “English multi-task Convolutional Neural Network trained on OntoNotes [34], with GloVe [35] vectors trained on Common Crawl [36]” …","url":["https://ieeexplore.ieee.org/abstract/document/8665498/"]} +{"year":"2019","title":"SOK: A Comprehensive Reexamination of Phishing Research from the Security Perspective","authors":["A Das, S Baki, AE Aassal, R Verma, A Dunbar - arXiv preprint arXiv:1911.00953, 2019"],"snippet":"Page 1. REEXAMINING PHISHING RESEARCH 1 SOK: A Comprehensive Reexamination of Phishing Research from the Security Perspective Avisha Das, Shahryar Baki, Ayman El Aassal, Rakesh Verma, and Arthur Dunbar …","url":["https://arxiv.org/pdf/1911.00953"]} +{"year":"2019","title":"Sparse Victory–A Large Scale Systematic Comparison of Count-Based and Prediction-Based Vectorizers for Text Classification","authors":["R Chakraborty, K Arora, A Elhence"],"snippet":"… Corpus (100 billion words). For greater ease of comparison both the GloVe and fastText models have a dimension of 300 and have been trained on the Common Crawl Corpus (640 billion words). The ELMo embedding has …","url":["https://acl-bg.org/proceedings/2019/RANLP%202019/pdf/RANLP022.pdf"]} +{"year":"2019","title":"ST-Sem: A Multimodal Method for Points-of-Interest Classification Using Street-Level Imagery","authors":["SS Noorian, A Psyllidis, A Bozzon - International Conference on Web Engineering, 2019"],"snippet":"… representing each word as a bag of character n-grams. We use pre-trained word vectors for 2 languages (English and German), trained on Common Crawl and Wikipedia 6 . 
According to the detected language l, the corresponding pre …","url":["https://link.springer.com/chapter/10.1007/978-3-030-19274-7_3"]} +{"year":"2019","title":"STAR-GCN: Stacked and Reconstructed Graph Convolutional Networks for Recommender Systems","authors":["J Zhang, X Shi, S Zhao, I King - arXiv preprint arXiv:1905.13129, 2019"],"snippet":"… For movie features, we concatenate the title name, release year, and one-hot encoded genres. We process title names by averaging the off-the-shelf 300-dimensional GloVe CommonCrawl word vector [Pennington et al., 2014] of each word …","url":["https://arxiv.org/pdf/1905.13129"]} +{"year":"2019","title":"STD: An Automatic Evaluation Metric for Machine Translation Based on Word Embeddings","authors":["P Li, C Chen, W Zheng, Y Deng, F Ye, Z Zheng - IEEE/ACM Transactions on Audio …, 2019"],"snippet":"… H and M are their means respectively. The word embedding used in our STD implementation is the freely-available fastText word embedding1 [11], which has 2 million word vectors trained on Common Crawl (600B tokens) …","url":["https://ieeexplore.ieee.org/abstract/document/8736840/"]} +{"year":"2019","title":"Streaming Infrastructure and Natural Language Modeling with Application to Streaming Big Data","authors":["Y Du - 2019"],"snippet":"… In our research, we try to find an alternative resource to study such data. Common Crawl is a massive multi-petabyte dataset hosted by Amazon. It contains archived HTML web page data from 2008 to date. Common …","url":["https://tigerprints.clemson.edu/all_dissertations/2329/"]} +{"year":"2019","title":"Structured Two-Stream Attention Network for Video Question Answering","authors":["L Gao, P Zeng, J Song, YF Li, W Liu, T Mei, HT Shen - Proceedings of the AAAI …, 2019"],"snippet":"… consisting of M words, is first converted into a sequence Q = {qm}M m=1, where qm is a one-hot vector representing the word at position m. Next, we employ the word embedding GloVe (Pennington, Socher, and Manning …","url":["https://www.aaai.org/ojs/index.php/AAAI/article/view/4602/4480"]} +{"year":"2019","title":"Study of Tibetan Text Classification based on fastText","authors":["W Ma, H Yu, J Ma - 3rd International Conference on Computer Engineering …"],"snippet":"… Every single text in all data is a line, and the \"__label__ + tag\" is added at the beginning of each line. Pre-training data set: fastText publishes word vectors in 157 languages [13], which are trained on Common Crawl and Wikipedia using fastText …","url":["https://download.atlantis-press.com/article/125913150.pdf"]} +{"year":"2019","title":"SUBMISSION OF WRITTEN WORK","authors":["O ERSITY, F CO"],"snippet":"Page 1. 
IT UNIVERSITY OF COPENHAGEN SUBMISSION OF WRITTEN WORK Class code: Name of course: Course manager: Course e-portfolio: Thesis or project title: Supervisor: Full Name: Birthdate (dd/mm-yyyy) …","url":["http://www.derczynski.com/itu/docs/Multilingual%20hate%20speech%20detection.pdf","https://www.derczynski.com/itu/docs/Multilingual%20hate%20speech%20detection.pdf"]} +{"year":"2019","title":"Subword-based Compact Reconstruction of Word Embeddings","authors":["S Sasaki, J Suzuki, K Inui - Proceedings of the 2019 Conference of the North …, 2019"],"snippet":"… or embedding vectors), especially those trained on a vast amount of text data, such as the Common Crawl (CC) cor … word embeddings trained from GloVe.840B and fastText.600B are available: https://github.com/losyer …","url":["https://www.aclweb.org/anthology/N19-1353"]} +{"year":"2019","title":"SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems","authors":["A Wang, Y Pruksachatkun, N Nangia, A Singh…"],"snippet":"… We also include a baseline where for each task we simply predict the majority class, as well as a bag-of-words baseline where each input is represented as an average of its tokens' GloVe word vectors (300-dimensional and trained …","url":["https://w4ngatang.github.io/static/papers/superglue.pdf"]} +{"year":"2019","title":"Supervised Multimodal Bitransformers for Classifying Images and Text","authors":["D Kiela, S Bhooshan, H Firooz, D Testuggine - arXiv preprint arXiv:1909.02950, 2019"],"snippet":"… We describe each of the baselines in more detail below. • Bag of words (Bow) We sum 300-dimensional GloVe embeddings (Pennington, Socher, and Manning 2014) (trained on Common Crawl) for all words in the text …","url":["https://arxiv.org/pdf/1909.02950"]} +{"year":"2019","title":"Supplementary Material for “Multi-task Learning of Hierarchical Vision-Language Representation”","authors":["DK Nguyen, T Okatani"],"snippet":"… Questions and captions were tokenized using Python Natural Language Toolkit (nltk) [2]. We used the vocabulary provided by the CommonCrawl-840B GloVe model for English word vectors [8], and set out-of-vocabulary words to unk …","url":["https://pdfs.semanticscholar.org/83a6/fd8eadd36c22bdac861bd2b20aba87968c3d.pdf"]} +{"year":"2019","title":"Survey on Publicly Available Sinhala Natural Language Processing Tools and Research","authors":["N de Silva - arXiv preprint arXiv:1906.02358, 2019"],"snippet":"… [21] further provided two monolingual corpora for Sinhala. Those were a 155k+ sentences of filtered Sinhala Wikipedia8 and 5178k+ sentences of Sinhala common crawl9. 2.2 Data Sets Specific data sets for Sinhala, as expected, is scarce …","url":["https://arxiv.org/pdf/1906.02358"]} +{"year":"2019","title":"Synchronous Bidirectional Neural Machine Translation","authors":["L Zhou, J Zhang, C Zong - Transactions of the Association for Computational …, 2019"],"snippet":"","url":["https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00256"]} +{"year":"2019","title":"Syntactic dependencies correspond to word pairs with high mutual information","authors":["R Futrell, P Qian, E Gibson, E Fedorenko, IA Blank"],"snippet":"… 2.5 Dataset We use the Common Crawl corpus (Buck et al., 2014) of English web text … Entropy, 19:275–307.
Buck, C., Heafield, K., and Van Ooyen, B. (2014). N-gram counts and language models from the common crawl. In …","url":["http://socsci.uci.edu/~rfutrell/papers/futrell2019syntactic.pdf"]} +{"year":"2019","title":"Syntactically Supervised Transformers for Faster Neural Machine Translation","authors":["N Akoury, K Krishna, M Iyyer - arXiv preprint arXiv:1906.02780, 2019"],"snippet":"… For English-German, we evaluate on WMT 2014 En↔De as well as IWSLT 2016 En→De, while for English-French we train on the Europarl / Common Crawl subset of the full WMT 2014 En→Fr data and evaluate over the full dev/test sets …","url":["https://arxiv.org/pdf/1906.02780"]} +{"year":"2019","title":"Syntax-aware Multilingual Semantic Role Labeling","authors":["S He, Z Li, H Zhao - arXiv preprint arXiv:1909.00310, 2019"],"snippet":"… The pre-trained word embedding is 100-dimensional GloVe vectors (Pennington et al., 2014) for English, 300-dimensional fastText vectors (Grave et al., 2018) trained on Common Crawl and Wikipedia for other languages …","url":["https://arxiv.org/pdf/1909.00310"]} +{"year":"2019","title":"Syntax-Aware Sentence Matching with Graph Convolutional Networks","authors":["Y Lei, Y Hu, X Wei, L Xing, Q Liu - International Conference on Knowledge Science …, 2019"],"snippet":"… 4.2 Experiment Setting. In order to compare with the baseline, we use the same setting as BiMPM. We initialize word embeddings in the word representation layer with the 300-dimensional GloVe word vectors …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29563-9_31"]} +{"year":"2019","title":"System and method for chat community question answering","authors":["N Londhe, S Kannan, N Bojja - US Patent App. 16/272,142, 2019"],"snippet":"US20190260694A1 - System and method for chat community question answering …","url":["https://patentimages.storage.googleapis.com/0c/f5/b6/7687c26806b141/US20190260694A1.pdf"]} +{"year":"2019","title":"System and method for concise display of query results via thumbnails with indicative images and differentiating terms","authors":["TP O'Hara - US Patent 10,459,999, 2019"],"snippet":"… grams). In the case of a meta-search engine without access to the underlying indexes, one approach is to use data from the Common Crawl to derive global n-gram counts for TF-IDF and language modeling filtering. Another …","url":["http://www.freepatentsonline.com/10459999.html"]} +{"year":"2019","title":"System for creating a reasoning graph and for ranking of its nodes","authors":["B Agapiev - US Patent App. 15/793,751, 2019"],"snippet":"… View, Calif. and Common Crawl Foundation of Beverly Hills, Calif.) are processed (20) to identify statements of causal relationships (22, 24), which are then analyzed to extract causes and associated effect pairs (26). These …","url":["https://patentimages.storage.googleapis.com/ca/d2/fd/8b3a7f8fa4ec15/US20190073420A1.pdf"]} +{"year":"2019","title":"TüBa-D/DP Stylebook","authors":["D de Kok, S Pütz - 2019"],"snippet":"… Table 1: Subcorpora of the TüBa-D/DP.
Subcorpus Genre Sentences Tokens Europarl Parliamentary proceedings 2.2M 55M taz (1986-2009) Newspaper 29.9M 393.7M Wikipedia (2019) Encyclopedia 42.2M …","url":["https://sfb833-a3.github.io/tueba-ddp/stylebook/stylebook-r4.pdf"]} +{"year":"2019","title":"TabbyXL: Rule-Based Spreadsheet Data Extraction and Transformation","authors":["A Shigarov, V Khristyuk, A Mikhailov, V Paramonov - International Conference on …, 2019"],"snippet":"… a spreadsheet-like format. Barik et al. [2] extracted 0.25M unique spreadsheets from Common Crawl 1 archive. Chen and Cafarella [6] reported about 0.4M spreadsheets of ClueWeb09 Crawl 2 archive. Spreadsheets can be …","url":["https://link.springer.com/chapter/10.1007/978-3-030-30275-7_6"]} +{"year":"2019","title":"Tackling Graphical NLP problems with Graph Recurrent Networks","authors":["L Song - 2019"],"snippet":"Page 1. Tackling Graphical NLP problems with Graph Recurrent Networks by Linfeng Song Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy Supervised by Professor Daniel Gildea Department of Computer Science …","url":["https://www.cs.rochester.edu/~lsong10/papers/Linfeng_Song_PhD_thesis.pdf"]} +{"year":"2019","title":"TARGER: Neural Argument Mining at Your Fingertips","authors":["A Chernodub, O Oliynyk, P Heidenreich, A Bondarenko…"],"snippet":"… Our background collection for the retrieval of argumentative sentences is formed by the DepCC corpus (Panchenko et al., 2018), a linguistically pre-processed subset of the Common Crawl containing 14.3 … Building a …","url":["https://webis.de/downloads/publications/papers/bondarenko_2019b.pdf"]} +{"year":"2019","title":"Task definition, annotated dataset, and supervised natural language processing models for symptom extraction from unstructured clinical notes","authors":["JM Steinkamp, W Bala, A Sharma, JJ Kantrowitz - Journal of Biomedical Informatics, 2019"],"snippet":"… Our word embeddings consisted of 300-dimensional Global Vectors (GloVe) [35] trained on the web common crawl data set concatenated with 300-dimensional custom trained FastText [28] vectors trained on the entirety …","url":["https://www.sciencedirect.com/science/article/pii/S153204641930276X"]} +{"year":"2019","title":"Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents","authors":["A Nowak, P Kunstman - arXiv preprint arXiv:1901.02081, 2019"],"snippet":"… The model architecture is shown in Figure 3. Embeddings layer: Each token is represented by 1452 dimensional vector, consisting of: • 300-dimensional GloVe (Pennington et al., 2014) embedding (cased, trained on 840B tokens from Common Crawl) …","url":["https://arxiv.org/pdf/1901.02081"]} +{"year":"2019","title":"Techniques for Inverted Index Compression","authors":["GE Pibiri, R Venturini - arXiv preprint arXiv:1908.10598, 2019"],"snippet":"Page 1. Techniques for Inverted Index Compression GIULIO ERMANNO PIBIRI, ISTI-CNR, Italy ROSSANO VENTURINI, University of Pisa, Italy The data structure at the core of large-scale search engines is the inverted index …","url":["https://arxiv.org/pdf/1908.10598"]} +{"year":"2019","title":"Tell me you can read me","authors":["CE SUM, T THEOR"],"snippet":"Page 55. 
Complying with the obligation of transparency imposes indeed on the data controller the prior obligation to determine–deliberately or not, consciously or not–who are the targeted data subjects, and what are they supposed to find intelligible and easily accessible …","url":["https://pdfs.semanticscholar.org/8c2a/8c105a49e59c457c68b8390b49694c4c4c20.pdf#page=55"]} +{"year":"2019","title":"Temporal Context-Aware Representation Learning for Question Routing","authors":["X Zhang, W Cheng, B Zong, Y Chen, J Xu, D Li…"],"snippet":"… The state-of-the-art document embedding model, InferSent [3], is applied to compute the similarity between questions. We use the pre-trained 300-dimensional word vectors from fastText[19], which is trained on Common Crawl containing 600B tokens …","url":["https://xuczhang.github.io/papers/wsdm20_tcqr.pdf"]} +{"year":"2019","title":"Temporally Grounding Language Queries in Videos by Contextual Boundary-aware Prediction","authors":["J Wang, L Ma, W Jiang - arXiv preprint arXiv:1909.05010, 2019"],"snippet":"… 2015) features are adopted for all compared methods. Each word from the query is represented by GloVe (Pennington, Socher, and Manning 2014) word embedding vectors pre-trained on Common Crawl. We set hidden neuron size of LSTM to 512 …","url":["https://arxiv.org/pdf/1909.05010"]} +{"year":"2019","title":"Text Classification Using SVM Enhanced by Multithreading and CUDA","authors":["S Chatterjee, PG Jose, D Datta - International Journal of Modern Education and …, 2019"],"snippet":"Page 1. IJ Modern Education and Computer Science, 2019, 1, 11-23 Published Online January 2019 in MECS (http://www.mecs-press.org/) DOI: 10.5815/ijmecs.2019.01.02 Copyright © 2019 MECS IJ Modern Education and Computer Science, 2019, 1, 11-23 …","url":["http://search.proquest.com/openview/ab6d5a2cbbb23e2cba642a09784b043e/1?pq-origsite=gscholar&cbl=2026674"]} +{"year":"2019","title":"Text Corpus for NLP","authors":["C Room"],"snippet":"… Sep 2019. Common Crawl publishes 240 TiB of uncompressed data from 2.55 billion web pages. Of these, 1 billion URLs were not present in previous crawls. Common Crawl started in 2008. In 2013, they moved from ARC to Web ARChive (WARC) file format …","url":["https://devopedia.org/text-corpus-for-nlp"]} +{"year":"2019","title":"TEXT QUALITY EVALUATION METHODS AND PROCESSES","authors":["AA Pala, A Kagoshima, M Tober - US Patent App. 15/863,408, 2019"],"snippet":"… In one possible implementation, the reference text 2000 can be parts, or the complete version, of Wikipedia, for a given language, or one or more books, or Common Crawl, or any other corpus that consists of human-written high quality text …","url":["http://www.freepatentsonline.com/y2019/0213247.html"]} +{"year":"2019","title":"The AFRL WMT19 Systems: Old Favorites and New Tricks","authors":["J Gwinnup, G Erdmann, T Anderson - Proceedings of the Fourth Conference on …, 2019"],"snippet":"… Corpus Total Retained CommonCrawl 723,256 655,069 newscommentary 290,866 264,089 Yandex 1,000,000 901,307 ParaCrawl 12,061,155 5,173,675 UN2016 11,365,709 9,871,406 Total Lines 25,440,968 16,865,546 …","url":["https://www.aclweb.org/anthology/W19-5318"]} +{"year":"2019","title":"The BEA-2019 Shared Task on Grammatical Error Correction","authors":["C Bryant, M Felice, ØE Andersen, T Briscoe - … Workshop on Innovative Use of NLP for …, 2019"],"snippet":"Page 1. Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52–75 Florence, Italy, August 2, 2019. 
c 2019 Association for Computational Linguistics 52 The BEA-2019 …","url":["https://www.aclweb.org/anthology/W19-4406"]} +{"year":"2019","title":"The BLCU System in the BEA 2019 Shared Task","authors":["L Yang, C Wang - Proceedings of the Fourteenth Workshop on Innovative …, 2019"],"snippet":"… JunczysDowmunt and Grundkiewicz (2016); JunczysDowmunt et al. (2018) utilize the Common Crawl corpus to train the language model and pre-train part of the NMT model. Inspired by these studies, we also try to use a monolingual corpus for data augmentation …","url":["https://www.aclweb.org/anthology/W19-4421"]} +{"year":"2019","title":"The Geometry of Culture: Analyzing the Meanings of Class through Word Embeddings","authors":["AC Kozlowski, M Taddy, JA Evans - American Sociological Review, 2019"],"snippet":"We argue word embedding models are a useful tool for the study of culture using a historical analysis of shared understandings of social class as an empirical case. Word embeddings represent semant...","url":["https://journals.sagepub.com/doi/abs/10.1177/0003122419877135"]} +{"year":"2019","title":"The impact of individual audit partners on their clients' narrative disclosures","authors":["C Mauritz, M Nienhaus, C Oehler - 2019"],"snippet":"Page 1. The impact of individual audit partners on their clients' narrative disclosures ∗ Christoph Mauritz1, Martin Nienhaus2, and Christopher Oehler2 1University of Münster 2Goethe-University Frankfurt September 5, 2019 Abstract …","url":["http://www.geaba.de/wp-content/uploads/2019/09/Mauritz-Nienhaus-Oehler_2.pdf"]} +{"year":"2019","title":"The LAIX Systems in the BEA-2019 GEC Shared Task","authors":["R Li, C Wang, Y Zha, Y Yu, S Guo, Q Wang, Y Liu… - … on Innovative Use of NLP for …, 2019"],"snippet":"… Table 1 lists the data sets used in Restricted Track and Unrestricted Track, including FCE (Yannakoudakis et al., 2011), Lang-82 (Mizumoto et al., 2012), NUCLE (Ng et al., 2014), W&I+LOCNESS (Bryant et al., 2019) and Com …","url":["https://www.aclweb.org/anthology/W19-4416"]} +{"year":"2019","title":"The LIG system for the English-Czech Text Translation Task of IWSLT 2019","authors":["L Vial, B Lecouteux, D Schwab, H Le, L Besacier - arXiv preprint arXiv:1911.02898, 2019"],"snippet":"… C is a speech translation corpus of TED talks, similar to the test data of the task, and we added the News Commentary corpus, which consists of political and economic commentaries, be- cause it was the second smallest corpus …","url":["https://arxiv.org/pdf/1911.02898"]} +{"year":"2019","title":"The Linked Open Data cloud is more abstract, flatter and less linked than you may think!","authors":["L Asprino, W Beek, P Ciancarini, F van Harmelen… - arXiv preprint arXiv …, 2019"],"snippet":"… The two largest available crawls of LOD that are available today are WebDataCommons and LOD-a-lot. 
WebDataCommons5 [12] consists of ∼31B triples that have been extracted from the CommonCrawl datasets (November 2018 version) …","url":["https://arxiv.org/pdf/1906.08097"]} +{"year":"2019","title":"The NiuTrans Machine Translation Systems for WMT19","authors":["B Li, Y Li, C Xu, Y Lin, J Liu, H Liu, Z Wang, Y Zhang…"],"snippet":"… For EN↔RU, we used the following resource provided by WMT, including News Commentary v14, ParaCrawl-v3, CommonCrawl and Yandex … corpus via random sampling from 2M monolingual data selected by Xenc in the …","url":["http://nlplab.com/members/xiaotong_files/2019-wmt.pdf"]} +{"year":"2019","title":"The Quest to Automate Fact-checking","authors":["C Li"],"snippet":"… The model contains 300-dimensional vectors for 3 million words and phrases. https://code.google.com/archive/p/word2vec/ 2: Global Vectors for Word Representation using The Common Crawl corpus which contains …","url":["https://pdfs.semanticscholar.org/13e0/ef9f40c767060b510e2aa75740a3eda60ad4.pdf"]} +{"year":"2019","title":"The relationship between implicit intergroup attitudes and beliefs","authors":["B Kurdi, TC Mann, TES Charlesworth, MR Banaji - Proceedings of the National …, 2019"],"snippet":"…","url":["https://www.pnas.org/content/early/2019/02/26/1820240116.short"]} +{"year":"2019","title":"The RWTH Aachen University Machine Translation Systems for WMT 2019","authors":["J Rosendahl, C Herold, Y Kim, M Graça, W Wang… - Proceedings of the Fourth …, 2019"],"snippet":"… For De→En, we use data from CommonCrawl, Europarl, NewsCommentary and Rapid … (2017)), but without tied embedding weights, on the data from CommonCrawl, Europarl, NewsCommentary and Rapid, i.e. about 6M sentence pairs …","url":["https://www.aclweb.org/anthology/W19-5338"]} +{"year":"2019","title":"The Semantic Web: Two Decades On","authors":["A Hogan"],"snippet":"Page 1. Semantic Web 0 (0) 1 IOS Press …","url":["http://www.semantic-web-journal.net/system/files/swj2303.pdf"]} +{"year":"2019","title":"The Source-Target Domain Mismatch Problem in Machine Translation","authors":["J Shen, PJ Chen, M Le, J He, J Gu, M Ott, M Auli… - arXiv preprint arXiv …, 2019"],"snippet":"… For Myanmar monolingual data, we use the language split Commoncrawl data from (Buck et al., 2014) which includes texts in various domains crawled from the web. 
We use the myanmar-tools2 library to classify and convert all Zawgyi text to Unicode …","url":["https://arxiv.org/pdf/1909.13151"]} +{"year":"2019","title":"The TALP-UPC Machine Translation Systems for WMT19 News Translation Task: Pivoting Techniques for Low Resource MT","authors":["N Casas, JAR Fonollosa, C Escolano, C Basta… - Proceedings of the Fourth …, 2019"],"snippet":"… 4.2 English-Russian The available parallel English-Russian corpora for the shared task included News Commentary v14, Wiki Titles v1, Common Crawl corpus, ParaCrawl v3, Yandex Corpus and the United Nations Parallel Corpus v1.0 (Ziemski et al., 2016) …","url":["https://www.aclweb.org/anthology/W19-5311"]} +{"year":"2019","title":"The Universitat d'Alacant submissions to the English-to-Kazakh news translation task at WMT 2019","authors":["VM Sánchez-Cartagena, JA Pérez-Ortiz…"],"snippet":"… 556 corpus lang. raw cleaned News Crawl kk 783k 783k Wiki dumps kk 1.7M 1.7M Common Crawl kk 10.9M 5.4M News Crawl en 200M 200M … The same filtering was applied to the monolingual Kazakh Common Crawl corpus …","url":["https://www.dlsi.ua.es/~fsanchez/pub/pdf/sanchez-cartagena19a.pdf"]} +{"year":"2019","title":"The University of Helsinki submissions to the WMT19 news translation task","authors":["A Talman, U Sulubacak, R Vázquez, Y Scherrer… - arXiv preprint arXiv …, 2019"],"snippet":"… removing all sentence pairs with a length difference ratio above a certain threshold: for CommonCrawl, ParaCrawl and Rapid we used a threshold of 3, for WikiTitles a threshold of 2, and for all other data sets a threshold of 9; …","url":["https://arxiv.org/pdf/1906.04040"]} +{"year":"2019","title":"The University of Sydney's Machine Translation System for WMT19","authors":["L Ding, D Tao - arXiv preprint arXiv:1907.00494, 2019"],"snippet":"… 3 Data Preparation We used all available parallel corpus 3 for Finnish→ English except the “Wiki Headlines” due to the large number of incomplete sentences, and for monolingual target side English data, we selected all …","url":["https://arxiv.org/pdf/1907.00494"]} +{"year":"2019","title":"The Web is missing an essential part of infrastructure: an Open Web Index","authors":["D Lewandowski - arXiv preprint arXiv:1903.03846, 2019"],"snippet":"… A search engine needs to keep its index current, meaning it needs to update at least a part of it every minute. This is an important requirement that is not being met by any of the current projects (like Common Crawl) …","url":["https://arxiv.org/pdf/1903.03846"]} +{"year":"2019","title":"TiFi: Taxonomy Induction for Fictional Domains [Extended version]","authors":["CX Chu, S Razniewski, G Weikum - arXiv preprint arXiv:1901.10263, 2019"],"snippet":"Page 1. TiFi: Taxonomy Induction for Fictional Domains [Extended version] ∗ Cuong Xuan Chu Max Planck Institute for Informatics Saarbrücken, Germany cxchu@mpi-inf. mpg.de Simon Razniewski Max Planck Institute for Informatics …","url":["https://arxiv.org/pdf/1901.10263"]} +{"year":"2019","title":"TLR at BSNLP2019: A Multilingual Named Entity Recognition System","authors":["JG Moreno, EL Pontes, M Coustaty, A Doucet - Proceedings of the 7th Workshop on …, 2019"],"snippet":"… in Figure 1. 3.1 FastText Embedding In this layer, we used pre-trained embeddings for each language trained on Common Crawl and Wikipedia using fastText (Bojanowski et al., 2017; Grave et al., 2018). 
These models were …","url":["https://www.aclweb.org/anthology/W19-3711"]} +{"year":"2019","title":"TMU Transformer System Using BERT for Re-ranking at BEA 2019 Grammatical Error Correction on Restricted Track","authors":["M Kaneko, K Hotate, S Katsumata, M Komachi - … Workshop on Innovative Use of NLP …, 2019"],"snippet":"… The 5-gram language model for re-ranking was trained on a subset of the Common Crawl corpus (Chollampatt and Ng, 2018a).5 We used a Python spell checker tool6 on the GEC model hypothesis sentences. 3.3 Evaluation …","url":["https://www.aclweb.org/anthology/W19-4422"]} +{"year":"2019","title":"Top-K Attention Mechanism for Complex Dialogue System","authors":["CU Shina, JW Chab - 2019"],"snippet":"… Then, the model submits the candidate with the highest value among the given candidates as the final correct answer. They randomly sampled one of the 99 negative samples to prevent bias during learning and used …","url":["http://workshop.colips.org/dstc7/papers/33.pdf"]} +{"year":"2019","title":"Toponym Identification in Epidemiology Articles--A Deep Learning Approach","authors":["MR Davari, L Kosseim, TD Bui - arXiv preprint arXiv:1904.11018, 2019"],"snippet":"… In order to measure the effect of such domain specific information, we experimented with 2 other pretrained word embedding models: Google News Word2vec [11], and a GloVe Model trained on Common Crawl [24] … Common Crawl GloVe 2.2M 300 29.84 …","url":["https://arxiv.org/pdf/1904.11018"]} +{"year":"2019","title":"Toward Automated Worldwide Monitoring of Network-Level Censorship","authors":["Z Weinberg - 2018"],"snippet":"Page 1. Toward Automated Worldwide Monitoring of Network-level Censorship Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering Zachary Weinberg BA Chemistry, Columbia University …","url":["http://search.proquest.com/openview/11a5908644ea63a6b01b3f0c4d23ce4e/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2019","title":"Toward Gender-Inclusive Coreference Resolution","authors":["YT Cao, H Daumé III - arXiv preprint arXiv:1910.13913, 2019"],"snippet":"Page 1. Toward Gender-Inclusive Coreference Resolution YANG TRISTA CAO, University of Maryland HAL DAUMÉ III, Microsoft Research & University of Maryland ABSTRACT Correctly resolving textual mentions of people …","url":["https://arxiv.org/pdf/1910.13913"]} +{"year":"2019","title":"Towards a Global Perspective on Web Tracking","authors":["N Samarasinghe, M Mannan - Computers & Security, 2019"],"snippet":"… Schelter et al. Schelter and Kunegis (2016) performed a large scale analysis of third-party trackers using the Common Crawl 2012 corpus. The corpus may contain tracking information of residential as well as institutional users …","url":["https://www.sciencedirect.com/science/article/pii/S0167404818314007"]} +{"year":"2019","title":"Towards an Automated Extraction of ABAC Constraints from Natural Language Policies","authors":["M Alohaly, H Takabi, E Blanco - IFIP International Conference on ICT Systems …, 2019"],"snippet":"… model. To configure the model, we set one hyper-parameter value at a time. Our default settings: dropout = 0, decay rate = 0, number of BiLSTM cells (i.e., layers) = 1, and GloVe (Common crawl) with 300 dimensions. 
To determine …","url":["https://link.springer.com/chapter/10.1007/978-3-030-22312-0_8"]} +{"year":"2019","title":"Towards an automated method to assess data portals in the deep web","authors":["AS Correa, RM de Souza, FSC da Silva - Government Information Quarterly, 2019"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0740624X18305185"]} +{"year":"2019","title":"Towards Content Expiry Date Determination: Predicting Validity Periods of Sentences","authors":["A Almquist, A Jatowt"],"snippet":"… For this, we use Common Crawl dataset 16 which is a web dump composed of billions of websites with plain text versions available … For each sentence found in the Common Crawl dataset we identify DATE, TIME and DURATION …","url":["http://www.dl.kuis.kyoto-u.ac.jp/~adam/ecir19a.pdf"]} +{"year":"2019","title":"Towards Content Transfer through Grounded Text Generation","authors":["RE Dataset","S Prabhumoye, C Quirk, M Galley - arXiv preprint arXiv:1905.05293, 2019"],"snippet":"05/13/19 - Recent work in neural generation has attracted significant interest in controlling the form of text, such as style, persona, and p...","url":["https://arxiv.org/pdf/1905.05293","https://deepai.org/publication/towards-content-transfer-through-grounded-text-generation"]} +{"year":"2019","title":"Towards countering hate speech and personal attack in social media","authors":["P Charitidis, S Doropoulos, S Vologiannidis… - arXiv preprint arXiv …, 2019"],"snippet":"… each language. After conducting some preliminary experiments, the best pre-trained embedding choice for Greek and French language was using fastText embeddings [45], trained on Common Crawl and Wikipedia. For English …","url":["https://arxiv.org/pdf/1912.04106"]} +{"year":"2019","title":"Towards Functionally Similar Corpus Resources for Translation","authors":["M Kunilovskaya, S Sharoff"],"snippet":"… Secondly, we used lemmatised texts, with stop words filtered out (biLSTMlex in Table 1). For both scenarios we used pre-trained word embeddings of size 300, trained on the English Wikipedia and CommonCrawl data, using …","url":["http://corpus.leeds.ac.uk/serge/publications/2019-RANLP.pdf"]} +{"year":"2019","title":"Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning","authors":["D Cevher, S Zepf, R Klinger - arXiv preprint arXiv:1909.02764, 2019"],"snippet":"… We use a neural network with an embedding layer (frozen weights, pretrained on Common Crawl and Wikipedia (Grave et al., 2018)), a bidirectional LSTM (Schuster and Paliwal, 1997), and two dense layers followed by a softmax output layer …","url":["https://arxiv.org/pdf/1909.02764"]} +{"year":"2019","title":"Towards Multimodal Sarcasm Detection (An _Obviously_ Perfect Paper)","authors":["S Castro, D Hazarika, V Pérez-Rosas, R Zimmermann… - arXiv preprint arXiv …, 2019"],"snippet":"… 768. We also considered averaging Common Crawl pre-trained 300 dimensional GloVe word vectors (Pennington et al., 2014) for each token; however, it resulted in lower performance as compared to BERT-based features …","url":["https://arxiv.org/pdf/1906.01815"]} +{"year":"2019","title":"Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation","authors":["B Wu, H Zhang, M Li, Z Wang, Q Feng, J Huang… - arXiv preprint arXiv …, 2020","HZ Bowen Wu, M Li, Z Wang, Q Feng, J Huang…"],"snippet":"… paper. 
4.3 Hyperparameters For the student model in our proposed distilling method, we employ the 300-dimension GloVe (840B Common Crawl version; Pennington et al., 2014) to initialize the word embeddings. The number …","url":["https://arxiv.org/pdf/2004.03097","https://www.researchgate.net/profile/Bowen_Wu10/publication/337113946_Towards_Non-task-specific_Distillation_of_BERT_via_Sentence_Representation_Approximation/links/5dc5cffc4585151435f7df39/Towards-Non-task-specific-Distillation-of-BERT-via-Sentence-Representation-Approximation.pdf"]} +{"year":"2019","title":"Towards Robust Named Entity Recognition for Historic German","authors":["S Schweter, J Baiter - arXiv preprint arXiv:1906.07592, 2019"],"snippet":"… 69.59% Common Crawl 68.97% Wikipedia + Common Crawl 72.00% Wikipedia + Common Crawl + Character 74.50 … 69.62% Riedl and Padó (2018) (with transfer-learning) 74.33% ONB Wikipedia 75.80% CommonCrawl 78.70% Wikipedia + CommonCrawl 79.46 …","url":["https://arxiv.org/pdf/1906.07592"]} +{"year":"2019","title":"Towards semantic-rich word embeddings","authors":["G Beringer, M Jabłonski, P Januszewski, A Sobecki…"],"snippet":"… collected (III), for the our approach. We use a pretrained embedding model from spaCy - en_vectors_web_lg, which contains 300-dimensional word vectors trained on Common Crawl with GloVe2. We compare results on the …","url":["https://annals-csis.org/Volume_18/drp/pdf/120.pdf"]} +{"year":"2019","title":"Towards Unsupervised Grammatical Error Correction using Statistical Machine Translation with Synthetic Comparable Corpus","authors":["S Katsumata, M Komachi - arXiv preprint arXiv:1907.09724, 2019"],"snippet":"… makes up for the synthetic target data. To compare the fluency, the outputs of each best iter on JFLEG were evaluated with the perplexity based on the Common Crawl language model10. The perplexity of USMTforward in iter …","url":["https://arxiv.org/pdf/1907.09724"]} +{"year":"2019","title":"Tracking Naturalistic Linguistic Predictions with Deep Neural Language Models","authors":["M Heilbron, B Ehinger, P Hagoort, FP de Lange - arXiv preprint arXiv:1909.04400, 2019"],"snippet":"… Non-predictive controls We included two non-predictive and potentially confounding variables: first, frequency which we quantified as unigram surprise (−log p(w)) which was based on a word's lemma count in the CommonCrawl corpus, obtained via spaCy …","url":["https://arxiv.org/pdf/1909.04400"]} +{"year":"2019","title":"Transfer Learning across Languages from Someone Else's NMT Model","authors":["T Kocmi, O Bojar - arXiv preprint arXiv:1909.10955, 2019"],"snippet":"… WMT 2012 WMT 2018 English - French Commoncrawl, Europarl, Giga FREN, News commentary, UN corpus WMT 2013 WMT dis. 
2015 … Based on our previous experiments, we exclude the noisiest corpus, i.e. web crawled ParaCrawl or Commoncrawl …","url":["https://arxiv.org/pdf/1909.10955"]} +{"year":"2019","title":"Transfer Learning from Transformers to Fake News Challenge Stance Detection (FNC-1) Task","authors":["V Slovikovskaya - arXiv preprint arXiv:1910.14353, 2019"],"snippet":"… 9XLNet is named after TransformerXL 10These corpora include (1) BOOK CORPUS [Zhu et al., 2015] plus English Wikipedia, the original data used to train BERT (16GB); (2) CC-NEWS, which authors collected from the English …","url":["https://arxiv.org/pdf/1910.14353"]} +{"year":"2019","title":"Transforma at SemEval-2019 Task 6: Offensive Language Analysis using Deep Learning Architecture","authors":["R Ong - arXiv preprint arXiv:1903.05280, 2019"],"snippet":"… This allows us to evaluate the increase in dimensionality on the performance of our models 3. GloVe: Common Crawl (300d) - Trained on 42B tokens, 1.9M vocabulary of unique words … Table 7: T - GloVe Twitter, CC - GloVe Common Crawl …","url":["https://arxiv.org/pdf/1903.05280"]} +{"year":"2019","title":"transformers.zip: Compressing Transformers with Pruning and Quantization","authors":["R Cheong, R Daniel - 2019"],"snippet":"… 9: return M 4 Page 5. 4 Experiments 4.1 Dataset We train and evaluate on the WMT English - German translation task. Specifically, we train on all of Europarl, Common Crawl, and News Commentary, validate on the …","url":["https://pdfs.semanticscholar.org/fe82/735fe8ae2163a37aa2787eee0db8efc745b6.pdf"]} +{"year":"2019","title":"Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation","authors":["N Pourdamghani, N Aldarrab, M Ghazvininejad…"],"snippet":"… For Arabic we use MultiUN (Tiedemann, 2012). For French we use CommonCrawl. For German we use a mix of CommonCrawl (1.7M), and NewsCommentary (300K) … For Spanish we use CommonCrawl (1.8M), and Europarl (200K) …","url":["https://www.isi.edu/~jonmay/pubs/acl19.pdf"]} +{"year":"2019","title":"Tree Edit Distance Learning via Adaptive Symbol Embeddings","authors":["B Paaßen, C Gallicchio, A Micheli, B Hammer"],"snippet":"Tree Edit Distance Learning via Adaptive Symbol Embeddings. 2018-06-18; Benjamin Paaßen, Claudio Gallicchio, Alessio Micheli, Barbara Hammer. 
Abstract …","url":["https://deeplearn.org/arxiv/38595/tree-edit-distance-learning-via-adaptive-symbol-embeddings"]} +{"year":"2019","title":"TU Wien@ TREC Deep Learning'19--Simple Contextualization for Re-ranking","authors":["S Hofstätter, M Zlabinger, A Hanbury - arXiv preprint arXiv:1912.01385, 2019"],"snippet":"… For the full task we generated initial rankings with Anserini using BM25 and utilized the validation sets to tune the re-ranking 1https://github.com/microsoft/BlingFire 242B CommonCrawl lower-cased: https://nlp.stanford.edu/projects/glove …","url":["https://arxiv.org/pdf/1912.01385"]} +{"year":"2019","title":"Twitter Sentiment on Affordable Care Act using Score Embedding","authors":["M Farhadloo - arXiv preprint arXiv:1908.07061, 2019"],"snippet":"… The embeddings pre-trained on Common Crawl data were only available in dimension 300 and were trained on 840 billion tokens with vocabulary … of available unlabeled training data had an impact on the performance …","url":["https://arxiv.org/pdf/1908.07061"]} +{"year":"2019","title":"Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English","authors":["F Guzmán, PJ Chen, M Ott, J Pino, G Lample, P Koehn… - arXiv preprint arXiv …, 2019"],"snippet":"… M monolingual Wikipedia (en) 67.8M 2.0B Common Crawl (ne) 3.6M 103.0M Wikipedia (ne) 92.3K 2.8M Sinhala–English … 5M monolingual Wikipedia (en) 67.8M 2.0B Common Crawl (si) 5.2M 110.3M Wikipedia (si) 155.9K 4.7M …","url":["https://arxiv.org/pdf/1902.01382"]} +{"year":"2019","title":"Type: Report Dissemination level: Public Due Date (in months): 24 (August 2019)","authors":["CSGOER Network"],"snippet":"Page 1. X Modal X Cultural X Lingual X Domain X Site Global OER Network Grant Agreement Number: 761758 Project Acronym: X5GON Project title: Cross Modal, Cross Cultural, Cross Lingual, Cross Domain, and Cross Site …","url":["https://www.x5gon.org/wp-content/uploads/2019/10/D5.2_afterJSTrev_26Aug19.pdf"]} +{"year":"2019","title":"UdS-DFKI Participation at WMT 2019: Low-Resource (en-gu) and Coreference-Aware (en-de) Systems","authors":["C España-Bonet, D Ruiter - Proceedings of the Fourth Conference on Machine …, 2019"],"snippet":"… proportions. Our base system uses CommonCrawl … x1 Parallel CommonCrawl 2,394,878 x1 x4 Europarl 1,775,445 x1 x4 NewsCommentary 328,059 x4 x16 Rapid 1,105,651 x1 x4 ParaCrawlFiltered 12,424,790 x0 x1 Table …","url":["https://www.aclweb.org/anthology/W19-5315"]} +{"year":"2019","title":"Understanding and Mitigating the Security Risks of Content Inclusion in Web Browsers","authors":["S Arshad - 2019"],"snippet":"… 47 5.1 Sample URL grouping. . . . . 73 5.2 Narrowing down the Common Crawl to the candidate set used in our analysis (from left to right) . . . . 79 5.3 Vulnerable pages and sites in the candidate set …","url":["http://search.proquest.com/openview/5a3bdc0060c7ad7004f26c77dae937c2/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2019","title":"Uni-and Multimodal and Structured Representations for Modeling Frame Semantics","authors":["T Botschen - 2019"],"snippet":"Page 1. 
Uni- and Multimodal and Structured Representations for Modeling Frame Semantics. Dissertation approved by the Department of Computer Science of Technische Universität Darmstadt for the attainment of the academic degree …","url":["http://tuprints.ulb.tu-darmstadt.de/8484/1/Dissertation_TeresaBotschen.pdf"]} +{"year":"2019","title":"Unified Visual-Semantic Embeddings: Bridging Vision and Language with Structured Meaning Representations","authors":["H Wu, J Mao, Y Zhang, Y Jiang, L Li, W Sun, WY Ma - arXiv preprint arXiv:1904.05521, 2019"],"snippet":"Page 1. Unified Visual-Semantic Embeddings: Bridging Vision and Language with Structured Meaning Representations Hao Wu1,3,4,6,∗,†, Jiayuan Mao5,6,∗,†, Yufeng Zhang2,6,†, Yuning Jiang6, Lei Li6, Weiwei Sun1,3,4, Wei-Ying Ma6 …","url":["https://arxiv.org/pdf/1904.05521"]} +{"year":"2019","title":"Unraveling the Search Space of Abusive Language in Wikipedia with Dynamic Lexicon Acquisition","authors":["WF Chen, K Al-Khatib, M Hagen, H Wachsmuth…"],"snippet":"… The hidden state is employed to predict the probability of 'not-attack' using a linear regression layer. We use 300-dimensional word embeddings (Pennington et al., 2014) pre-trained on the Common Crawl with 840 …","url":["https://webis.de/downloads/publications/papers/stein_2019z.pdf"]} +{"year":"2019","title":"Unsupervised Cross-lingual Representation Learning at Scale","authors":["A Conneau, K Khandelwal, N Goyal, V Chaudhary… - arXiv preprint arXiv …, 2019"],"snippet":"… As shown in Figure 1, the CommonCrawl Corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. Figure 3 shows that for the same BERTBase architecture, all models …","url":["https://arxiv.org/pdf/1911.02116"]} +{"year":"2019","title":"Unsupervised Extraction of Partial Translations for Neural Machine Translation","authors":["B Marie, A Fujita - Proceedings of the 2019 Conference of the North …, 2019"],"snippet":"… We extracted monolingual data ourselves from the Common Crawl project8 for Bengali (5.3M lines) and Malay (4.6M lines … 8http://commoncrawl.org/ 9https://fasttext.cc/ 10The extraction of 100k partial translations from …","url":["https://www.aclweb.org/anthology/N19-1384"]} +{"year":"2019","title":"Unsupervised Joint Training of Bilingual Word Embeddings","authors":["B Marie, A Fujita - Proceedings of the 57th Conference of the Association …, 2019"],"snippet":"… For en-id, we used English (100M lines) and Indonesian (77M lines) Common Crawl corpora.5 We then mapped the word embeddings into a BWE space using VECMAP,6 one of the best and most robust methods for unsupervised mapping (Glavas et al., 2019) …","url":["https://www.aclweb.org/anthology/P19-1312"]} +{"year":"2019","title":"Unsupervised Lemmatization as Embeddings-Based Word Clustering","authors":["R Rosa, Z Žabokrtský - arXiv preprint arXiv:1908.08528, 2019"],"snippet":"… For the experiments reported in this paper, we use the pretrained word embedding dictionaries available from the FastText website.78 The word embeddings had been trained on Wikipedia9 and Common Crawl10 texts …","url":["https://arxiv.org/pdf/1908.08528"]} +{"year":"2019","title":"Unsupervised Question Answering by Cloze Translation","authors":["P Lewis, L Denoyer, S Riedel - arXiv preprint arXiv:1906.04980, 2019"],"snippet":"… Question Corpus We mine questions from English pages from a recent dump of common crawl using simple selection criteria:3 We select sen … 3http://commoncrawl.org/ 4We also experimented with language model pretraining 
…","url":["https://arxiv.org/pdf/1906.04980"]} +{"year":"2019","title":"Updating Pre-trained Word Vectors and Text Classifiers using Monolingual Alignment","authors":["P Bojanowski, O Celebi, T Mikolov, E Grave, A Joulin - arXiv preprint arXiv …, 2019"],"snippet":"… Indeed, despite their size, large web data such as Common Crawl lack coverage for highly technical expert fields such as medicine or law … Training data. We take two subsets of the May 2017 dump of the Common Crawl …","url":["https://arxiv.org/pdf/1910.06241"]} +{"year":"2019","title":"Updating verbal fluency analysis for the 21st century: Applications for psychiatry","authors":["TB Holmlund, J Cheng, PW Foltz, AS Cohen, B Elvevåg - Psychiatry Research, 2019"],"snippet":"… To base the analysis on a corpus with a wide variety of animal-word sources, we used a set of pre-trained word vectors calculated from approximately 42 billion tokens from the entire internet, courtesy of the Common Crawl project (Pennington et al., 2014) …","url":["https://www.sciencedirect.com/science/article/pii/S0165178118324181"]} +{"year":"2019","title":"Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs","authors":["A Fan, C Gardent, C Braud, A Bordes - 2019"],"snippet":"… WikiSum Second, we experiment on the WikiSum CommonCrawl (Liu et al., 2018b) summarization dataset4 with 1.5 million examples … denotes results from (Liu et al., 2018b) that use data scraped from unrestricted web search, not the static CommonCrawl version …","url":["https://hal.archives-ouvertes.fr/hal-02277063/document"]} +{"year":"2019","title":"Using logical form encodings for unsupervised linguistic transformation: Theory and applications","authors":["T Gröndahl, N Asokan - arXiv preprint arXiv:1902.09381, 2019"],"snippet":"Page 1. arXiv:1902.09381v1 [cs.CL] 25 Feb 2019 Using logical form encodings for unsupervised linguistic transformation: Theory and applications Tommi Gröndahl N. Asokan Abstract We present a novel method to architect …","url":["https://arxiv.org/pdf/1902.09381"]} +{"year":"2019","title":"Using the Semantic Web as a source of training data","authors":["C Bizer, A Primpeli, R Peeters - Datenbank-Spektrum, 2019"],"snippet":"… The Web Data Commons (WDC) project 4 monitors the adoption of schema.org annotations on the Web by analysing the Common Crawl 5 , a series of public web corpora each containing several billion HTML pages [12]. The …","url":["https://link.springer.com/article/10.1007/s13222-019-00313-y"]} +{"year":"2019","title":"Using Whole Document Context in Neural Machine Translation","authors":["V Macé, C Servan - arXiv preprint arXiv:1910.07481, 2019"],"snippet":"… models are evaluated on the same standard corpora that have Page 3. Corpora #lines # EN # DE Common Crawl 2.2M 54M 50M Europarl V9† 1.8M 50M 48M News Comm. V14† 338K 8.2M 8.3M ParaCrawl V3 27.5M 569M …","url":["https://arxiv.org/pdf/1910.07481"]} +{"year":"2019","title":"Variational Auto-Decoder: Neural Generative Modeling from Partial Data","authors":["A Zadeh, YC Lim, PP Liang, LP Morency - arXiv preprint arXiv:1903.00840, 2019"],"snippet":"… CMU-MOSEI consists of 23,500 sentences and CMU-MOSI consists of 2199 sentences. 
For text modality, the datasets contain GloVe word embeddings (Pennington et al., 2014) trained on 840 billion tokens from the Common Crawl dataset …","url":["https://arxiv.org/pdf/1903.00840"]} +{"year":"2019","title":"Vernon-fenwick at SemEval-2019 Task 4: Hyperpartisan News Detection using Lexical and Semantic Features","authors":["V Srivastava, A Gupta, D Prakash, SK Sahoo, RR Rohit… - Proceedings of the 13th …, 2019"],"snippet":"… semantic space. We have used 300-dimensional Glove embeddings trained on Common Crawl data of 2.2 million words and 840 billion tokens. An article was tokenized into sentences and further into words to obtain its article representation …","url":["https://www.aclweb.org/anthology/S19-2189"]} +{"year":"2019","title":"Video Question Answering with Spatio-Temporal Reasoning","authors":["Y Jang, Y Song, CD Kim, Y Yu, Y Kim, G Kim - International Journal of Computer …, 2019"],"snippet":"Page 1. International Journal of Computer Vision https://doi.org/10.1007/s11263-019-01189-x Video Question Answering with Spatio-Temporal Reasoning Yunseok Jang1 · Yale Song2 · Chris Dongjoo Kim1 · Youngjae Yu1 · Youngjin Kim1 · Gunhee Kim1 …","url":["https://link.springer.com/article/10.1007/s11263-019-01189-x"]} +{"year":"2019","title":"Vir is to Moderatus as Mulier is to Intemperans: Lemma Embeddings for Latin","authors":["R Sprugnoli, M Passarotti, G Moretti"],"snippet":"… Both Facebook and the organizers of the CoNLL shared tasks on multilingual parsing have pre-computed and released word embeddings trained on Latin texts crawled from the web: the former using the fastText model on …","url":["https://www.researchgate.net/profile/Rachele_Sprugnoli/publication/336798734_Vir_is_to_Moderatus_as_Mulier_is_to_Intemperans_Lemma_Embeddings_for_Latin/links/5db2a47e92851c577ec259b4/Vir-is-to-Moderatus-as-Mulier-is-to-Intemperans-Lemma-Embeddings-for-Latin.pdf"]} +{"year":"2019","title":"Vision-based Page Rank Estimation with Graph Networks","authors":["TI Denk, S Güner"],"snippet":"… The Open PageRank initiative provides freely available data that was built on top of Common Crawl [do/19], which provides high quality crawl data of web pages since 2013. Open PageRank uses the number of backlinks of …","url":["https://www.researchgate.net/profile/Timo_Denk/publication/334824445_Vision-based_Page_Rank_Estimation_with_Graph_Networks/links/5d429cb692851cd04696fd56/Vision-based-Page-Rank-Estimation-with-Graph-Networks.pdf"]} +{"year":"2019","title":"VizNet: Towards A Large-Scale Visualization Learning and Benchmarking Repository","authors":["K Hu, N Gaikwad, M Bakker, M Hulsebos, E Zgraggen…"],"snippet":"… Corpora The first category of corpora includes data tables harvested from the web. 
In particular, we use horizontal relational tables from the WebTables 2015 corpus [6], which extracts structured tables from the Common Crawl …","url":["https://hci.stanford.edu/~cagatay/projects/viznet/VizNet-CHI19-Submission.pdf"]} +{"year":"2019","title":"Wanca in Korp: Text corpora for underresourced Uralic languages","authors":["H Jauhiainen, T Jauhiainen, K Lindén - DATA AND HUMANITIES (RDHUM) 2019 …"],"snippet":"… In addition to conducting our own crawling, we also used the pre-crawled corpus distributed by the Common Crawl Foundation … 2 In addition to conducting our own crawling, we used the pre-crawled corpus distributed by the Common Crawl Foundation …","url":["https://researchportal.helsinki.fi/files/126205806/Proceedings_RDHum2019.pdf#page=23"]} +{"year":"2019","title":"WDC Product Data Corpus and Gold Standard for Large-Scale Product Matching-Version 2.0","authors":["R Peeters, A Primpeli, C Bizer"],"snippet":"… methods. The Web Data Commons project regularly extracts schema.org annotations from the Common Crawl, a large public web corpus. November 2017 version of the WDC schema.org data set contains 365 million offers …","url":["http://webdatacommons.org/largescaleproductcorpus/v2/"]} +{"year":"2019","title":"Weakly-Supervised Concept-based Adversarial Learning for Cross-lingual Word Embeddings","authors":["H Wang, J Henderson, P Merlo - arXiv preprint arXiv:1904.09446, 2019"],"snippet":"… (2018a)11, we use their pretrained CBOW embeddings of 300 dimensions. For English, Italian and German, the models are trained on the WacKy corpus. The Finnish model is trained from Common Crawl and the Spanish model is trained from WMT News Crawl …","url":["https://arxiv.org/pdf/1904.09446"]} +{"year":"2019","title":"Web Archive Analysis Using Hive and SparkSQL","authors":["X Wang, Z Xie - 2019 ACM/IEEE Joint Conference on Digital Libraries …, 2019"],"snippet":"… Keywords web archive, big data, distributed computation 1 Introduction Web preservation organizations such as Common Crawl or Internet Archive are common sources of web archive data … We use a data set from Common Crawl May 2018 collection …","url":["https://ieeexplore.ieee.org/abstract/document/8791112/"]} +{"year":"2019","title":"Web Engineering: 19th International Conference, ICWE 2019, Daejeon, South Korea, June 11–14, 2019, Proceedings","authors":["M Bakaev"],"snippet":"Page 1. Maxim Bakaev Flavius Frasincar In-Young Ko (Eds.) Web Engineering 19th International Conference, ICWE 2019 Daejeon, South Korea, June 11–14, 2019 Proceedings 123 Page 2. Lecture Notes in Computer Science …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=5R6VDwAAQBAJ&oi=fnd&pg=PR5&dq=commoncrawl&ots=X57GCPV1TC&sig=41aU_I70hr0H-D_h9MbSG1Ruryc"]} +{"year":"2019","title":"Web table integration and profiling for knowledge base augmentation","authors":["O Lehmberg - 2019"],"snippet":"Page 1. Web Table Integration and Profiling for Knowledge Base Augmentation. Inaugural dissertation for the attainment of the academic degree of Doctor of Natural Sciences at the Universität Mannheim …","url":["https://madoc.bib.uni-mannheim.de/52346/1/thesis.pdf"]} +{"year":"2019","title":"Web View: Measuring & Monitoring Representative Information on Websites","authors":["A Saverimoutou, B Mathieu, S Vaton - ICIN 2019-QOE-MANAGEMENT 2019, 2019"],"snippet":"… XRay [8] and AdFisher run automated personalization detection experiments and Common Crawl 7 uses an Apache Nutch based crawler … 4http://phantomjs.org/ 5https://www.seleniumhq.org/ 6https://github.com/ghostwords/chameleon …","url":["https://hal.archives-ouvertes.fr/hal-02072471/document"]} +{"year":"2019","title":"WebIsAGraph: A Very Large Hypernymy Graph from a Web Corpus","authors":["F Stefano, I Finocchi, SP Ponzetto, V Paola - Sixth Italian Conference on …, 2019","S Faralli, I Finocchi, SP Ponzetto, P Velardi - 2019"],"snippet":"… Abstract In this paper, we present WebIsAGraph, a very large hypernymy graph compiled from a dataset of is-a relationships extracted from the CommonCrawl … This is because, due to their large size, source input corpora …","url":["https://iris.luiss.it/handle/11385/192535","https://www.researchgate.net/profile/Stefano_Faralli2/publication/336899588_WebIsAGraph_A_Very_Large_Hypernymy_Graph_from_a_Web_Corpus/links/5db9a6c24585151435d5b691/WebIsAGraph-A-Very-Large-Hypernymy-Graph-from-a-Web-Corpus.pdf"]} +{"year":"2019","title":"What a neural language model tells us about spatial relations","authors":["M Ghanimifard, S Dobnik - Proceedings of the Combined Workshop on Spatial …, 2019"],"snippet":"… Finally, we also use pre-trained GloVe embeddings on the Common Crawl (CC) dataset with 42B tokens4 … On multi-word test suite the P-vectors perform slightly better. On both test suites, GloVe trained on Common Crawl performs …","url":["https://www.aclweb.org/anthology/W19-1608"]} +{"year":"2019","title":"What are Links in Linked Open Data? A Characterization and Evaluation of Links between Knowledge Graphs on the Web","authors":["A Haller, JD Fernández, MR Kamdar, A Polleres - Working Papers on Information …, 2019"],"snippet":"Page 1. What are Links in Linked Open Data? A Characterization and Evaluation of Links between Knowledge Graphs on the Web Armin Haller, Javier D. Fernández, Maulik R. Kamdar, Axel Polleres Arbeitspapiere zum Tätigkeitsfeld …","url":["http://epub.wu.ac.at/7193/1/20191002ePub_LOD_link_analysis.pdf"]} +{"year":"2019","title":"What does Neural Bring? Analysing Improvements in Morphosyntactic Annotation and Lemmatisation of Slovenian, Croatian and Serbian","authors":["N Ljubešić, K Dobrovoljc - Proceedings of the 7th Workshop on Balto-Slavic …, 2019"],"snippet":"… neural morphosyntactic taggers, we also experiment with various embeddings, mostly (1) the original CoNLL 2017 word2vec (w2v) embeddings for Slovenian and Croatian (Ginter et al., 2017) (there are none available for …","url":["https://www.aclweb.org/anthology/W19-3704"]} +{"year":"2019","title":"Who Needs Words? Lexicon-Free Speech Recognition","authors":["T Likhomanenko, G Synnaeve, R Collobert - arXiv preprint arXiv:1904.04479, 2019"],"snippet":"… char GCNN-20B no 6.4 2.7 3.6 1.5 4https://github.com/facebookresearch/wav2letter 5Speaker adaptation; pronunciation lexicon 612k hours AM train set and common crawl LM 7Speaker adaptation; 3k acoustic states 8Data augmentation; n-gram LM …","url":["https://arxiv.org/pdf/1904.04479"]} +{"year":"2019","title":"WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia","authors":["H Schwenk, V Chaudhary, S Sun, H Gong, F Guzmán - arXiv preprint arXiv …, 2019"],"snippet":"… recall. In this work, we chose the global mining option. This will allow us to scale the same approach to other, potentially huge, corpora for which document-level alignments are not easily available, e.g. Common Crawl. 
An …","url":["https://arxiv.org/pdf/1907.05791"]} +{"year":"2019","title":"WINOGRANDE: An Adversarial Winograd Schema Challenge at Scale","authors":["K Sakaguchi, RL Bras, C Bhagavatula, Y Choi - arXiv preprint arXiv:1907.10641, 2019"],"snippet":"… Ensemble Neural LMs Trinh and Le (2018) is one of the first attempts to apply a neural language model which is pre-trained on a very large corpora (including LM-1-Billion, CommonCrawl, SQuAD, and Gutenberg Books). In …","url":["https://arxiv.org/pdf/1907.10641"]} +{"year":"2019","title":"Word Embedding Based Extension of Text Categorization Topic Taxonomies","authors":["T Eljasik-Swoboda, F Engel, M Kaufmann, M Hemmje"],"snippet":"… ArgumenText is a practical implementation of an AM engine (Stab et al., 2018). It employs a two-step mechanism in which a large collection of documents (http://commoncrawl.org/, in Stab et al.'s experiment with 683 …","url":["http://ceur-ws.org/Vol-2348/paper01.pdf"]} +{"year":"2019","title":"Word Embedding Models for Query Expansion in Answer Passage Retrieval","authors":["N Roy"],"snippet":"Page 1. MASTER'S THESIS Word Embedding Models for Query Expansion in Answer Passage Retrieval NIRMAL ROY. Word Embedding Models for Query Expansion in Answer Passage Retrieval THESIS submitted …","url":["https://pdfs.semanticscholar.org/f436/c49151fd8d00c59655a939bbbd552f1577c4.pdf"]} +{"year":"2019","title":"Word Embedding Visualization Via Dictionary Learning","authors":["J Zhang, Y Chen, B Cheung, BA Olshausen - arXiv preprint arXiv:1910.03833, 2019"],"snippet":"… similar. For simplicity, we show the results for the 300 dimensional GloVe word vectors[30] pretrained on CommonCrawl [2]. We shall discuss the difference across different embedding models at the end in this section. Once …","url":["https://arxiv.org/pdf/1910.03833"]} +{"year":"2019","title":"Word Embeddings (Also) Encode Human Personality Stereotypes","authors":["O Agarwal, F Durupınar, NI Badler, A Nenkova - … of the Eighth Joint Conference on …, 2019"],"snippet":"… or profession. We experimented with GloVe representations (Pennington et al., 2014) trained on Common crawl (6B tokens, 400K vocab, 300d) and symmetric pattern (SP) based representations (Schwartz et al., 2015). We …","url":["https://www.aclweb.org/anthology/S19-1023"]} +{"year":"2019","title":"Word Embeddings and Gender Stereotypes in Swedish and English","authors":["R Précenth - 2019"],"snippet":"Page 1. UUDM Project Report 2019:15. Degree project in mathematics, 30 credits. Supervisor: David Sumpter. Examiner: Denis Gaidashev. May 2019. Department of Mathematics Uppsala University. Word Embeddings and Gender Stereotypes in Swedish and English …","url":["https://uu.diva-portal.org/smash/get/diva2:1313459/FULLTEXT01.pdf"]} +{"year":"2019","title":"Word Embeddings for Fine-Grained Sentiment Analysis","authors":["D Bacon, R Dalal, MRD Kodandarama, MR Hari…"],"snippet":"… Lastly, we considered the word embedding sub-model. We used the GloVe word vectoring [11] trained on Common Crawl [https://commoncrawl.org/] as implemented by spaCy [7]. 
This resulted in a vector-dimension of 300 for each word …","url":["https://divatekodand.github.io/files/word_embeddings.pdf"]} +{"year":"2019","title":"Word Embeddings for Sentiment Analysis: A Comprehensive Empirical Survey","authors":["E Çano, M Morisio - arXiv preprint arXiv:1902.00753, 2019"],"snippet":"… This bundle contains data of Common Crawl (http: //commoncrawl.org/), a nonprofit organization that builds and maintains free and public text sets by crawling the Web. CommonCrawl42 is a highly reduced version easier and faster to work with …","url":["https://arxiv.org/pdf/1902.00753"]} +{"year":"2019","title":"Word Embeddings for the Armenian Language: Intrinsic and Extrinsic Evaluation","authors":["K Avetisyan, T Ghukasyan - arXiv preprint arXiv:1906.03134, 2019"],"snippet":"… A year later, Facebook released another batch of fastText embeddings, trained on Common Crawl and Wikipedia [2]. Other publicly available embeddings include 4 … these embeddings were trained on Wikipedia and Common Crawl, using CBOW architecture with …","url":["https://arxiv.org/pdf/1906.03134"]} +{"year":"2019","title":"Word Embeddings in Low Resource Gujarati Language","authors":["I Joshi, P Koringa, S Mitra - 2019 International Conference on Document Analysis …, 2019"],"snippet":"… (2014) released GloVe models trained on Wikipedia, Gigaword and Common Crawl (840B tokens). A notable effort is the work of Al-Rfou et al … Word embeddings for Gujarati language were released as a part of …","url":["https://ieeexplore.ieee.org/abstract/document/8893052/"]} +{"year":"2019","title":"Word Similarity Datasets for Thai: Construction and Evaluation","authors":["P Netisopakul, G Wohlgenannt, A Pulich - arXiv preprint arXiv:1904.04307, 2019"],"snippet":"… The models are trained on Common Crawl and Wikipedia corpora using fastText [13], regarding settings they report the us- age of the CBOW algorithm, 300 dimensions, a window size of 5 and 10 negatives. The model is large and contains 2M vectors …","url":["https://arxiv.org/pdf/1904.04307"]} +{"year":"2019","title":"Word Usage Similarity Estimation with Sentence Representations and Automatic Substitutes","authors":["AG Soler, M Apidianaki, A Allauzen - arXiv preprint arXiv:1905.08377, 2019"],"snippet":"… al., 2014). We use 300-dimensional GloVe embeddings pre-trained on Common Crawl (840B tokens).5 The representation of a sentence is obtained by averaging the GloVe embeddings of the words in the sentence. SIF (Smooth …","url":["https://arxiv.org/pdf/1905.08377"]} +{"year":"2019","title":"Word-embedding data as an alternative to questionnaires for measuring the affective meaning of concepts","authors":["A van Loon, J Freese - 2019"],"snippet":"… Here we include information from both algorithms. The GloVe embeddings we use have been trained on text obtained from Wikipedia, Twitter, and Common Crawl. The Word2vec embeddings we use are trained on the Google News Corpus …","url":["https://osf.io/preprints/socarxiv/r7ewx/download"]} +{"year":"2019","title":"Word-Embeddings and Grammar Features to Detect Language Disorders in Alzheimer's Disease Patients","authors":["JS Guerrero-Cristancho, JC Vásquez-Correa… - TecnoLógicas, 2020"],"snippet":"… occurrence in a document [13]. Said authors considered a pre-trained model with the Common Crawl dataset, whose vocabulary size exceeds the 2 million and contains 840 billion words. 
A logistic regression classifier and …","url":["https://revistas.itm.edu.co/index.php/tecnologicas/article/download/1387/1456"]} +{"year":"2019","title":"WTMED at MEDIQA 2019: A Hybrid Approach to Biomedical Natural Language Inference","authors":["Z Wu, Y Song, S Huang, Y Tian, F Xia - Proceedings of the 18th BioNLP Workshop …, 2019"],"snippet":"Page 1. Proceedings of the BioNLP 2019 workshop, pages 415–426 Florence, Italy, August 1, 2019. c 2019 Association for Computational Linguistics 415 WTMED at MEDIQA 2019: A Hybrid Approach to Biomedical Natural Language Inference …","url":["https://www.aclweb.org/anthology/W19-5044"]} +{"year":"2019","title":"X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension","authors":["M Abdou, C Sas, R Aralikatte, I Augenstein, A Søgaard - arXiv preprint arXiv …, 2019"],"snippet":"… All monolingual models' word embeddings were initialised using fastText embeddings trained on each language's Wikipedia and common crawl corpora,7 except for the comparison experiments described in sub-section …","url":["https://arxiv.org/pdf/1908.05111"]} +{"year":"2019","title":"XLNet: Generalized Autoregressive Pretraining for Language Understanding","authors":["Z Yang, Z Dai, Y Yang, J Carbonell, R Salakhutdinov… - arXiv preprint arXiv …, 2019"],"snippet":"Page 1. XLNet: Generalized Autoregressive Pretraining for Language Understanding Zhilin Yang∗1, Zihang Dai∗12, Yiming Yang1, Jaime Carbonell1, Ruslan Salakhutdinov1, Quoc V. Le2 1Carnegie Mellon University, 2Google …","url":["https://arxiv.org/pdf/1906.08237"]} +{"year":"2019","title":"YNU Wb at HASOC 2019: Ordered Neurons LSTM with Attention for Identifying Hate Speech and Offensive Language","authors":["B Wang, SL Yunxia Ding, X Zhou - Proceedings of the 11th annual meeting of the …, 2019"],"snippet":"… And the pre-training word vector we used is fastText, which is provided by Mikolov et al. [7]. It is a 2 million word vector trained using subword information on Common Crawl with 600B tokens, and its dimension is 300. 4.3 Result …","url":["http://ceur-ws.org/Vol-2517/T3-2.pdf"]} +{"year":"2019","title":"YNUWB at SemEval-2019 Task 6: K-max pooling CNN with average meta-embedding for identifying offensive language","authors":["B Wang, X Zhou, X Zhang - Proceedings of the 13th International Workshop on …, 2019"],"snippet":"… FastText is provided by Mikolov et al. (Mikolov et al., 2018), it is a 2 million word vector trained using subword information on Common Crawl with 600B tokens, and its dimension is 300. Glove is provided by Jeffrey Pennington et al …","url":["https://www.aclweb.org/anthology/S19-2143"]} +{"year":"2019","title":"Zastosowania metody rzutu przypadkowego w głębokich sieciach neuronowych [Applications of the random projection method in deep neural networks]","authors":["PI Wójcik"],"snippet":"Page 1. AGH University of Science and Technology in Krakow, Faculty of Computer Science, Electronics and Telecommunications, Department of Computer Science. Doctoral dissertation: Applications of the random projection method in deep neural networks …","url":["http://www.doktoraty.iet.agh.edu.pl/_media/2018:pwojcik:phd.pdf"]} +{"year":"2019","title":"Zero-Resource Cross-Lingual Named Entity Recognition","authors":["MS Bari, S Joty, P Jwalapuram - arXiv preprint arXiv:1911.09812, 2019"],"snippet":"… We use FastText embeddings (Grave et al. 2018), which are trained on Common Crawl and Wikipedia, and SGD with a gradient clipping of 5.0 to train the model. 
We found that the learning rate was crucial for training, and …","url":["https://arxiv.org/pdf/1911.09812"]} +{"year":"2019","title":"Zero-Resource Neural Machine Translation with Monolingual Pivot Data","authors":["A Currey, K Heafield"],"snippet":"… We use all available parallel corpora for EN↔DE (Europarl v7, Common Crawl, and News Commentary v11) and for EN↔RU (Common Crawl, News Commentary v11, Yandex Corpus, and Wiki Headlines) to train the initial …","url":["https://kheafield.com/papers/edinburgh/pivot.pdf"]} +{"year":"2019","title":"Zero-shot Learning and Knowledge Transfer in Music Classification and Tagging","authors":["J Choi, J Lee, J Park, J Nam - arXiv preprint arXiv:1906.08615, 2019"],"snippet":"… We utilized a pretrained GloVe model available online. It contains 19 million vocabularies with 300 dimensional embedding trained from documents in Common Crawl data. We then evaluated the model on MTAT and GTZAN …","url":["https://arxiv.org/pdf/1906.08615"]} +{"year":"2019","title":"Zero-Shot Question Classification Using Synthetic Samples","authors":["H Fu, C Yuan, X Wang, Z Sang, S Hu, Y Shi - 2018 5th IEEE International Conference …, 2019"],"snippet":"… The detailed data set is listed in Table 1. All experiments follow the principle of counterpart parameters. The Chinese and English word vectors are pre-trained using Glove respectively on Samsung and Common Crawl corpus. The word dimension is 300 …","url":["https://ieeexplore.ieee.org/abstract/document/8691209/"]} +{"year":"2019","title":"Zero-Shot Semantic Segmentation via Variational Mapping","authors":["N Kato, T Yamasaki, K Aizawa - Proceedings of the IEEE International Conference on …, 2019"],"snippet":"… Dataset Unseen classes PASCAL-50 aeroplane, bicycle, bird, boat, bottle PASCAL-51 bus, car, cat, chair, cow PASCAL-52 diningtable, dog, horse, motorbike, person PASCAL-53 potted plant, sheep, sofa, train, tv/monitor …","url":["http://openaccess.thecvf.com/content_ICCVW_2019/papers/MDALC/Kato_Zero-Shot_Semantic_Segmentation_via_Variational_Mapping_ICCVW_2019_paper.pdf"]}