diff --git "a/2018.jsonl" "b/2018.jsonl"
new file mode 100644
--- /dev/null
+++ "b/2018.jsonl"
@@ -0,0 +1,533 @@
+{"year":"2018","title":"“I think it might help if we multiply, and not add”: Detecting Indirectness in Conversation","authors":["P Goel, Y Matsuyama, M Madaio, J Cassell"],"snippet":"… We tried various available pre-trained models like Twitter word2vec [14] trained on 400 million Twitter tweets, GloVe representations [34] trained on Wikipedia articles (called GloVe wiki) and web content crawled via …","url":["http://articulab.hcii.cs.cmu.edu/wordpress/wp-content/uploads/2018/04/Goel-IWSDS2018_camera-ready_13Mar.pdf"]} +{"year":"2018","title":"A Bi-Encoder LSTM Model for Learning Unstructured Dialogs","authors":["D Shekhar - 2018"],"snippet":"Page 1. University of Denver Digital Commons @ DU Electronic Theses and Dissertations Graduate Studies 8-1-2018 A Bi-Encoder LSTM Model for Learning Unstructured Dialogs Diwanshu Shekhar University of Denver Follow …","url":["https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=2508&context=etd"]} +{"year":"2018","title":"A Bidirectional LSTM-CRF Network with Subword Representations, Character Convolutions and Morphosyntactic Features for Named Entity Recognition in Polish","authors":["M Piotrowski, W Janowski, P Pezik - Proceedings of the PolEval 2018 Workshop"],"snippet":"… Chiu and Nichols 2015, Ma and Hovy 2016). Secondly, the input token sequences are matched against 300-dimensional FastText subword embeddings vectors derived from a Common Crawl dump of Polish texts (Grave et al …","url":["http://poleval.pl/files/poleval2018.pdf#page=93"]} +{"year":"2018","title":"A Case Study of Closed-Domain Response Suggestion with Limited Training Data","authors":["L Galke, G Gerstenkorn, A Scherp"],"snippet":"… and return the known answer to this context. To compute the word vector centroids, we use German fastText [16] word vectors trained on CommonCrawl and Wikipedia6 [17].
For the experiment on English utterances, we use …","url":["http://www.lpag.de/p/response_suggestion.pdf"]} +{"year":"2018","title":"A Cognitive Assistant for Risk Identification and Modeling","authors":["D Subramanian, D Bhattachrajya, RR Torrado…"],"snippet":"… politics/government or alternative ways to describe the same risk. Specifically, we use Glove [17] trained on a Common Crawl corpus that has 1.9 million vocabulary words embedded in a 300-dimensional vector space. Table II …","url":["http://ieeexplore.ieee.org/iel7/8241556/8257893/08258091.pdf"]} +{"year":"2018","title":"A Comparative Analysis of Content-based Geolocation in Blogs and Tweets","authors":["K Pappas, M Azab, R Mihalcea - arXiv preprint arXiv:1811.07497, 2018"],"snippet":"… Page 13. We also experiment with a word embedding representation, where we use word vectors obtained with GloVe [47] trained on a Common Crawl and a Twitter dataset respectively, which are added up and averaged to create a word …","url":["https://arxiv.org/pdf/1811.07497"]} +{"year":"2018","title":"A Comparative Study of Embedding Models in Predicting the Compositionality of Multiword Expressions","authors":["N Nandakumar, B Salehi, T Baldwin - Australasian Language Technology Association …"],"snippet":"… The two character-level embedding models we experiment with are fastText (Bojanowski et al., 2017) and ELMo (Peters et al., 2018), as detailed below. fastText We used the 300-dimensional model pre-trained on Common Crawl and Wikipedia us- ing CBOW …","url":["http://alta2018.alta.asn.au/alta2018-draft-proceedings.pdf#page=81"]} +{"year":"2018","title":"A Comparison of Machine Translation Paradigms for Use in Black-Box Fuzzy-Match Repair","authors":["R Knowles, JE Ortega, P Koehn"],"snippet":"… We report scores with a beam size of 12. 
4.3 Phrase-Based SMT (Moses) We use Moses (Koehn et al., 2007) to train our phrasebased statistical MT (SMT) system using the same parallel text as the NMT model, with the addition of Common Crawl,10 for phrase extraction …","url":["https://www.researchgate.net/profile/John_Ortega3/publication/324007613_A_Comparison_of_Machine_Translation_Paradigms_for_Use_in_Black-Box_Fuzzy-Match_Repair/links/5ab8cd010f7e9b68ef51fa23/A-Comparison-of-Machine-Translation-Paradigms-for-Use-in-Black-Box-Fuzzy-Match-Repair.pdf"]} +{"year":"2018","title":"A Comprehensive Course on Big Data for Undergraduate Students","authors":["JA Shamsi, SZ ul Hassan, N Bawany, N Shoaib"],"snippet":"… ' TABLE V PROJECTS S. No Project Title Big Data Analytics 1 Performance Analysis on EspnCricinfo data 2 YouTube Data Analysis Using Hadoop 3 Analysis of emails in Amazon Common Crawl Dataset using MapReduce …","url":["https://grid.cs.gsu.edu/~tcpp/curriculum/sites/default/files/paper%201_0.pdf"]} +{"year":"2018","title":"A course on big data analytics","authors":["J Eckroth - Journal of Parallel and Distributed Computing, 2018"],"snippet":"This report details a course on big data analytics designed for undergraduate junior and senior computer science students. 
The course is heavily focused on proj.","url":["https://www.sciencedirect.com/science/article/pii/S0743731518300972"]} +{"year":"2018","title":"A database of German definitory contexts from selected web sources.","authors":["A Barbaresi, L Lemnitzer, A Geyken - LREC, 2018"],"snippet":"… specific datasets, eg instructional texts compiled for teaching purposes (Borg, 2009); large encyclopedic re- sources such as Wikipedia (Kovár et al., 2016); or generalpurpose web corpora (Navigli et al., 2010); unfiltered …","url":["https://hal.archives-ouvertes.fr/hal-01798704/document"]} +{"year":"2018","title":"A Dataset and Reranking Method for Multimodal MT of User-Generated Image Captions","authors":["S Schamoni, J Hitschler, S Riezler"],"snippet":"… pairs: French-English We trained our French-English translation system on data made available for the WMT 2015 translation shared task.7 We used the Europarl, News Commentary and Common Crawl data for training. Russian …","url":["http://www.cl.uni-heidelberg.de/statnlpgroup/papers/AMTA2018a.pdf"]} +{"year":"2018","title":"A Dataset for Web-scale Knowledge Base Population","authors":["M Glass, A Gliozzo"],"snippet":"… Obtain the list of WARC paths from the Common Crawl website – Download the WARC files hosted … 8 http://commoncrawl.org 9 https://github.com/optimaize/language-detector Page 10. 4.2 DBpedia DBpedia [1] is a mature knowledge base built from the infoboxes of Wikipedia …","url":["https://2018.eswc-conferences.org/wp-content/uploads/2018/02/ESWC2018_paper_173.pdf"]} +{"year":"2018","title":"A Deep Learning Approach for Extracting Attributes of ABAC Policies","authors":["M Alohaly, H Takabi, E Blanco - Proceedings of the 23nd ACM on Symposium on …, 2018"],"snippet":"… IBM 2004. Course Registration Requirements. (2004). 3. 2017. Common Crawl. (September 2017). http://commoncrawl.org/. 4. Ryma Abassi, Michael Rusinowitch, Florent Jacquemard, and Sihem Guemara El Fatmi. 2010. 
XML …","url":["https://dl.acm.org/citation.cfm?id=3205984"]} +{"year":"2018","title":"A Deep Learning Architecture for De-identification of Patient Notes: Implementation and Evaluation","authors":["K Khin, P Burckhardt, R Padman - arXiv preprint arXiv:1810.01570, 2018"],"snippet":"… embeddings. The traditional word embeddings use the latest GloVe 3 [13] pre-trained word vectors that were trained on the Common Crawl with about 840 billion tokens. For every token input, ti,j,k, the GloVe 6 Page 7. system …","url":["https://arxiv.org/pdf/1810.01570"]} +{"year":"2018","title":"A Discriminative Latent-Variable Model for Bilingual Lexicon Induction","authors":["S Ruder, R Cotterell, Y Kementchedjhieva, A Søgaard - arXiv preprint arXiv …, 2018"],"snippet":"Page 1. A Discriminative Latent-Variable Model for Bilingual Lexicon Induction Sebastian Ruder @,H∗ Ryan Cotterell S,P∗ Yova Kementchedjhieva Z Anders Søgaard Z @ Insight Research Centre, National University of Ireland …","url":["https://arxiv.org/pdf/1808.09334"]} +{"year":"2018","title":"A Document Descriptor using Covariance of Word Vectors","authors":["M Torki"],"snippet":"… We tested the available different dimensionality 100, 200 and 300. We also used the 300 dimensions GloVe model that used commoncrawl with 42 Billion tokens We call the last one Lrg. 
This model provides word vectors of 300 dimensions for each word …","url":["https://www.researchgate.net/profile/Marwan_Torki/publication/325678643_A_Document_Descriptor_using_Covariance_of_Word_Vectors/links/5b1d9b5345851587f29f58a2/A-Document-Descriptor-using-Covariance-of-Word-Vectors.pdf"]} +{"year":"2018","title":"A Domain is only as Good as its Buddies: Detecting Stealthy Malicious Domains via Graph Inference","authors":["IM Khalil, B Guan, M Nabeel, T Yu - 2018"],"snippet":"… Thales scans DNS using a set of seed domain list compiled from multiple sources, including public blacklists (eg [19, 40, 47]), the Alexa list [13], the Common Crawl dataset [3], the domain feed from an undisclosed security","url":["https://www.researchgate.net/profile/Issa_Khalil/publication/323393061_A_Domain_is_only_as_Good_as_its_Buddies_Detecting_Stealthy_Malicious_Domains_via_Graph_Inference/links/5a93a507a6fdccecff05aa6f/A-Domain-is-only-as-Good-as-its-Buddies-Detecting-Stealthy-Malicious-Domains-via-Graph-Inference.pdf"]} +{"year":"2018","title":"A flexible space-time tradeoff on hybrid index with bicriteria optimization","authors":["X Song, Y Yang, Y Jiang - Tsinghua Science and Technology, 2019"],"snippet":"… sorting. Page 3. 108 Tsinghua Science and Technology, February 2019, 24(1): 106–122 (4) We execute a detailed experimental evaluation on two realistic datasets, GOV2 and Common Crawl, with AOL query log. Analysis shows …","url":["https://ieeexplore.ieee.org/iel7/5971803/8526498/08526502.pdf"]} +{"year":"2018","title":"A Framework to Discover Significant Product Aspects from E-commerce Product Reviews","authors":["S Indrakanti, G Singh - 2018"],"snippet":"… ranking. 3.4.3 Synonym-based Clustering. We use 300-dimensional word embeddings for one million vocabulary entries trained on the Common Crawl corpus using the GloVe algorithm [18] to compute word similarities. 
Synonym …","url":["https://sigir-ecom.github.io/ecom18Papers/paper19.pdf"]} +{"year":"2018","title":"A Genetic Algorithm for Combining Visual and Textual Embeddings Evaluated on Attribute Recognition","authors":["R Li, G Collell, MF Moens"],"snippet":"… Following Collell and Moens (2016), we employ 300dimensional GloVe vectors (Pennington et al., 2014) trained on the largest available corpus (840B tokens and a 2.2M words vocabulary from Common Crawl corpus) …","url":["http://ceur-ws.org/Vol-2226/paper7.pdf"]} +{"year":"2018","title":"A Geo-Tagging Framework for Address Extraction from Web Pages","authors":["J Efremova, I Endres, I Vidas, O Melnik - Industrial Conference on Data Mining, 2018"],"snippet":"… Common Crawl is a public corpus, mostly stored on Amazon Web Services 3 . A subset of the CommonCrawl dataset has schema information in the microdata format with manual annotation of the main content such as …","url":["https://link.springer.com/chapter/10.1007/978-3-319-95786-9_22"]} +{"year":"2018","title":"A Graph-to-Sequence Model for AMR-to-Text Generation","authors":["L Song, Y Zhang, Z Wang, D Gildea - arXiv preprint arXiv:1805.02473, 2018"],"snippet":"… 5.2 Settings We extract a vocabulary from the training set, which is shared by both the encoder and the de- coder. The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on …","url":["https://arxiv.org/pdf/1805.02473"]} +{"year":"2018","title":"A Helping Hand: Transfer Learning for Deep Sentiment Analysis","authors":["X Dong, G Melo - Proceedings of the 56th Annual Meeting of the …, 2018"],"snippet":"… en es de ru α 0.0004 0.0008 0.003 0.003 cs it ja α 0.003 0.003 0.003 Common Crawl data1, while for other languages, we rely on the Facebook fastText Wikipedia em- beddings (Bojanowski et al., 2016) as input representations. 
All of these are 300-dimensional …","url":["http://www.aclweb.org/anthology/P18-1235"]} +{"year":"2018","title":"A Hybrid Deep Learning System for Machine Comprehension","authors":["G Wu"],"snippet":"… and therefore can suffer from the out-of-vocabulary words. Thus, I used glove840B300d Common Crawl as the word embedding database during my training. e Apply dropout and regularization: dropout and regularization help …","url":["http://web.stanford.edu/class/cs224n/reports/6879317.pdf"]} +{"year":"2018","title":"A Machine Learning Approach to Correlate Emotional Intelligence and Happiness Based on Twitter Data","authors":["SS Shravani, NK Jha, R Guha - 2018"],"snippet":"… The corpus in this case was the common crawl dataset [Mikolov, 2017] and the word vectors were obtained using fast text algorithm of Facebook AI research (FAIR) which gave us around 2 million word vectors. The data set …","url":["http://hci2018.bcs.org/prelim_proceedings/papers/Work-in-Progress%20Track/BHCI-2018_paper_115.pdf"]} +{"year":"2018","title":"A Multi-layer LSTM-based Approach for Robot Command Interaction Modeling","authors":["M Mensio, E Bastianelli, I Tiddi, G Rizzo - arXiv preprint arXiv:1811.05242, 2018"],"snippet":"… for word representation,” in Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 2014, pp. 1532–1543. [23] (2012) Common crawl. [Online]. Available: http://commoncrawl.org/","url":["https://arxiv.org/pdf/1811.05242"]} +{"year":"2018","title":"A Multi-task Ensemble Framework for Emotion, Sentiment and Intensity Prediction","authors":["MS Akhtar, D Ghosal, A Ekbal, P Bhattacharyya - arXiv preprint arXiv:1808.01216, 2018"],"snippet":"… Figure 1: Proposed Multi-task framework. 
3.1 Deep Learning Models We employ the architecture of Figure 1a to train and tune all the deep learning models using pretrained GloVe (common crawl 840 billion) word embeddings (Pennington et al., 2014) …","url":["https://arxiv.org/pdf/1808.01216"]} +{"year":"2018","title":"A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction","authors":["S Chollampatt, HT Ng - arXiv preprint arXiv:1801.08831, 2018"],"snippet":"… Grundkiewicz 2016) employs a word-level SMT approach with task-specific features and a web-scale LM trained on the Common Crawl corpus … all subsidiary models that make use of additional English corpora such as the word-class LM and the web-scale Common Crawl LM …","url":["https://arxiv.org/pdf/1801.08831"]} +{"year":"2018","title":"A Multimodal Assessment Framework for Integrating Student Writing and Drawing in Elementary Science Learning","authors":["PAM Smith, S Leeman-Munk, A Shelton, BW Mott… - IEEE Transactions on …, 2018"],"snippet":"Page 1. 1939-1382 (c) 2018 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. 
This …","url":["http://ieeexplore.ieee.org/abstract/document/8274912/"]} +{"year":"2018","title":"A novel approach for phishing emails real time classification using k-means algorithm","authors":["V Mhaske-Dhamdhere, S Vanjale - International Journal of Engineering & …, 2017"],"snippet":"… to get to the two techniques utilizes RF (Random Forest) and LSTM (a long/here and now memory mastermind on datasets phish tank and Common Crawl, which gives result as precision rate of 93.5% and 98.7% .RF and LSTM utilizes 14 highlights of lexical and quantifiable …","url":["https://forum.sciencepubco.com/index.php/ijet/article/download/9018/3069"]} +{"year":"2018","title":"A novel approach for phishing emails real time classification using k-means algorithm","authors":["V Mhaske-Dhamdhere, S Vanjale - International Journal of Engineering & …, 2018"],"snippet":"… to get to the two techniques utilizes RF (Random Forest) and LSTM (a long/here and now memory mastermind on datasets phish tank and Common Crawl, which gives result as precision rate of 93.5% and 98.7% .RF and …","url":["http://bvucoepune.edu.in/wp-content/uploads/2018/BVUCOEP-DATA/Research_Publications/2017_18/168.pdf"]} +{"year":"2018","title":"A Pragmatic Guide to Geoparsing Evaluation","authors":["M Gritta, MT Pilehvar, N Collier - arXiv preprint arXiv:1810.12368, 2018"],"snippet":"… It 11 https://cloud.google.com/natural-language/ 12 https://spacy.io/usage/linguistic-features 13 https://github.com/jiesutd/NCRFpp 14 Common Crawl 42B - https://nlp.stanford.edu/ projects/glove/ Page 17.
A Pragmatic Guide to Geoparsing Evaluation 17 …","url":["https://arxiv.org/pdf/1810.12368"]} +{"year":"2018","title":"A QUANTITATIVE ANALYSIS OF THE USE OF MICRODATA FOR SEMANTIC ANNOTATIONS ON EDUCATIONAL RESOURCES","authors":["R NAVARRETE, S LUJÁN-MORA - Journal of Web Engineering, 2018"],"snippet":"… This quantitative analysis was conducted on datasets extracted from the Common Crawl Corpus [17], as it is the largest corpus of web crawl … The CCF periodically releases the web corpus results from each crawling process for public use. This is the Common Crawl Corpus …","url":["http://www.rintonpress.com/xjwe17/jwe-17-12/045-072.pdf"]} +{"year":"2018","title":"A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings","authors":["M Artetxe, G Labaka, E Agirre - arXiv preprint arXiv:1805.06297, 2018"],"snippet":"… EnglishSpanish. More concretely, the dataset consists of 300-dimensional CBOW embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finnish) and WMT News Crawl (Spanish). The …","url":["https://arxiv.org/pdf/1805.06297"]} +{"year":"2018","title":"A Simple Machine Learning Method for Commonsense Reasoning? A Short Commentary on Trinh & Le (2018)","authors":["WS Saba - arXiv preprint arXiv:1810.00521, 2018"],"snippet":"… the suitcase”. T&L then compute, against the backdrop of training on a large corpus (trained on the language model LM-1-Billion, CommonCrawl, and SQuAD), the probabilities of s1 and s2 appearing in a large corpus. 
The …","url":["https://arxiv.org/pdf/1810.00521"]} +{"year":"2018","title":"A Simple Method for Commonsense Reasoning","authors":["TH Trinh, QV Le - arXiv preprint arXiv:1806.02847, 2018"],"snippet":"… We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training …","url":["https://arxiv.org/pdf/1806.02847"]} +{"year":"2018","title":"A Social Robot System for Modeling Children's Word Pronunciation: Socially Interactive Agents Track","authors":["S Spaulding, H Chen, S Ali, M Kulinski, C Breazeal - Proceedings of the 17th …, 2018"],"snippet":"… 3) The semantic distance between two words is the cosine distance between their word-embedding vector representation. We used word-embedding vectors from the pre-trained Common Crawl GloVe vectors [27], included with the SpaCy Python package …","url":["https://dl.acm.org/citation.cfm?id=3237946"]} +{"year":"2018","title":"A survey on optimal utilization of preemptible VM instances in cloud computing","authors":["AK Mishra, BK Umrao, DK Yadav - The Journal of Supercomputing, 2018"],"snippet":"Page 1. The Journal of Supercomputing https://doi.org/10.1007/s11227-018-2509-0 A survey on optimal utilization of preemptible VM instances in cloud computing Ashish Kumar Mishra1 · Brajesh Kumar Umrao1 · Dharmendra K. Yadav1 …","url":["https://link.springer.com/article/10.1007/s11227-018-2509-0"]} +{"year":"2018","title":"A Transfer Learning Based Hierarchical Attention Neural Network for Sentiment Classification","authors":["Z Qu, Y Wang, X Wang, S Zheng - International Conference on Data Mining and Big …, 2018"],"snippet":"… The large one consists of 209772 sentences pairs. While training the LSTM encoder, we obtain the fixed English word vectors using the CommonCrawl-840B GloVe model. The size of hidden units is 300 in the LSTM. 
We use SGD for training …","url":["https://link.springer.com/chapter/10.1007/978-3-319-93803-5_36"]} +{"year":"2018","title":"A Vector Worth a Thousand Counts-A Temporal Semantic Similarity Approach to Patent Impact Prediction","authors":["DS Hain, R Jurowetzki, T Buchmann, P Wolf - 2018"],"snippet":"… task convolutional neural network model trained on OntoNotes, with GloVe vectors (685k unique vectors with 300 dimensions) trained on Common Crawl. Given a patent abstract, spaCy predicts the meaning of each term in the document …","url":["http://conference.druid.dk/acc_papers/tgct2l45t4lahofdzb2w9uvbpan4hq.pdf"]} +{"year":"2018","title":"Absolute Orientation for Word Embedding Alignment","authors":["S Dev, S Hassan, JM Phillips - arXiv preprint arXiv:1806.01330, 2018"],"snippet":"… For instance, some groups have built word vector embeddings for enormous datasets (eg, GloVe embedding using 840 billion tokens from Common Crawl, or the word2vec embedding using 100 billion tokens of Google News), which …","url":["https://arxiv.org/pdf/1806.01330"]} +{"year":"2018","title":"Abstractive Summarization of Reddit Posts with Multi-level Memory Networks","authors":["B Kim, H Kim, G Kim - arXiv preprint arXiv:1811.00783, 2018"],"snippet":"… 4.1 Text Embedding Online posts include lots of morphologically similar words, which should be closely embedded. Thus, we use the fastText (Bojanowski et al., 2016) trained on the Common Crawl corpus, to initialize the word embedding matrix Wemb …","url":["https://arxiv.org/pdf/1811.00783"]} +{"year":"2018","title":"Abstractive Summarization Using Attentive Neural Techniques","authors":["J Krantz, J Kalita - arXiv preprint arXiv:1810.08838, 2018"],"snippet":"… Page 5. similarity sub-score uses GloVe embeddings2 pretrained on Common Crawl while the dissimilarity sub-score uses Word2Vec3 trained on the Google News dataset. 
Using different word embeddings provides …","url":["https://arxiv.org/pdf/1810.08838"]} +{"year":"2018","title":"Accurate semantic textual similarity for cleaning noisy parallel corpora using semantic machine translation evaluation metric: The NRC supervised submissions to the …","authors":["C Lo, M Simard, D Stewart, S Larkin, C Goutte, P Littell - Proceedings of the Third …, 2018"],"snippet":"… 2011, 2012; Bojar et al., 2013) as the basis of our development set, as we believe that all the test sets in the previous years are clean and highly parallel, as opposed to the “clean” training data where glitches may occur (especially …","url":["http://www.aclweb.org/anthology/W18-6481"]} +{"year":"2018","title":"Achieving Human Parity on Automatic Chinese to English News Translation","authors":["H Hassan, A Aue, C Chen, V Chowdhary, J Clark… - arXiv preprint arXiv …, 2018"],"snippet":"Page 1. Achieving Human Parity on Automatic Chinese to English News Translation Hany Hassan∗, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt …","url":["https://arxiv.org/pdf/1803.05567"]} +{"year":"2018","title":"Adapted TextRank for Term Extraction: A Generic Method of Improving Automatic Term Extraction Algorithms","authors":["Z Zhang, J Petrak, D Maynard"],"snippet":"… cosine similarity function between two vectors. For the word embeddings, we use the GloVe embeddings pre-trained on the general-purpose Common Crawl data2, but others could be investigated. 
For multi-word expressions, we …","url":["https://www.researchgate.net/profile/Ziqi_Zhang7/publication/326904600_A_Generic_Method_of_Improving_Automatic_Term_Extraction_Algorithms/links/5b6b3b73a6fdcc87df6da54b/A-Generic-Method-of-Improving-Automatic-Term-Extraction-Algorithms.pdf"]} +{"year":"2018","title":"Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification","authors":["R He, WS Lee, HT Ng, D Dahlmeier - arXiv preprint arXiv:1809.00530, 2018"],"snippet":"… 4.4 Training Details and Hyper-parameters We initialize word embeddings using the 300dimension GloVe vectors supplied by Pennington et al., (2014), which were trained on 840 billion tokens from the Common Crawl. For …","url":["https://arxiv.org/pdf/1809.00530"]} +{"year":"2018","title":"Addressing Age-Related Bias in Sentiment Analysis","authors":["M Díaz, I Johnson, A Lazar, AM Piper, D Gergle - 2018"],"snippet":"… 400K words, uncased WG-6B-100D WG-6B-200D WG-6B-300D CC-42B-300D Common Crawl of the Internet 1.9M works, uncased CC-840B-300D 2.2M words, cased TW-27B-25D 2 billion Twitter tweets 1.2M words, uncased TW-27B-50D TW-27B-100D TW-27B-200D …","url":["http://www-users.cs.umn.edu/~joh12041/Publications/AgeBiasSentimentAnalysis_CHI18.pdf"]} +{"year":"2018","title":"Advances in Pre-Training Distributed Word Representations","authors":["T Mikolov, E Grave, P Bojanowski, C Puhrsch, A Joulin - arXiv preprint arXiv …, 2017"],"snippet":"… problems. 
Training such models on massive data sources, like Common Crawl, can be cumbersome and many NLP practitioners prefer to use publicly available pre-trained word vectors over training the models by themselves …","url":["https://arxiv.org/pdf/1712.09405"]} +{"year":"2018","title":"Adversarial Example Generation with Syntactically Controlled Paraphrase Networks","authors":["M Iyyer, J Wieting, K Gimpel, L Zettlemoyer - arXiv preprint arXiv:1804.06059, 2018"],"snippet":"… The pretrained Czech-English model used for translation came from the Nematus NMT system (Sennrich et al., 2017). The training data of this system includes four sources: Common Crawl, CzEng 1.6, Europarl, and News Commentary …","url":["https://arxiv.org/pdf/1804.06059"]} +{"year":"2018","title":"Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization","authors":["EM Ponti, I Vulić, G Glavaš, N Mrkšić, A Korhonen - arXiv preprint arXiv:1809.04163, 2018"],"snippet":"… Polyglot Wikipedia (Al-Rfou et al., 2013) using Skip-Gram with Negative Sampling (SGNS) (Mikolov et al., 2013) by Levy and Goldberg (2014) with bag-of- words contexts (window size is 2). 2) GLOVE-CC are GloVe vectors …","url":["https://arxiv.org/pdf/1809.04163"]} +{"year":"2018","title":"Aggression Identification Using Deep Learning and Data Augmentation","authors":["J Risch, R Krestel"],"snippet":"… More specifically, we use the common crawl embeddings5 for the English tasks and the Hindi Wikipedia embeddings6 for the Hindi tasks (Grave et al., 2018). In comparison to other embedding methods, fastText embeddings …","url":["https://hpi.de/fileadmin/user_upload/fachgebiete/naumann/publications/2018/Aggression_Identification_Using_Deep_Learning_and_Data_Augmentation.pdf"]} +{"year":"2018","title":"Aggressive Language Identification Using Word Embeddings and Sentiment Features","authors":["C Orasan - Proceedings of the First Workshop on Trolling …, 2018"],"snippet":"… representation of words. 
For the experiments presented in this paper, we used the pretrained word vectors obtained from the Common Crawl corpus containing 840 billion tokens and 2.2 million vocabulary entries. Each word …","url":["http://www.aclweb.org/anthology/W18-4414"]} +{"year":"2018","title":"Ain't Nobody Got Time For Coding: Structure-Aware Program Synthesis From Natural Language","authors":["J Bednarek, K Piaskowski, K Krawiec - arXiv preprint arXiv:1810.09717, 2018"],"snippet":"… Given that, we rely on a pretrained GloVe embedding, more specifically the Common Crawl, which has been trained on generic NL on 42B tokens by Pennington et al. (2014), has vocabulary size of 1.9M tokens, and embeds the words in a 300dimensional space …","url":["https://arxiv.org/pdf/1810.09717"]} +{"year":"2018","title":"Alibaba's Neural Machine Translation Systems for WMT18","authors":["Y Deng, S Cheng, J Lu, K Song, J Wang, S Wu, L Yao… - Proceedings of the Third …, 2018"],"snippet":"… For English ↔ Russian, we use the following resources from the WMT parallel data: News Commentary v13, CommonCrawl, ParaCrawl corpus … For EN → TR, about 6 million sentences are selected from the newscrawl2016, 2017 …","url":["http://www.aclweb.org/anthology/W18-6408"]} +{"year":"2018","title":"ALOD2Vec Matcher⋆","authors":["J Portisch, H Paulheim"],"snippet":"… like DBpedia [3] – but in- stead on the whole Web: The data set consists of hypernymy relations extracted from the Common Crawl1, a … 1 see http://commoncrawl.org/ 2 see http://webisa.webdatacommons.org/concept …","url":["http://dit.unitn.it/~pavel/om2018/papers/oaei18_paper3.pdf"]} +{"year":"2018","title":"Amortized Context Vector Inference for Sequence-to-Sequence Networks","authors":["S Chatzis, A Charalampous, K Tolias, SA Vassou - arXiv preprint arXiv:1805.09039, 2018"],"snippet":"Page 1. Amortized Context Vector Inference for Sequence-to-Sequence Networks Sotirios Chatzis Cyprus University of Technology sotirios. 
chatzis@cut.ac.cy Aristotelis Charalampous Cyprus University of …","url":["https://arxiv.org/pdf/1805.09039"]} +{"year":"2018","title":"AmritaNLP at SemEval-2018 Task 10: Capturing discriminative attributes using convolution neural network over global vector representation.","authors":["V Vinayan, KP Soman - Proceedings of The 12th International Workshop on …, 2018"],"snippet":"… ding (Pennington et al., 2014) of various dimensions are considered, which is learned over public data, available under the PDDL.2 (100, 300 di- mension word representation, embedded over 6B, 840B sizes common crawl corpus are considered) …","url":["http://www.aclweb.org/anthology/S18-1166"]} +{"year":"2018","title":"An Adaption of BIOASQ Question Answering dataset for Machine Reading systems by Manual Annotations of Answer Spans.","authors":["S Kamath, B Grau, Y Ma - Proceedings of the 6th BioASQ Workshop A challenge …, 2018"],"snippet":"… Several embedding spaces were tested as input vectors (Kamath et al., 2017) and the best performing ones which were the Glove embeddings trained on common crawl data with 840B tokens, were chosen as input to the system …","url":["http://www.aclweb.org/anthology/W18-5309"]} +{"year":"2018","title":"An Analysis of Hierarchical Text Classification Using Word Embeddings","authors":["RA Stein, PA Jaques, JF Valiati - Information Sciences, 2018"],"snippet":"Skip to main content …","url":["https://www.sciencedirect.com/science/article/pii/S0020025518306935"]} +{"year":"2018","title":"An Encoder-decoder Approach to Predicting Causal Relations in Stories","authors":["M Roemmele, AS Gordon"],"snippet":"… First, we encoded the words in each in- put segment as the sum of their GloVe embeddings3 (Pennington et al., 2014), which represent words according to a global log-bilinear regression model trained on word co-occurrence counts in the Common Crawl corpus …","url":["http://people.ict.usc.edu/~gordon/publications/NAACL-WS18A.PDF"]} +{"year":"2018","title":"An 
Experimental Analysis of Multi-Perspective Convolutional Neural Networks","authors":["Z Tu - 2018"],"snippet":"… nity to use. Popular pre-trained word vectors include word2vec trained on Google News, GloVe [31] trained on Common Crawl/Wikipedia/Twitter, and fastText [19] trained on Common Crawl/Wikipedia. Whereas word2vec captures …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/13297/Tu_Zhucheng.pdf?sequence=1"]} +{"year":"2018","title":"An Unsupervised Approach to Relatedness Analysis of Legal Language","authors":["Y Wang - 2018"],"snippet":"Page 1. An Unsupervised Approach to Relatedness Analysis of Legal Language by Ying Wang A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Applied …","url":["https://uwspace.uwaterloo.ca/bitstream/handle/10012/13847/Wang_Ying.pdf?sequence=3&isAllowed=y"]} +{"year":"2018","title":"Analysis of DNS in cybersecurity","authors":["P Hudák"],"snippet":"Page 1. Masarykova univerzita Fakulta informatiky Analysis of DNS in cybersecurity Master's thesis Patrik Hudák Brno, 2017 Page 2. Page 3. Declaration Hereby I declare, that this paper is my original authorial work, which I have worked out by my own …","url":["https://is.muni.cz/th/byrdn/Thesis.pdf"]} +{"year":"2018","title":"Analysis of parallel iterative graph applications on shared memory systems","authors":["F Atik - 2018"],"snippet":"Page 1. ANALYSIS OF PARALLEL ITERATIVE GRAPH APPLICATIONS ON SHARED MEMORY SYSTEMS a thesis submitted to the graduate school of engineering and science of bilkent university in partial fulfillment of the requirements for the degree of master of science in …","url":["http://repository.bilkent.edu.tr/bitstream/handle/11693/35746/10178583.pdf?sequence=1"]} +{"year":"2018","title":"Analysis of Short Text Classification strategies using Out of-domain Vocabularies","authors":["D Roa - 2018"],"snippet":"Page 1. 
IN DEGREE PROJECT INFORMATION AND COMMUNICATION TECHNOLOGY, SECOND CYCLE, 30 CREDITS , STOCKHOLM SWEDEN 2018 Analysis of Short Text Classification strategies using Out- of-domain Vocabularies DIEGO ROA …","url":["http://www.diva-portal.org/smash/get/diva2:1259356/FULLTEXT01.pdf"]} +{"year":"2018","title":"Analysis of the Web Graph Aggregated by Host and Pay-Level Domain","authors":["A Funel - arXiv preprint arXiv:1802.05435, 2018"],"snippet":"… The web graph datasets, publicly available, have been released by the Common Crawl Foundation 1 and are based on a web crawl performed during the period May-June-July 2017 … 1http://commoncrawl.org … [7] from a crawl, provided by the Common Crawl Foundation, gath …","url":["https://arxiv.org/pdf/1802.05435"]} +{"year":"2018","title":"Analyzing conversations to automatically identify action items","authors":["R Raanani, R Levy, MY Breakstone, D Facher - US Patent App. 15/854,642, 2018"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["https://patents.google.com/patent/US20180122383A1/en"]} +{"year":"2018","title":"ANALYZING CONVERSATIONS TO AUTOMATICALLY IDENTIFY CUSTOMER PAIN POINTS","authors":["R Raanani, R Levy, MY Breakstone, D Facher - US Patent App. 15/902,808, 2018"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["http://www.freepatentsonline.com/y2018/0181561.html"]} +{"year":"2018","title":"Analyzing conversations to automatically identify deals at risk","authors":["R Raanani, R Levy, D Facher, MY Breakstone - US Patent App. 
15/835,807, 2018"],"snippet":"… language processing (NLP) approaches to both topic modeling and i Insidesales.com “Market size 2013” study world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible …","url":["https://patents.google.com/patent/US20180096271A1/en"]} +{"year":"2018","title":"ANALYZING CONVERSATIONS TO AUTOMATICALLY IDENTIFY PRODUCT FEATURE REQUESTS","authors":["R Raanani, R Levy, MY Beakstone, D Facher - US Patent App. 15/902,751, 2018"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["http://www.freepatentsonline.com/y2018/0183930.html"]} +{"year":"2018","title":"Analyzing conversations to automatically identify product features that resonate with customers","authors":["R Raanani, R Levy, MY Breakstone, D Facher - US Patent App. 15/937,494, 2018"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["https://patents.google.com/patent/US20180218733A1/en"]} +{"year":"2018","title":"Analyzing Uncertainty in Neural Machine Translation","authors":["M Ott, M Auli, D Granger, MA Ranzato - arXiv preprint arXiv:1803.00047, 2018"],"snippet":"Page 1. Analyzing Uncertainty in Neural Machine Translation Myle Ott Michael Auli David Granger Marc'Aurelio Ranzato Facebook AI Research Abstract Machine translation is a popular test bed for research in neural sequence …","url":["https://arxiv.org/pdf/1803.00047"]} +{"year":"2018","title":"Answering Factoid Questions with Recurrent Neural Networks","authors":["M Kim, K Paeng"],"snippet":"… one. 
The embedding matrix is initialized with 300-dimensional GloVe embeddings pre-trained on the 840B Common Crawl corpus [9]. We use case-sensitive embeddings, which results in ~140k usable GloVe embeddings …","url":["http://cs.umd.edu/~miyyer/data/answering-factoid-questions.pdf"]} +{"year":"2018","title":"Approaching Nested Named Entity Recognition with Parallel LSTM-CRFs","authors":["Ł Borchmann, A Gretkowski, F Gralinski - Proceedings ofthePolEval2018Workshop, 2018"],"snippet":"… 2014) were trained on a very large, freely available7 Common Crawl-based Web corpus of Polish (Buck et al. 2014) … N-gram Counts and Language Models from the Common Crawl.[in:] Proceedings of the Language Resources …","url":["http://poleval.pl/files/poleval2018.pdf#page=63"]} +{"year":"2018","title":"Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task","authors":["M Junczys-Dowmunt, R Grundkiewicz, S Guha… - arXiv preprint arXiv …, 2018"],"snippet":"… Current state-of-the-art GEC systems based on SMT, however, all include large-scale in- domain language models either following the steps outlined in Junczys-Dowmunt and Grundkiewicz (2016) or directly re-using their …","url":["https://arxiv.org/pdf/1804.05940"]} +{"year":"2018","title":"Approaching the largest 'API': extracting information from the Internet with Python","authors":["JE Germann"],"snippet":"… For every patron their dataset. For every dataset its patron. Bibliography. Common Crawl: Get Started[Internet]. [accessed 2017 Dec. 1] Available from http://commoncrawl.org/the-data/getstarted/. Imperva Incapsula's Bot Traffic Report [Internet]. 
[accessed 2017 Dec …","url":["http://journal.code4lib.org/articles/13197"]} +{"year":"2018","title":"Arabic Sentences Classification via Deep Learning","authors":["D Sagheer, F Sukkar"],"snippet":"… The research [21] presents AraVec Arabic word2vec models for the Arabic language using three different dataset resources: Wikipedia, Twitter and Common Crawl webpages crawl data, the models are built in the same strategies in Mikilov word2vec, skip-gram and CBOW …","url":["https://www.researchgate.net/profile/Dania_Sagheer/publication/326429992_Arabic_Sentences_Classification_via_Deep_Learning/links/5b5b5ecfa6fdccf0b2fa820a/Arabic-Sentences-Classification-via-Deep-Learning.pdf"]} +{"year":"2018","title":"Are Automatic Metrics Robust and Reliable in Specific Machine Translation Tasks?","authors":["M Chinea-Rios, A Peris, F Casacuberta"],"snippet":"… 5 Experimental setup Our experimental framework related a domain adaptation task, in the English to Spanish language direction. In our setup, we trained a PB-SMT and a NMT system on the same data, from a general corpus extracted from websites (Common Crawl) …","url":["https://www.researchgate.net/profile/Alvaro_Peris/publication/325059351_Are_Automatic_Metrics_Robust_and_Reliable_in_Specific_Machine_Translation_Tasks/links/5af4002e4585157136c96459/Are-Automatic-Metrics-Robust-and-Reliable-in-Specific-Machine-Translation-Tasks.pdf"]} +{"year":"2018","title":"Are we experiencing the Golden Age of Automatic Post-Editing?","authors":["M Junczys-Dowmunt - Proceedings of the AMTA 2018 Workshop on …, 2018"],"snippet":"… EN-DE bilingual data from the WMT-16 shared tasks on IT and news translation. ▶ German monolingual Common Crawl (CC) corpus. 
Proceedings for AMTA 2018 Workshop: Translation Quality Estimation …","url":["http://www.aclweb.org/anthology/W18-2105"]} +{"year":"2018","title":"ArgumenText: Searching for Arguments in Heterogeneous Sources","authors":["C Stab, J Daxenberger, C Stahlhut, T Miller, B Schiller… - Proceedings of the 2018 …, 2018"],"snippet":"… 3.1 Data As our objective is to search for arguments in any text domain, we build upon the English part of CommonCrawl,1 the largest Web corpus available to date. Before further processing, we followed 1http://commoncrawl.org/ Habernal et al …","url":["http://www.aclweb.org/anthology/N18-5005"]} +{"year":"2018","title":"Arretium or Arezzo? A Neural Approach to the Identification of Place Names in Historical Texts","authors":["R Sprugnoli"],"snippet":"… summarised below: • dropout: 0.25, 0.25 • classifier: CRF • LSTM-Size: 100 • optimizer: NADAM • word embeddings: GloVe Common Crawl 840B • character … namely: (i) GloVe embeddings, trained on a corpus of 840 billion …","url":["http://ceur-ws.org/Vol-2253/paper26.pdf"]} +{"year":"2018","title":"Associative Multichannel Autoencoder for Multimodal Word Representation","authors":["S Wang, J Zhang, C Zong - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… representations. 4 Experimental Setup 4.1 Datasets Textual vectors. We use 300-dimensional GloVe vectors1 which are trained on the Common Crawl corpus consisting of 840B tokens and a vocabulary of 2.2M words2. Visual vectors. Our …","url":["http://www.aclweb.org/anthology/D18-1011"]} +{"year":"2018","title":"AttnConvnet at SemEval-2018 Task 1: Attention-based Convolutional Neural Networks for Multi-label Emotion Classification","authors":["Y Kim, H Lee, K Jung - arXiv preprint arXiv:1804.00831, 2018"],"snippet":"… dataset. 
Among those well-known word embeddings such as Word2Vec( Mikolov et al., 2013), GloVe(Pennington et al., 2014), fastText(Piotr et al., 2016), we adopt 300-dimension GloVe vectors for English ,which is trained …","url":["https://arxiv.org/pdf/1804.00831"]} +{"year":"2018","title":"Automated Detection of Adverse Drug Reactions in the Biomedical Literature Using Convolutional Neural Networks and Biomedical Word Embeddings","authors":["DS Miranda - arXiv preprint arXiv:1804.09148, 2018"],"snippet":"… 5.1 Embeddings GloVe 840B As in Huynh's work (Huynh et al., 2016), we use pre-trained word embeddings. Huynh focused mainly on the general purpose GloVe Common Crawl 840B, 300 dimensional word embeddings (Pennington et al., 2014). Pyysalo's Embeddings …","url":["https://arxiv.org/pdf/1804.09148"]} +{"year":"2018","title":"Automated discovery of privacy violations on the web","authors":["ST Englehardt - 2018"],"snippet":"Page 1. Automated discovery of privacy violations on the web Steven Tyler Englehardt A Dissertation Presented to the Faculty of Princeton University in Candidacy for the Degree of Doctor of Philosophy Recommended for …","url":["https://senglehardt.com/papers/princeton_phd_dissertation_englehardt.pdf"]} +{"year":"2018","title":"Automatic generation of playlists from conversations","authors":["R Raanani, R Levy, MY Breadstone - US Patent App. 15/793,691, 2018"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["https://patents.google.com/patent/US20180046710A1/en"]} +{"year":"2018","title":"Automatic Neural Question Generation using Community-based Question Answering Systems","authors":["T Baghaee - 2017"],"snippet":"Page 1. 
AUTOMATIC NEURAL QUESTION GENERATION USING COMMUNITYBASED QUESTION ANSWERING SYSTEMS TINA BAGHAEE Bachelor of Science, Shahid Beheshti University, 2011 A Thesis Submitted to the …","url":["https://www.uleth.ca/dspace/bitstream/handle/10133/5004/Baghaee_Tina_MSC_2017.pdf?sequence=1&isAllowed=y"]} +{"year":"2018","title":"AUTOMATIC PATTERN RECOGNITION IN CONVERSATIONS","authors":["R Raanani, R Levy, D Facher, MY Breakstone - US Patent App. 15/817,490, 2018"],"snippet":"… language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the i Insidesales.com “Market size 2013” study availability of large, freely …","url":["http://www.freepatentsonline.com/y2018/0077286.html"]} +{"year":"2018","title":"Automatic Post-Editing of Machine Translation: A Neural Programmer-Interpreter Approach","authors":["TT Vu, G Haffari - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… Interestingly, training MT+AG and MT+AG+LM models on 23K data lead to better TER/BLEU than those trained on 500K+12K. This implies the importance of in-domain training data, as the synthetic corpus is created …","url":["http://www.aclweb.org/anthology/D18-1341"]} +{"year":"2018","title":"Automatic Question Tagging with Deep Neural Networks","authors":["B Sun, Y Zhu, Y Xiao, R Xiao, YG Wei - IEEE Transactions on Learning Technologies, 2018"],"snippet":"… Word2vec [41] trained on a large corpus. Some pre-trained word vectors are available, such as GloVe Common Crawl vectors1 and word2vec vectors2, which is trained on Google News. 
The word vectors can be divided into …","url":["http://ieeexplore.ieee.org/abstract/document/8295250/"]} +{"year":"2018","title":"Automatically Categorizing Software Technologies","authors":["M Nassif, C Treude, M Robillard - IEEE Transactions on Software Engineering, 2018","S Khan, WH Butt - 2022 2nd International Conference on Digital Futures …, 2022"],"snippet":"… All these approaches work by mining large text corpora. Among the latest such techniques is the WebIsA Database [32] from the Web Data Commons project, which extracts hypernyms from CommonCrawl,1 a corpus of over …","url":["https://ieeexplore.ieee.org/abstract/document/8359344/","https://ieeexplore.ieee.org/abstract/document/9787457/"]} +{"year":"2018","title":"Based Speech Recognition with Gated ConvNets","authors":["V Liptchinsky, G Synnaeve, R Collobert - arXiv preprint arXiv:1712.09444, 2017"],"snippet":"… Extra Resources Panayotov et al. (2015) HMM+DNN+pNorm phone fMLLR phone lexicon Amodei et al. (2016) 2D-CNN+RNN letter none 11.9Kh train set, Common Crawl LM Peddinti et al. (2015b) HMM+CNN phone iVectors phone lexicon Povey et al …","url":["https://arxiv.org/pdf/1712.09444"]} +{"year":"2018","title":"Belittling the Source: Trustworthiness Indicators to Obfuscate Fake News on the Web","authors":["D Esteves, AJ Reddy, P Chawla, J Lehmann - arXiv preprint arXiv:1809.00494, 2018"],"snippet":"… Social Tags: returns the frequency of social tags in wb: R ⋃ i=1 ϕ(i, wb) 11. OpenSources: returns the open-source classification (x) for a given website: x = { 1, if w ∈ O 0, if w ∈ O 12. 
PageRankCC: PageRank information …","url":["https://arxiv.org/pdf/1809.00494"]} +{"year":"2018","title":"Bi-Directional Differentiable Input Reconstruction for Low-Resource Neural Machine Translation","authors":["X Niu, W Xu, M Carpuat - arXiv preprint arXiv:1811.01116, 2018"],"snippet":"… data for Swahili↔English (SW↔EN), Tagalog↔English (TL↔EN) and Somali↔English (SO↔EN) contains a mixture of domains such as news and weblogs and is collected from the IARPA MATERIAL program2, the Global …","url":["https://arxiv.org/pdf/1811.01116"]} +{"year":"2018","title":"Bi-Directional Neural Machine Translation with Synthetic Parallel Data","authors":["X Niu, M Denkowski, M Carpuat - arXiv preprint arXiv:1805.11213, 2018"],"snippet":"… Page 3. Type Dataset # Sentences High-resource: German↔English Training Common Crawl + Europarl v7 + News Comm. v12 … 2.2). We use the training data as in-domain and either Common Crawl or ICWSM as out-of-domain …","url":["https://arxiv.org/pdf/1805.11213"]} +{"year":"2018","title":"Big Data Integration for Product Specifications","authors":["L Barbosa, V Crescenzi, XL Dong, P Merialdo, F Piai… - Data Engineering, 2018"],"snippet":"… Only 20% of our sources contained fewer pages than the same sources in Common Crawl, and a very small fraction of the pages in these sources were product pages: on a sample set of 12 websites where Common Crawl …","url":["http://sites.computer.org/debull/A18june/A18JUN-CD.pdf#page=73"]} +{"year":"2018","title":"Big Open Research Data","authors":["A PRIMPELI"],"snippet":"… Outline ▪ Requirements of a data scientist ▪ Open access datasets ▪ DBpedia ▪ Common Crawl ▪ The Web Data Commons project ▪ Why should we share data? ▪ How should we share data? 26/10/2017 … 5 Page 6. 
Common Crawl …","url":["https://pdfs.semanticscholar.org/presentation/a576/94b9a19b82cc219ae01004e5c30cbeea916e.pdf"]} +{"year":"2018","title":"Bigrams and BiLSTMs Two neural networks for sequential metaphor detection","authors":["Y Bizzoni, M Ghanimifard - NAACL HLT 2018, 2018"],"snippet":"… GloVe embeddings on British National Corpus (Consortium et al., 2007) from which the VUAMC corpus was sampled, and compared it with both pre-trained Word2Vec em- beddings on Google News corpus and standard …","url":["http://www.cl.cam.ac.uk/~es407/papers/Fig-Lang2018-proceedings.pdf#page=103"]} +{"year":"2018","title":"Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition","authors":["GI Winata, CS Wu, A Madotto, P Fung - arXiv preprint arXiv:1805.12061, 2018"],"snippet":"… We use 300-dimensional English (Mikolov et al., 2018) and Spanish (Grave et al., 2018) FastText pre-trained word vectors which comprise two million words vocabulary each and they are trained using Common Crawl and Wikipedia …","url":["https://arxiv.org/pdf/1805.12061"]} +{"year":"2018","title":"Biomedical Domain-Oriented Word Embeddings via Small Background Texts for Biomedical Text Mining Tasks","authors":["L Li, J Wan, D Huang - National CCF Conference on Natural Language …, 2017"],"snippet":"… For example, Pennington et al. used Wikipedia, Giga word 5 and Common Crawl to learn word embeddings, each of which contained billions of tokens [15]. 
However, there is not always a monotonic increase in performance as the amount of background texts increase …","url":["https://link.springer.com/chapter/10.1007/978-3-319-73618-1_46"]} +{"year":"2018","title":"BlogSet-BR: A Brazilian Portuguese Blog Corpus","authors":["H Santos, V Woloszyn, R Vieira - … of the Eleventh International Conference on …, 2018"],"snippet":"… For instance, the Common Crawl project maintains an open repository of web crawl data that can be accessed and analyzed by any research group2. This corpus has been used to build language models (Roziewski …","url":["http://www.aclweb.org/anthology/L18-1105"]} +{"year":"2018","title":"BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern-and Graph-based Information to Identify Discriminative Attributes","authors":["E Santus, C Biemann, E Chersoni"],"snippet":"… 2The pre-trained vectors are available, respectively, at https://code.google.com/archive/ p/ word2vec/ (Google News, 300 dimensions) and at https://nlp.stanford.edu/projects/ glove/ (Common Crawl, 840B tokens, 300 dimensions) …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2018-santusetal-semeval-bomji.pdf"]} +{"year":"2018","title":"Bootstrapping Multilingual Intent Models via Machine Translation for Dialog Automation","authors":["N Ruiz, S Bangalore, J Chen - arXiv preprint arXiv:1805.04453, 2018"],"snippet":"… The NMT models were trained with parallel English-Spanish data from Europarl v7, CommonCrawl, and WMT News Commentary v8 from the WMT 2013 evaluation campaign (Bojar et al., 2013), as well as the TED talks from IWSLT 2014 (Cettolo et al., 2014) …","url":["https://arxiv.org/pdf/1805.04453"]} +{"year":"2018","title":"Bringing Order to Neural Word Embeddings with Embeddings Augmented by Random Permutations (EARP)","authors":["A Sharp","T Cohen, D Widdows - Proceedings of the 22nd Conference on Computational …, 2018"],"snippet":"Page 1. 
Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL 2018), pages 465–475 Brussels, Belgium, October 31 - November 1, 2018. c 2018 Association for Computational Linguistics 465 …","url":["http://www.aclweb.org/anthology/K18-1045","https://zdoc.pub/bringing-order-to-neural-word-embeddings-with-embeddings-aug.html"]} +{"year":"2018","title":"Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis","authors":["A Moore, P Rayson - arXiv preprint arXiv:1806.05219, 2018"],"snippet":"… We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods …","url":["https://arxiv.org/pdf/1806.05219"]} +{"year":"2018","title":"C) uestion Answering System with Deep Learning","authors":["JSRMI Schoenhals"],"snippet":"… The dataset contains more than 100k question-answer pairs on more than 500 articles, and it is split into 90k/10k train/dev question-context tuples with hidden test set. 3.2 Word Embeddings We use pre-trained GLoVe embeddings from the 840B Common Crawl corpus …","url":["http://web.stanford.edu/class/cs224n/reports/6908933.pdf"]} +{"year":"2018","title":"CAESAR: Context Awareness Enabled Summary-Attentive Reader","authors":["LH Chen, K Tripathi - arXiv preprint arXiv:1803.01335, 2018"],"snippet":"… Aftering experimenting with hyperparameters and various preprocessing settings, we settle on the following experiment details which gave the optimal result. We apply pretrained GloVe word embeddings trained on common","url":["https://arxiv.org/pdf/1803.01335"]} +{"year":"2018","title":"Can a Suit of Armor Conduct Electricity? 
A New Dataset for Open Book Question Answering","authors":["T Mihaylov, P Clark, T Khot, A Sabharwal"],"snippet":"… 11For all experiments we use d = 300 GloVe (Pennington et al., 2014) embeddings pre-trained on 840B tokens from Common Crawl (https://nlp.stanford.edu/projects/glove/). Page 7. Question Science Fact Common Knowledge (Type) Reasoning Challenge …","url":["http://ai2-website.s3.amazonaws.com/publications/Mihaylov-OpenBookQA-emnlp2018.pdf"]} +{"year":"2018","title":"Can Common Crawl reliably track persistent identifier (PID) use over time?","authors":["HS Thompson, J Tong - arXiv preprint arXiv:1802.01424, 2018"],"snippet":"Abstract: We report here on the results of two studies using two and four monthly web crawls respectively from the Common Crawl (CC) initiative between 2014 and 2017, whose initial goal was to provide empirical evidence for the changing patterns of use of so-called","url":["https://arxiv.org/pdf/1802.01424"]} +{"year":"2018","title":"Can Network Embedding of Distributional Thesaurus be Combined with Word Vectors for Better Representation?","authors":["A Jana, P Goyal - arXiv preprint arXiv:1802.06196, 2018"],"snippet":"… representation. For that purpose, we directly use very wellknown GloVe 1.2 embeddings (Pennington et al., 2014) trained on 840 billion words of the common crawl dataset having vector dimension of 300. 
As an instance …","url":["https://arxiv.org/pdf/1802.06196"]} +{"year":"2018","title":"Card-660: A Reliable Evaluation Framework for Rare Word Representation Models","authors":["MT Pilehvar, D Kartsaklis, V Prokhorov, N Collier - Proceedings of the 2018 …, 2018"],"snippet":"… Glove Wikipedia-Gigaword (300d) 400K 7% 55% 12% 74% 34.9 15.1 34.4 15.7 Glove Common Crawl - uncased (300d) 1.9M 1% 36% 1% 50% 36.5 29.2 37.7 27.6 Glove Common Crawl - cased (300d) 2.2M 1% 29% 2% 44% 44.0 33.0 45.1 27.3 …","url":["http://www.aclweb.org/anthology/D18-1169"]} +{"year":"2018","title":"Card-660: Cambridge Rare Word Dataset-a Reliable Benchmark for Infrequent Word Representation Models","authors":["MT Pilehvar, D Kartsaklis, V Prokhorov, N Collier - arXiv preprint arXiv:1808.09308, 2018"],"snippet":"… Glove Wikipedia-Gigaword (300d) 400K 7% 55% 12% 74% 34.9 15.1 34.4 15.7 Glove Common Crawl - uncased (300d) 1.9M 1% 36% 1% 50% 36.5 29.2 37.7 27.6 Glove Common Crawl - cased (300d) 2.2M 1% 29% 2% 44% 44.0 33.0 45.1 27.3 …","url":["https://arxiv.org/pdf/1808.09308"]} +{"year":"2018","title":"Categorization of Comparative Sentences for Argument Mining","authors":["M Franzek, A Panchenko, C Biemann - arXiv preprint arXiv:1809.06152, 2018"],"snippet":"… The re- maining objects were combined to pairs. For each object type as given by the Wikipedia List page or the seed word, all possible combinations were created. We drew sentences from the publicly available CommonCrawl3 index of Panchenko et al …","url":["https://arxiv.org/pdf/1809.06152"]} +{"year":"2018","title":"CERES: Distantly Supervised Relation Extraction from the Semi-Structured Web","authors":["C Lockard, XL Dong, A Einolghozati, P Shiralkar - arXiv preprint arXiv:1804.04635, 2018"],"snippet":"Page 1. 
CERES: Distantly Supervised Relation Extraction from the Semi-Structured Web Colin Lockard ∗ University of Washington lockardc@cs.washington.edu Xin Luna Dong Amazon lunadong@amazon.com Arash Einolghozati ∗ Facebook arashe@fb.com …","url":["https://arxiv.org/pdf/1804.04635"]} +{"year":"2018","title":"Character-based Neural Networks for Sentence Pair Modeling","authors":["W Lan, W Xu - arXiv preprint arXiv:1805.08297, 2018"],"snippet":"… word vectors (Pennington et al., 2014), trained on 27 billion words from Twitter (vocabulary size of 1.2 milion words) for social media datasets, and 300dimensional GloVe vectors, trained on 840 billion words (vocabulary …","url":["https://arxiv.org/pdf/1805.08297"]} +{"year":"2018","title":"Character-Level Feature Extraction with Densely Connected Networks","authors":["C Lee, YB Kim, D Lee, HS Lim - arXiv preprint arXiv:1806.09089, 2018"],"snippet":"… Our model uses the GloVe (Pennington et al., 2014) 300-dimensional vectors trained on the Common Crawl corpus with 42B tokens as word level features, as this resulted in the best performance in preliminary experiments …","url":["https://arxiv.org/pdf/1806.09089"]} +{"year":"2018","title":"Characterising dataset search—An analysis of search logs and data requests","authors":["E Kacprzak, L Koesten, LD Ibáñez, T Blount… - Journal of Web Semantics, 2018"],"snippet":"… In 2015 the Web Data Commons project extracted 233 million data tables from the Common Crawl [3]. The ability to generate business value from data analytics offers competitive advantage in virtually every industry worldwide …","url":["https://www.sciencedirect.com/science/article/pii/S1570826818300556"]} +{"year":"2018","title":"CIKM AnalytiCup 2017 Lazada Product Title Quality Challenge An Ensemble of Deep and Shallow Learning to predict the Quality of Product Titles","authors":["K Singh, V Sunder - arXiv preprint arXiv:1804.01000, 2018"],"snippet":"… calculate semantic similarity based features. 
For each pair of words in a title, we calculate the cosine distance between vectors of words generated from Common crawl google glove [Pennington et al. 2014]. Further, we normalize …","url":["https://arxiv.org/pdf/1804.01000"]} +{"year":"2018","title":"ClaiRE at SemEval-2018 Task 7-Extended Version","authors":["L Hettinger, A Dallmann, A Zehe, T Niebler, A Hotho"],"url":["https://www.arxiv-vanity.com/papers/1804.05825/"]} +{"year":"2018","title":"ClaiRE at SemEval-2018 Task 7: Classification of Relations using Embeddings","authors":["L Hettinger, A Dallmann, A Zehe, T Niebler, A Hotho - Proceedings of The 12th …, 2018"],"snippet":"… As a baseline, we employ a publicly available set of 300-dimensional word embeddings trained with GloVe (Pennington et al., 2014) on the Common Crawl data4 (CC) … 4http://commoncrawl.org/ 5https://arxiv.org 6https://arxiv.org/help/bulk_data …","url":["http://www.aclweb.org/anthology/S18-1134"]} +{"year":"2018","title":"Classifying Occupations According to Their Skill Requirements in Job Advertisements","authors":["J Djumalieva, A Lima, C Sleeman - 2018"],"snippet":"… There are publicly available pre-trained word embeddings models. 
We use a GloVe model, which contains a vocabulary of 2.2 million words and was trained using word to word co-occurrences in a Common Crawl corpus (Pennington, Socher, and Manning, 2014) …","url":["https://www.escoe.ac.uk/wp-content/uploads/2018/03/ESCoE-DP-2018-04.pdf"]} +{"year":"2018","title":"Cloud repository as a malicious service: challenge, identification and implication","authors":["X Liao, S Alrwais, K Yuan, L Xing, XF Wang, S Hao… - Cybersecurity, 2018"],"snippet":"… Running the scanner over all the data collected by the Common Crawl (Crawl 2015), which indexed five billion web pages, for those associated with all major cloud storage providers (including Amazon S3, Cloudfront, Google …","url":["https://link.springer.com/article/10.1186/s42400-018-0015-6"]} +{"year":"2018","title":"Clustering-Oriented Representation Learning with Attractive-Repulsive Loss","authors":["K Kenyon-Dean, A Cianflone, L Page-Caccia… - arXiv preprint arXiv …, 2018"],"snippet":"… Each sample contains the news article's title concatenated with its description. As input to our models, we use the 300-dimensional 840B Common Crawl pretrained Glove word embeddings (Pennington, Socher, and Manning 2014) provided by Stanford6 …","url":["https://arxiv.org/pdf/1812.07627"]} +{"year":"2018","title":"Code-Switched Named Entity Recognition with Embedding Attention","authors":["C Wang, K Cho, D Kiela - Proceedings of the Third Workshop on Computational …, 2018"],"snippet":"… We use high-quality FastText embeddings trained on Common Crawl (Grave et al., 2018; Mikolov et al., 2018) and employ shortcut-stacked sentence encoders (Nie and Bansal, 2017) to obtain deep token-level representations to feed into the CRF …","url":["http://www.aclweb.org/anthology/W18-3221"]} +{"year":"2018","title":"Collaborative Context-Aware Visual Question Answering","authors":["AS Toor - 2018"],"snippet":"Page 1. 
COLLABORATIVE CONTEXT-AWARE VISUAL QUESTION ANSWERING by Andeep Singh Toor A Dissertation Submitted to the Graduate Faculty of George Mason University In Partial fulfillment of The Requirements …","url":["http://search.proquest.com/openview/d4ca94e1edcc41c3401b171ab495c21c/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2018","title":"Combination of Statistical and Neural Machine Translation for Myanmar–English","authors":["B Marie, A Fujita, E Sumita"],"snippet":"… For English, we used the monolingual corpora provided by the WMT18 shared News Translation Task. As for Myanmar, we experimented with two monolingual corpora: Myanmar Wikipedia and Myanmar CommonCrawl … The CommonCrawl cor …","url":["http://benjaminmarie.com/pdf/WAT_2018_paper_7.pdf"]} +{"year":"2018","title":"Combining Convolution and Recursive Neural Networks for Sentiment Analysis","authors":["VD Van, T Thai, MQ Nghiem - Proceedings of the Eighth International Symposium on …, 2017"],"snippet":"… movie reviews. These words might not appear or not have good vector representations in the pre-trained Glove Common Crawl. 153 Page 4. SoICT '17, December 7–8, 2017, Nha Trang City, Viet Nam V. Van et al. 
Additionally …","url":["https://dl.acm.org/citation.cfm?id=3155158"]} +{"year":"2018","title":"Combining Shallow and Deep Learning for Aggressive Text Detection","authors":["V Golem, M Karan, J Šnajder - Proceedings of the First Workshop on Trolling …, 2018"],"snippet":"… Inputs to both models were GloVe (Pennington et al., 2014) 300-dimensional word embeddings trained on 840 billion tokens from the Common Crawl or 200-dimensional word embeddings trained on 20 billion tweets.5 Since …","url":["http://www.aclweb.org/anthology/W18-4422"]} +{"year":"2018","title":"Compact inverted index storage using general‐purpose compression libraries","authors":["M Petri, A Moffat - Software: Practice and Experience"],"snippet":"… We also develop (and make freely available) a new IR test collection based on the News sub-collection of the Common Crawl.6 The News sub-collection provides daily crawls of news websites in many languages. We refer to this collection as CC-NEWS-URL …","url":["http://onlinelibrary.wiley.com/doi/10.1002/spe.2556/full"]} +{"year":"2018","title":"Comparative Argument Mining","authors":["M Franzek"],"snippet":"… First, does a sentence compare two known objects? And secondly, if it does, is the first-mentioned object better or worse than the second-mentioned, according to the sentence context? 
The data set was created with data from …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2018-ma-mirco-franzek.pdf"]} +{"year":"2018","title":"Comparing Distributional and Frame Semantic Properties of Words","authors":["T Kleinbauer, TA Trost"],"snippet":"… G50 50 96.3% 6G Wikipedia & G100 100 96.3% Gigaword 5 G200 200 96.3% G300 300 96.3% G42 300 98.9% 42G Common Crawl G840 300 99.3% 840G Common Crawl GT25 25 90.2% 27G Twitter GT50 50 90.2% GT100 100 90.2% GT200 200 90.2 …","url":["https://www.oeaw.ac.at/fileadmin/subsites/academiaecorpora/PDF/konvens18_08.pdf"]} +{"year":"2018","title":"Comparing Open Source Search Engine Functionality, Efficiency and Effectiveness with Respect to Digital Forensic Search","authors":["J Hansen, K Porter, A Shalaginov, K Franke - NISK 2018, 2018"],"snippet":"… Malware ISCXAB Malware DAROA98/99 Network DARPA2000 Network MAWILab Network KDD Cup99 Network UNSW-NB15 Network NSA-CDX Network ADFA Network Name Catagory Kyoto data Network crawdad Network …","url":["http://nikt2018.ifi.uio.no/images/NISK2018_preproceedings.pdf#page=117"]} +{"year":"2018","title":"Comparing Theories of Speaker Choice Using a Model of Classifier Production in Mandarin Chinese","authors":["M Zhan, R Levy - Proceedings of the 2018 Conference of the North …, 2018"],"snippet":"Page 1. Proceedings of NAACL-HLT 2018, pages 1997–2005 New Orleans, Louisiana, June 1 - 6, 2018. c 2018 Association for Computational Linguistics Comparing Theories of Speaker Choice Using a Model of Classifier Production in Mandarin Chinese …","url":["http://www.aclweb.org/anthology/N18-1181"]} +{"year":"2018","title":"Comparison of Paragram and Glove Results for Similarity Benchmarks","authors":["PM Skłodowskiej-Curie - Multimedia and Network Information Systems, 2019"],"snippet":"… The TOEFL test set was introduced in [13]; the ESL test set was introduced in [28]. 
4.1 Experimental Setup We use the unmodified vector space model trained on 840 billion words from Common Crawl data with the GloVe algorithm introduced in [21] …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=cf5sDwAAQBAJ&oi=fnd&pg=PA236&dq=commoncrawl&ots=Zl8zpSyQVF&sig=p1R4RPXAKZFFv4LxOE09nc5dejg"]} +{"year":"2018","title":"Comparison of Word Embeddings and Sentence Encodings as Generalized Representations for Crisis Tweet Classification Tasks","authors":["H Li, X Li, D Caragea, C Caragea"],"snippet":"… the other parameters. • SIF: denotes the SIF approach, which is considered to be a baseline for sentence embeddings. The original paper used GloVe embeddings pre-trained on the Common Crawl data. However, we used …","url":["https://www.cs.uic.edu/~cornelia/papers/iscram_asian18.pdf"]} +{"year":"2018","title":"Compositional Source Word Representations for Neural Machine Translation","authors":["D Ataman, MA Di Gangi, M Federico - 2018"],"snippet":"… For training the Czech–English and German– English NMT models, we use the available data sets from the WMT2 (Bojar et al., 2017) shared task on machine translation of news, which consist of Europarl (Koehn, 2005), Commoncrawl …","url":["http://rua.ua.es/dspace/bitstream/10045/76018/1/EAMT2018-Proceedings_05.pdf"]} +{"year":"2018","title":"Concatenated $ p $-mean Word Embeddings as Universal Cross-Lingual Sentence Representations","authors":["A Rücklé, S Eger, M Peyrard, I Gurevych - arXiv preprint arXiv:1803.01400, 2018"],"snippet":"… Word embeddings We use four diverse, po- tentially complementary types of word embeddings as basis for our sentence representation techniques: GloVe embeddings (GV) (Pennington et al., 2014) trained on Common Crawl;","url":["https://arxiv.org/pdf/1803.01400"]} +{"year":"2018","title":"Content Extraction and Lexical Analysis from Customer-Agent Interactions","authors":["S Nisioi, A Bucur, LP Dinu - Proceedings of the 2018 EMNLP Workshop W-NUT …, 2018"],"snippet":"… Somewhat 
surprising is the fact that a big majority of words from the Common Crawl vocabulary (approx … Furthermore, both the lexicons used in questions and answers present little overlap with Common Crawl, and in accordance …","url":["http://www.aclweb.org/anthology/W18-6118"]} +{"year":"2018","title":"Context and Copying in Neural Machine Translation","authors":["R Knowles, P Koehn - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… are trained using the Marian toolkit (Junczys-Dowmunt et al., 2018).2 We use the WMT parallel text3 (Europarl, News Commentary, and CommonCrawl) along with … We consider both the full training data (including backtranslations …","url":["http://www.aclweb.org/anthology/D18-1339"]} +{"year":"2018","title":"Context and Embeddings in Language Modelling–an Exploration","authors":["M Nitsche, M Tropmann-Frick - 2018"],"snippet":"… alone. GloVe typically performs better than Word2Vec skip-gram, especially when the vocabulary is large. GloVe is also available on different corpora such as Twitter, Common Crawl or Wikipedia. 2.2 Bag of Tricks - fastText …","url":["http://ceur-ws.org/Vol-2277/paper24.pdf"]} +{"year":"2018","title":"Context is Everything: Finding Meaning Statistically in Semantic Spaces","authors":["E Zelikman - arXiv preprint arXiv:1803.08493, 2018"],"snippet":"… For example ”dog” and ”but” are both commonly used words, and in many spaces will have a smaller M-distance than ”Dachshund” and ”dog.” One solution to this issue is presented later in this paper 5300 dimensional word …","url":["https://arxiv.org/pdf/1803.08493"]} +{"year":"2018","title":"Contextual-CNN: A Novel Architecture Capturing Unified Meaning for Sentence Classification","authors":["J Shin, Y Kim, S Yoon, K Jung"],"snippet":"… For word representation, we use the publicly available 300-dimensional GloVe trained on the Common Crawl with 42B tokens [5], hence our embedding dimension d = 300. 
Word embedding is normalized to unit norm and is fixed in the experiments without fine-tuning …","url":["http://milab.snu.ac.kr/pub/BigComp2018.pdf"]} +{"year":"2018","title":"Convexification of Neural Graph","authors":["H Xiao - arXiv preprint arXiv:1801.02901, 2018"],"snippet":"… Specifically, we initialize the word embedding with 300-dimensional GloVe [Pennington et al., 2014] word vectors pre-trained in the 840B Common Crawl corpus [Pennington et al., 2014] and then set the hidden dimension as 100 for each LSTM …","url":["https://arxiv.org/pdf/1801.02901"]} +{"year":"2018","title":"Convolutional Spatial Attention Model for Reading Comprehension with Multiple-Choice Questions","authors":["Z Chen, Y Cui, W Ma, S Wang, G Hu - arXiv preprint arXiv:1811.08610, 2018"],"snippet":"… among two datasets. The word embeddings are initialized by the pre-trained GloVe word vectors (Common Crawl, 6B tokens, 100-dimension) (Pennington, Socher, and Manning 2014), and keep fixed during training. The words …","url":["https://arxiv.org/pdf/1811.08610"]} +{"year":"2018","title":"COVER: a linguistic resource combining common sense and lexicographic information","authors":["E Mensa, DP Radicioni, A Lieto - Language Resources and Evaluation"],"snippet":"… (2014)). In count based models, model vectors are learned by applying dimensionality reduction techniques to the co-occurrence counts matrix; in particular, GloVe embeddings have been acquired through a training …","url":["https://link.springer.com/article/10.1007/s10579-018-9417-z"]} +{"year":"2018","title":"Creation and optimization of resource contents","authors":["M Tober, D Neumann - US Patent App. 15/284,739, 2018"],"snippet":"… 310. 
The crawler module 310 may automatically crawl a network and acquire contents from one or more resources in the network, acquire the contents from an open repository of web crawl data such as CommonCrawl.org …","url":["https://patents.google.com/patent/US20180096067A1/en"]} +{"year":"2018","title":"Cross-lingual Decompositional Semantic Parsing Supplemental Material","authors":["S Zhang, X Ma, R Rudinger, K Duh, B Van Durme"],"snippet":"… Hidden states are zero initialized. All other parameters are sampled from U(−0.1,0.1). Decoder: Word embeddings are initialized by open-source GloVe vectors (Pennington et al., 2014) trained on Common Crawl 840B with 300 dimensions …","url":["http://anthology.aclweb.org/attachments/D/D18/D18-1194.Attachment.pdf"]} +{"year":"2018","title":"Cross-Lingual Propagation for Deep Sentiment Analysis","authors":["X Dong, G de Melo - Proceedings of the 32nd AAAI Conference on Artificial …, 2018"],"snippet":"… each mini-batch. 3.2 Embeddings The standard pre-trained word vectors used for English are the GloVe (Pennington, Socher, and Manning 2014) ones trained on 840 billion tokens of Common Crawl data2, 1https://www.irit …","url":["http://iiis.tsinghua.edu.cn/~weblt/papers/sentiment-propagation.pdf"]} +{"year":"2018","title":"Cross-Pair Text Representations for Answer Sentence Selection","authors":["K Tymoshenko, A Moschitti"],"snippet":"Page 1. Cross-Pair Text Representations for Answer Sentence Selection Kateryna Tymoshenko DISI, University of Trento (Adeptmind scholar) 38123 Povo (TN), Italy kateryna.tymoshenko@unitn.it Alessandro Moschitti∗ Amazon …","url":["https://m.media-amazon.com/images/G/01/amazon.jobs/Cross-PairTextRepresentationsforAnswerSentenceSelection._CB1538442919_.pdf"]} +{"year":"2018","title":"CrumbTrail: an Efficient Methodology to Reduce Multiple Inheritance in Knowledge Graphs","authors":["S Faralli, I Finocchi, SP Ponzetto, P Velardi - Knowledge-Based Systems, 2018"],"snippet":"… 5]. 
Finally, the availability of vast amounts of knowledge encoded in heterogeneous sources, including both structured – eg, Freebase [6], DBpedia [7] – semi-structured – eg, Wikipedia – as well as unstructured resources …","url":["https://www.sciencedirect.com/science/article/pii/S095070511830162X"]} +{"year":"2018","title":"CS224N Final SQuAD Improvements","authors":["RAV Misra - Context"],"snippet":"… U\"]. Page 5. 4 Experiments 4.1 Implementation and Metrics The model is trained and evaluated on the SQuAD dataset, and uses pretrained GloVe word vectors on the Common Crawl corpus (Pennington et al., 2014). Running …","url":["http://web.stanford.edu/class/cs224n/reports/6907927.pdf"]} +{"year":"2018","title":"CUNI team: CLEF eHealth Consumer Health Search Task 2018","authors":["S Saleh, P Pecina - CEUR Workshop Proceedings: Working Notes of CLEF, 2018"],"snippet":"… Document collection in the CLEF 2018 consumer health search task is created using CommonCrawl platform 1. First, the query set (described in Section 2.2) is submitted to Microsoft Bing APIs, and a list of domains is extracted from the top retrieved results …","url":["http://ceur-ws.org/Vol-2125/paper_201.pdf"]} +{"year":"2018","title":"CUNI Transformer Neural MT System for WMT18","authors":["M Popel - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… CzEng 1.7 57 065 618 424 543 184 Europarl v7 647 15 625 13 000 News Commentary v12 211 4 544 4 057 CommonCrawl 162 3 349 2 927 EN NewsCrawl 2016–17 47 483 934 981 CS NewsCrawl 2007–17 65 383 927 348 total 170 951 1 576 923 1 490 516 …","url":["http://www.aclweb.org/anthology/W18-6424"]} +{"year":"2018","title":"Cuttlefish: A Lightweight Primitive for Adaptive Query Processing","authors":["T Kaftan, M Balazinska, A Cheung, J Gehrke - arXiv preprint arXiv:1802.09180, 2018"],"snippet":"Page 1. 
Cuttlefish: A Lightweight Primitive for Adaptive Query Processing Tomer Kaftan University of Washington Magdalena Balazinska University of Washington Alvin Cheung University of Washington Johannes Gehrke Microsoft …","url":["https://arxiv.org/pdf/1802.09180"]} +{"year":"2018","title":"CytonMT: an Efficient Neural Machine Translation Open-source Toolkit Implemented in C++","authors":["X Wang, M Utiyama, E Sumita - arXiv preprint arXiv:1802.07170, 2018"],"snippet":"… Length penalty was applied through the formula in (Wu et al., 2016). Data Set # Sent. # Words Source Target CommonCrawl 2,399,123 54,572,703 58,869,785 Europarl v7 1,959,829 51,7061,34 54,327,972 News Comment. 223,153 5,689,117 5,660,789 Dev …","url":["https://arxiv.org/pdf/1802.07170"]} +{"year":"2018","title":"Dank Learning: Generating Memes Using Deep Neural Networks","authors":["V Peirson, L Abel, EM Tolunay - arXiv preprint arXiv:1806.04510, 2018"],"snippet":"… In this context, pretrained GloVe [3] word embeddings constitute the most suitable choice of word representation for our project, based on the fact that they have been trained on very large corpora, offering an option of words …","url":["https://arxiv.org/pdf/1806.04510"]} +{"year":"2018","title":"Data Augmentation and Deep Learning for Hate Speech Detection","authors":["K Hemker - 2018"],"snippet":"Page 1. IMPERIAL COLLEGE LONDON DEPARTMENT OF COMPUTING Data Augmentation and Deep Learning for Hate Speech Detection Author: Konstantin Hemker Supervisor: Dr. 
Björn Schuller Submitted in partial fulfillment …","url":["https://www.imperial.ac.uk/media/imperial-college/faculty-of-engineering/computing/public/1718-pg-projects/HemkerK-Data-Augmentation-and-Deep-Learning.pdf"]} +{"year":"2018","title":"Data Science with Vadalog: Bridging Machine Learning and Reasoning Data Science with Vadalog: Bridging Machine Learning and Reasoning","authors":["L Bellomarini, RR Fayzrakhmanov, G Gottlob…"],"snippet":"… including streams of structured or unstructured data from internal systems (eg, Enterprise Resource Planning, Workflow Management, and Supply Chain Management), external streams of unstructured data (eg, news and …","url":["https://www.groundai.com/project/data-science-with-vadalog-bridging-machine-learning-and-reasoning/"]} +{"year":"2018","title":"Data Science with Vadalog: Bridging Machine Learning and Reasoning","authors":["L Bellomarini, RR Fayzrakhmanov, G Gottlob… - arXiv preprint arXiv …, 2018"],"snippet":"… Workflow Management, and Supply Chain Management), external streams of unstructured data (eg, news and social media feeds, and Common Crawl1), publicly … 1 http://commoncrawl.org/ 2 http://www.cyc.com/researchcyc …","url":["https://arxiv.org/pdf/1807.08712"]} +{"year":"2018","title":"Data-Driven Language Understanding for Spoken Dialogue Systems","authors":["N Mrkšić - 2018"],"snippet":"Page 1. 
Data-Driven Language Understanding for Spoken Dialogue Systems Nikola Mrkšic Supervisor: Professor Steve Young Department of Engineering University of Cambridge This dissertation is submitted for the degree of Doctor …","url":["https://www.repository.cam.ac.uk/bitstream/handle/1810/276689/Mrksic-2018-PhD.pdf?sequence=1&isAllowed=y"]} +{"year":"2018","title":"Debunking Fake News One Feature at a Time","authors":["M Tosik, A Mallia, K Gangopadhyay - arXiv preprint arXiv:1808.02831, 2018"],"snippet":"… cosine tdidf Cosine similarity between headline/body TF-IDF vectors Real doc similarity Cosine similarity between averaged headline/body Common Crawl vectors Real doc similarity intro Cosine similarity between averaged …","url":["https://arxiv.org/pdf/1808.02831"]} +{"year":"2018","title":"Deep Extrofitting: Specialization and Generalization of Expansional Retrofitting Word Vectors using Semantic Lexicons","authors":["H Jo - arXiv preprint arXiv:1808.07337, 2018"],"snippet":"… The algorithm learns word vectors by making the dot products of word vectors equal to the logarithm of the words' probability of co-occurrence. 
We use glove.42B.300d trained on Common Crawl data, which has 1,917,493 unique words as 300-dimensional vectors …","url":["https://arxiv.org/pdf/1808.07337"]} +{"year":"2018","title":"Deep neural network architecture for sentiment analysis and emotion identification of Twitter messages","authors":["D Stojanovski, G Strezoski, G Madjarov, I Dimitrovski… - Multimedia Tools and …, 2018"],"snippet":"… SSWE 50 10M 137K Tweets word2vec 300 100B 3M GoogleNews GloVe Crawl 300 840B 2.2M Common Crawl GloVe Twitter 200 20B 1.2M Tweets Corpora size is expressed in token count with the exception of SSWE where only the number of tweets is provided …","url":["https://link.springer.com/article/10.1007/s11042-018-6168-1"]} +{"year":"2018","title":"Deep neural networks and distant supervision for geographic location mention extraction","authors":["A Magge, D Weissenbacher, A Sarker, M Scotch… - Bioinformatics, 2018"],"snippet":"AbstractMotivation. Virus phylogeographers rely on DNA sequences of viruses and the locations of the infected hosts found in public sequence databases like Gen.","url":["https://academic.oup.com/bioinformatics/article/34/13/i565/5045808"]} +{"year":"2018","title":"Deep Neural Networks for Query Expansion using Word Embeddings","authors":["A Imani, A Vakili, A Montazer, A Shakery - arXiv preprint arXiv:1811.03514, 2018"],"snippet":"… and newswire articles. This would make them more suitable for use in newswire collections. Using word embeddings pre-trained using common crawl data may yield better performance in web corpora. To summarize, the proposed …","url":["https://arxiv.org/pdf/1811.03514"]} +{"year":"2018","title":"Deep Semantic Learning for Conversational Agents","authors":["M Morisio, M Mensio - 2018"],"snippet":"Page 1. POLITECNICO DI TORINO Master of Science in Computer Engineering Master's Thesis Deep Semantic Learning for Conversational Agents Supervisor Prof. 
Maurizio Morisio Candidate Martino Mensio Tutor …","url":["https://www.researchgate.net/profile/Martino_Mensio/publication/324877915_Deep_Semantic_Learning_for_Conversational_Agents/links/5aeb205f45851588dd82cbc6/Deep-Semantic-Learning-for-Conversational-Agents.pdf"]} +{"year":"2018","title":"Deep-BGT at PARSEME Shared Task 2018: Bidirectional LSTM-CRF Model for Verbal Multiword Expression Identification","authors":["G Berk, B Erden, T Güngör - Proceedings of the Joint Workshop on Linguistic …, 2018"],"snippet":"… We chose the DEPREL tag as a feature in order to capture dependencies at sentence level. We use pre-trained word embeddings released by fastText (Grave et al., 2018), which were trained on Common Crawl and Wikipedia …","url":["http://www.aclweb.org/anthology/W18-4927"]} +{"year":"2018","title":"DeepPhish: Simulating Malicious AI","authors":["AC Bahnsen, I Torroledo, LD Camacho, S Villegas"],"snippet":"… Forest (RF) algorithm. Using a database comprised of one million legitimate URLs from the Common Crawl database, and one million phishing URLs from Phishtank, both models showed great statistical results. On one hand …","url":["https://albahnsen.com/wp-content/uploads/2018/05/deepphish-simulating-malicious-ai_submitted.pdf"]} +{"year":"2018","title":"DEPARTMENT OF INFORMATICS M. Sc. IN COMPUTER SCIENCE","authors":["II Koutsikakis, P Malakasiotis, J Pavlopoulos"],"snippet":"Page 1. DEPARTMENT OF INFORMATICS M.Sc. IN COMPUTER SCIENCE M.Sc. Thesis “Toxicity Detection in User Generated Content” Ioannis-Ion Koutsikakis ΕΥ1606 Supervisors: Ion Androutsopoulos Prodromos Malakasiotis …","url":["http://nlp.cs.aueb.gr/theses/koutsikakis_msc_thesis.pdf"]} +{"year":"2018","title":"Department of Informatics MSc in Computer Science","authors":["E Kyriakakis"],"snippet":"… tasks. Pretrained word vectors are usually trained in large corpora like Wikipedia or/and Common Crawl. Dozat et al. 
use 100D word2vec (Mikolov et al., 2013) pretrained (on Wikipedia and Common Crawl) word embeddings1. Dozat …","url":["http://www2.aueb.gr/users/ion/docs/kyriakakis_msc_thesis.pdf"]} +{"year":"2018","title":"Deriving Word Embeddings Using Multilingual Transfer Learning for Opinion Mining","authors":["K Stavridis, G Koloniari, E Keramopoulos - 2018 South-Eastern European Design …, 2018"],"snippet":"… A word's context refers to its neighboring words in a text corpus. The Word2Vec model was trained on a huge corpus from Google (news), while the GloVe model has several variations based on the training input such as Wikipedia, Twitter and Common Crawl …","url":["https://ieeexplore.ieee.org/abstract/document/8544940/"]} +{"year":"2018","title":"Detecting Misogynous Tweets","authors":["R Ahluwalia, E Shcherbinina, E Callow, A Nascimento… - Proceedings of the Third …, 2018"],"snippet":"… For comparison, we trained the same LSTM-networks using a fastText model trained on Common Crawl and Wikipedia text that maps words to 300-dimensional vectors [8]. Document-level embedding is an extension …","url":["http://ceur-ws.org/Vol-2150/AMI_paper3.pdf"]} +{"year":"2018","title":"Detecting Signs of Dementia Using Word Vector Representations","authors":["B Mirheidari, D Blackburn, T Walker, A Venneri… - Proc. 
Interspeech 2018, 2018"],"snippet":"… We initially used the 'w2vec' model pre-trained on the Google News dataset (3 million vocabulary size, 100 billions words and 300 vector size) and the 'GloVe' model pre-trained on the Common Crawl (2.2 million vocabulary …","url":["https://www.isca-speech.org/archive/Interspeech_2018/pdfs/1764.pdf"]} +{"year":"2018","title":"Detection of mergeable Wikipedia articles based on overlapping topics","authors":["R Wang, M Iwaihara"],"snippet":"… Google News 3M vocabulary 100 billion tokens Skip-gram Common Crawl 2.2M vocabulary 840 billion tokens Glove … The results are shown in Table 3 Table 3: Combination model result Combining embedding result(Common Crawl and Google News) accuracy …","url":["http://db-event.jpn.org/deim2018/data/papers/157.pdf"]} +{"year":"2018","title":"Determination of content score","authors":["A Kagoshima, K Londenberg, F Xu - US Patent App. 10/325,033, 2019","A Kagoshima, K Londenberg, F Xu - US Patent App. 15/337,268, 2018"],"snippet":"… 310. The crawler module 310 may automatically crawl a network and acquire contents from one or more resources in the network, or acquire the contents from an open repository of web crawl data such as CommonCrawl.org …","url":["https://patentimages.storage.googleapis.com/56/46/d7/219015bf015761/US10325033.pdf","https://patents.google.com/patent/US20180121430A1/en"]} +{"year":"2018","title":"Developing Chatbots","authors":["NK Manaswi - Deep Learning with Applications Using Python, 2018"],"snippet":"… 200. 6B. 400,000. GloVe. 10+10. GloVe. Wikipedia + Gigaword 5. 300. 6B. 400,000. GloVe. 10+10. GloVe. Common Crawl 42B. 300. 42B. 1.9M. GloVe. AdaGrad. GloVe. Common Crawl 840B. 300. 840B. 2.2M. GloVe. AdaGrad …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-3516-4_11"]} +{"year":"2018","title":"Di-LSTM Contrast: A Deep Neural Network for Metaphor Detection","authors":["K Swarnkar, AK Singh - NAACL HLT 2018, 2018"],"snippet":"… variations of our model. 
We use 300-dimensional GloVe vectors (Pennington et al., 2014) trained on 6B Common Crawl corpus as word embeddings, setting the embeddings of out- of-vocabulary words to zero. To prevent overfit …","url":["http://www.cl.cam.ac.uk/~es407/papers/Fig-Lang2018-proceedings.pdf#page=127"]} +{"year":"2018","title":"Differential Approach to Web-Corpus Construction","authors":["T Shavrina"],"snippet":"… This approach does not allow saving maximum of metatext information, since all the boilerplate is deleted, but it allows to gather a lot in a short time (example – Common Crawl). 2) fitted – all the materials from the listed thousands of URLs are crawled …","url":["http://www.dialog-21.ru/media/4261/shavrina.pdf"]} +{"year":"2018","title":"Discovering Implicit Knowledge with Unary Relations","authors":["M Glass, A Gliozzo - Proceedings of the 56th Annual Meeting of the …, 2018"],"snippet":"… As part of the contributions for this paper, we release the benchmark to the re- search community providing the software needed to generate it from Common Crawl and DBpedia as an open source project2. As a baseline, we adapt a state of the art deep learning based …","url":["http://www.aclweb.org/anthology/P18-1147"]} +{"year":"2018","title":"Discovering Novel Emergency Events in Text Streams","authors":["D Deviatkin, A Shelmanov, D Larionov - 2018"],"snippet":"… of emergency related messages by incorporating more labeled data from CrisisLex corpora [24] and by exploring: ● Various embeddings: word-level: fastText [16] (trained on our own corpus / pre-trained on English Wikipedia) …","url":["http://ceur-ws.org/Vol-2277/paper36.pdf"]} +{"year":"2018","title":"Discovering Phonesthemes with Sparse Regularization","authors":["NF Liu, GA Levow, NA Smith"],"snippet":"… 3 Data For our experiments, we use 300-dimensional GloVe (Pennington et al., 2014) English word em- beddings trained on the cased Common Crawl. 
Many of the terms in the set of pretrained vectors are not English words …","url":["https://homes.cs.washington.edu/~nfliu/papers/liu+levow+smith.sclem2018.pdf"]} +{"year":"2018","title":"Discriminator at SemEval-2018 Task 10: Minimally Supervised Discrimination","authors":["A Kulmizev, M Abdou, V Ravishankar, M Nissim - Proceedings of The 12th …, 2018"],"snippet":"… The VSM used in our final submission consisted of an av- erage of three sets of embeddings: GloVe word embeddings trained on Common Crawl (840B to- kens) (Pennington et al., 2014), the same GloVe embeddings post …","url":["http://www.aclweb.org/anthology/S18-1167"]} +{"year":"2018","title":"Distinguishing attributes using text corpora and relational knowledge","authors":["R Speer, J Lowry-Duda"],"snippet":"… Unicode CLDR emoji data • word2vec, precomputed on Google News • GloVe, precomputed on the Common Crawl • fastText, customized to learn from parallel text, trained on OpenSubtitles 2016 We used the embeddings …","url":["http://blog.conceptnet.io/2018/06/naacl2018-poster.pdf"]} +{"year":"2018","title":"Distributed Evaluation of Subgraph Queries Using Worstcase Optimal LowMemory Dataflows","authors":["K Ammar, F McSherry, S Salihoglu, M Joglekar - arXiv preprint arXiv:1802.03760, 2018"],"snippet":"Page 1. Distributed Evaluation of Subgraph Queries Using Worst-case Optimal Low-Memory Dataflows Khaled Ammar†, Frank McSherry‡, Semih Salihoglu†, Manas Joglekar♯ †University of Waterloo, ‡ETH Zürich,♯Google …","url":["https://arxiv.org/pdf/1802.03760"]} +{"year":"2018","title":"Distributed Representations of Tuples for Entity Resolution","authors":["MESTS Joty, MON Tang - Proceedings of the VLDB Endowment, 2018","MESTS Joty, MON Tang - Proceedings of the VLDB Endowment,(11), 2018"],"snippet":"… is not always possible or feasible. For example, the popular GloVe dictionary is trained on the Common Crawl corpus, which is almost 2 TB requiring exorbitant computing re- sources. 
Given an unseen word, another simplistic …","url":["http://da.qcri.org/ntang/pubs/vldb18-deeper.pdf","https://pdfs.semanticscholar.org/334e/9eb88738671a5c9a53dea174586e885ec00b.pdf"]} +{"year":"2018","title":"DL Team at SemEval-2018 Task 1: Tweet Affect Detection using Sentiment Lexicons and Embeddings","authors":["D Kravchenko, L Pivovarova - Proceedings of The 12th International Workshop on …, 2018"],"snippet":"… We use the following two models: 1. Common Crawl: 300-dimensional vectors trained on huge Internet corpus of 840 billion tokens and 2.2 million distinct words … GloVe Common Crawl 46.93 53.98 43.66 56.31 66.38 59.26 Google …","url":["http://www.aclweb.org/anthology/S18-1025"]} +{"year":"2018","title":"DMCB at SemEval-2018 Task 1: Transfer Learning of Sentiment Classification Using Group LSTM for Emotion Intensity prediction","authors":["Y Kim, H Lee - Proceedings of The 12th International Workshop on …, 2018"],"snippet":"… We try five pre-trained word embeddings to choose the best one for the target model. 
Two are trained with GloVe (Pennington et al., 2014) using different data sets: one1 is trained with very large data in Common crawl, and the …","url":["http://www.aclweb.org/anthology/S18-1044"]} +{"year":"2018","title":"Domain Adapted Word Embeddings for Improved Sentiment Classification","authors":["PK Sarma, YI Liang, WA Sethares - arXiv preprint arXiv:1805.04576, 2018","PKSYL William, A Sethares"],"snippet":"… Word embedding Dimension GloVe 100 word2vec 300 LSA 70 CCA-DA 68 KCCA-DA 68 GloVe common crawl 300 AdaptGloVe 300 … WG ∈ R|VG|×d2 be the matrix of generic word embeddings (obtained by, eg, running …","url":["https://ar5iv.labs.arxiv.org/html/1805.04576","https://arxiv.org/pdf/1805.04576"]} +{"year":"2018","title":"Dropout during inference as a model for neurological degeneration in an image captioning network","authors":["B Li, R Zhang, F Rudzicz - arXiv preprint arXiv:1808.03747, 2018"],"snippet":"… The neural network model is implemented us- ing PyTorch (Paszke et al., 2017). We use GLoVe embeddings from SpaCy (Pennington et al., 2014; Honnibal and Johnson, 2015) trained on Common Crawl. The network was trained using Adam (Kingma and Ba, 2014) …","url":["https://arxiv.org/pdf/1808.03747"]} +{"year":"2018","title":"Dynamic Meta-Embeddings for Improved Sentence Representations","authors":["D Kiela, C Wang, K Cho - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… Figure 4 shows a similar breakdown for open versus closed class. The analysis allows us to make several interesting observations: it appears that this model prefers GloVe embeddings, followed by the two FastText …","url":["http://www.aclweb.org/anthology/D18-1176"]} +{"year":"2018","title":"Effective Strategies for Combining Attention Mechanism with LSTM for Aspect-Level Sentiment Classification","authors":["K Shuang, X Ren, H Guo, J Loo, P Xu - Proceedings of SAI Intelligent Systems …, 2018"],"snippet":"… 346. Neg. 807. 196. 870. 128. 1560. 173. Total. 3608. 1120. 2328. 
638. 6248. 692. 4.2 Experimental Settings. In the experiments, pre-trained word vectors are trained by Glove on Common Crawl Corpus size and they are used to …","url":["https://link.springer.com/chapter/10.1007/978-3-030-01057-7_62"]} +{"year":"2018","title":"Efficient and Robust Question Answering from Minimal Context over Documents","authors":["S Min, V Zhong, R Socher, C Xiong - arXiv preprint arXiv:1805.08092, 2018"],"snippet":"Page 1. Efficient and Robust Question Answering from Minimal Context over Documents Sewon Min1∗, Victor Zhong2, Richard Socher2, Caiming Xiong2 Seoul National University1, Salesforce Research2 shmsw25@snu …","url":["https://arxiv.org/pdf/1805.08092"]} +{"year":"2018","title":"Efficient Unsupervised Word Sense Induction, Disambiguation and Embedding","authors":["BH Soleimani, H Naderi, S Matwin"],"snippet":"… as sentiment analysis, translation, etc. 2 Proposed Method Similar to word embeddings, sense embeddings should also be learned from a large text corpus such as Wikipedia or Common Crawl data. In our method, we first calculate …","url":["https://bigdata1.research.cs.dal.ca/behrouz/publication/nipsw2018/NIPSW2018_EfficientWordSenseDisambiguation.pdf"]} +{"year":"2018","title":"Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl","authors":["J Bevendorff, B Stein, M Hagen, M Potthast - European Conference on Information …, 2018"],"snippet":"Abstract Elastic ChatNoir (Search: www. chatnoir. eu Code: www. github. com/chatnoir-eu) is an Elasticsearch-based search engine offering a freely accessible search interface for the two ClueWeb corpora and the Common Crawl …","url":["https://link.springer.com/chapter/10.1007/978-3-319-76941-7_83"]} +{"year":"2018","title":"EmbNum: Semantic labeling for numerical values with deep metric learning","authors":["P Nguyen, K Nguyen, R Ichise, H Takeda - arXiv preprint arXiv:1807.01367, 2018"],"snippet":"… In a study of Lehmberg et al. 
[1], 233 million table data resources have extracted from the July 2015 version of the Common Crawl3. Additionally, Mitlohner et al … Table schema is 3 http://commoncrawl.org/ arXiv:1807.01367 …","url":["https://arxiv.org/pdf/1807.01367"]} +{"year":"2018","title":"Emergency Vocabulary","authors":["DM Nemeskey, A Kornai - Information Systems Frontiers"],"snippet":"… 2004) and CommonCrawl.1 In the normal course of events, emergencies like natural disasters, military escalation, epidemic outbreaks etc … 5 Originally, the embedding contains 2,196,017 words with the associated 300-dimension …","url":["https://link.springer.com/article/10.1007/s10796-018-9843-x"]} +{"year":"2018","title":"Emotion Detection and Classification in a Multigenre Corpus with Joint Multi-Task Deep Learning","authors":["S Tafreshi, M Diab - Proceedings of the 27th International Conference on …, 2018"],"snippet":"… Common training set for word embedding models are wiki+news, news, tweets, and common crawl, and the methods are Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and fasText (Bojanowski et al., 2016) …","url":["http://www.aclweb.org/anthology/C18-1246"]} +{"year":"2018","title":"EmotionLines: An Emotion Corpus of Multi-Party Conversations","authors":["SY Chen, CC Hsu, CC Kuo, LW Ku - arXiv preprint arXiv:1802.08379, 2018"],"snippet":"… Given a utterance of M words, the one-hot encoding for utterance words is denoted by U = {w1, w2, w3,..., wM }. We first embed the words to the word embedding , which is publicly available 300-dimensional GloVe pre-trained on Common Crawl data (Pennington et al., 2014) …","url":["https://arxiv.org/pdf/1802.08379"]} +{"year":"2018","title":"EmotionX-JTML: Detecting emotions with Attention","authors":["J Torres - Proceedings of the Sixth International Workshop on …, 2018"],"snippet":"… Each column of the matrix stores the em- beddings of the corresponding word, resulting in d dimensional input matrix M ∈ RM×d. 
The weights of the word embeddings use the 300dimensional GloVe Embeddings …","url":["http://www.aclweb.org/anthology/W18-3510"]} +{"year":"2018","title":"End-to-End Retrieval in Continuous Space","authors":["D Gillick, A Presta, GS Tomar - arXiv preprint arXiv:1811.08008, 2018"],"snippet":"… embeddings using Inverse Document Frequency (IDF)4, and try 3 different settings for pre-training: standard word2vec, word2vec trained with the Paralex dataset (closer to the question domain), and Glove (Pennington …","url":["https://arxiv.org/pdf/1811.08008"]} +{"year":"2018","title":"Energy-Aware Data Throughput Optimization for Next Generation Internet","authors":["T Kosar, I Alan, MF Bulut - Information Sciences, 2018"],"snippet":"… Three different representative datasets were used during experiments in order to capture the throughput and power consumption differences based on the dataset type: (i) the HTML dataset is a set of raw HTML files from the …","url":["https://www.sciencedirect.com/science/article/pii/S0020025518307874"]} +{"year":"2018","title":"Engaging English Speaking Facebook Users in an Anti-ISIS Awareness Campaign","authors":["A Shajkovci - Journal of Strategic Security, 2018"],"snippet":"… STRONG VERY LOW 0 133 able, act, adult, advantage, aggressive, agreement, army, assistance, attack, aware, benefit, b . . . WEAK LOW 16 41 admit, apologize, ashamed, bum, common, crawl, death, depression, desperate, die, disgrace, dis …","url":["https://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1679&context=jss"]} +{"year":"2018","title":"English-Basque Statistical and Neural Machine Translation","authors":["IJ Unanue, LG Arratibel, EZ Borzeshi, M Piccardi - Proceedings of the Eleventh …, 2018"],"snippet":"… For English, we have used the available CommonCrawl pre-trained embeddings3. 
We have evaluated the use of these embeddings in two different ways: maintaining them fixed during both training and testing (f-emb) …","url":["http://www.aclweb.org/anthology/L18-1141"]} +{"year":"2018","title":"Entity-Oriented Search","authors":["K Balog - 2018"],"snippet":"Page 1. The Information Retrieval Series Krisztian Balog EntityOriented Search Page 2. The Information Retrieval Series Volume 39 Series Editors ChengXiang Zhai Maarten de Rijke Editorial Board Nicholas J. Belkin Charles …","url":["https://link.springer.com/content/pdf/10.1007/978-3-319-93935-3.pdf"]} +{"year":"2018","title":"ERASeD: Exposing Racism And Sexism using Deep Learning","authors":["S Paul, J Bhaskaran","SPJ Bhaskaran"],"snippet":"… We tried out the following methods: • Pretrained Embeddings – GloVe (Twitter) – GloVe (Common Crawl) The biggest drawback of using pretrained embeddings is that 20% of the words we see in the dataset from Twitter do not have embeddings …","url":["http://web.stanford.edu/class/cs224n/reports/6908288.pdf","https://pdfs.semanticscholar.org/7d21/50c3d9028b6b58fb7fb01d70e4ffe08d2c58.pdf"]} +{"year":"2018","title":"eSCAPE: a Large-scale Synthetic Corpus for Automatic Post-Editing","authors":["M Negri, M Turchi, R Chatterjee, N Bertoldi - arXiv preprint arXiv:1803.07274, 2018"],"snippet":"… It consists of 4.3 million instances created by first filtering a subset of IT-related sentences from the German Common Crawl corpus6, and then by using two English–German and German–English PBMT systems trained on …","url":["https://arxiv.org/pdf/1803.07274"]} +{"year":"2018","title":"Evaluating MT for massive open online courses","authors":["S Castilho, J Moorkens, F Gaspari, R Sennrich, A Way… - Machine Translation, 2018"],"snippet":"… 2012) as distributed in OPUS (Tiedemann 2012) – OPUS European Central Bank (ECB) – OPUS European Medicines Agency (EMEA) – OPUS EU Bookshop – OPUS OpenSubtitles5 – WMT News Commentary – WMT 
…","url":["https://link.springer.com/article/10.1007/s10590-018-9221-y"]} +{"year":"2018","title":"Evaluating the psychological plausibility of word2vec and GloVe distributional semantic models","authors":["I Kajic, C Eliasmith"],"snippet":"… In this work, we use pre-trained GloVe and word2vec vectors. GloVe vectors were trained using the Common Crawl dataset containing approximately 840 billion word tokens. Word2vec vectors were trained on the Google News …","url":["http://compneuro.uwaterloo.ca/files/publications/kajic.2018a.pdf"]} +{"year":"2018","title":"Evaluation of High-Dimensional Word Embeddings using Cluster and Semantic Similarity Analysis","authors":["S Atakishiyev - 2018"],"snippet":"Page 1. Evaluation of High-Dimensional Word Embeddings using Cluster and Semantic Similarity Analysis by Shahin Atakishiyev A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science …","url":["https://www.researchgate.net/profile/Shahin_Atakishiyev/publication/325945300_Evaluation_of_High-Dimensional_Word_Embeddings_using_Cluster_and_Semantic_Similarity_Analysis/links/5b2de0ac0f7e9b0df5be7f0d/Evaluation-of-High-Dimensional-Word-Embeddings-using-Cluster-and-Semantic-Similarity-Analysis.pdf"]} +{"year":"2018","title":"Evaluation of sentence embeddings in downstream and linguistic probing tasks","authors":["CS Perone, R Silveira, TS Paula - arXiv preprint arXiv:1806.06259, 2018"],"snippet":"… 1024 Word2Vec (BoW, Google news) Self-supervised 300 p-mean (monolingual) – 3600 FastText (BoW, Common Crawl) Self-supervised 300 GloVe (BoW, Common Crawl) Self-supervised 300 USE (DAN) Supervised 512 USE (Transformer) Supervised 512 …","url":["https://arxiv.org/pdf/1806.06259"]} +{"year":"2018","title":"Evidence of semantic processing difficulty in naturalistic reading","authors":["C Shain, R Futrell, M van Schijndel, E Gibson…"],"snippet":"… a novel measure — incremental semantic relatedness — for three naturalistic reading time corpora: Dundee 
[12], UCL [7], and Natural Stories [9]. In particular, we embedded all three corpora using GloVe vectors [20] pretrained …","url":["https://vansky.github.io/assets/pdf/shain_etal-2018-cuny.pdf"]} +{"year":"2018","title":"Existing Web Archives","authors":["P Webster - The SAGE Handbook of Web History. London: Sage, 2018"]} +{"year":"2018","title":"Explicit Ensemble Attention Learning for Improving Visual Question Answering","authors":["V Lioutas, N Passalis, A Tefas - Pattern Recognition Letters, 2018"],"snippet":"Visual Question Answering (VQA) is among the most difficult multi-modal problems as it requires a machine to be able to properly understand a question about ar.","url":["https://www.sciencedirect.com/science/article/pii/S0167865518301600"]} +{"year":"2018","title":"Explicit Retrofitting of Distributional Word Vectors","authors":["G Glavaš, I Vulić - Proceedings of the 56th Annual Meeting of the …, 2018"],"snippet":"… al., 2013) using the Skip-Gram algorithm with Negative Sampling (SGNS) (Mikolov et al., 2013b) by Levy and Goldberg (2014b), using the context windows of size 2; (2) GLOVE-CC – vectors trained with the GloVe (Pennington …","url":["http://www.aclweb.org/anthology/P18-1004"]} +{"year":"2018","title":"Exploiting Efficient and Effective Lazy Semi-Bayesian Strategies for Text Classification","authors":["F Viegas, L Rocha, E Resende, T Salles, W Martins… - Neurocomputing, 2018"],"snippet":"Automatic Document Classification (ADC) has become the basis of many important applications, eg, authorship identification, opinion mining, spam filtering, co.","url":["https://www.sciencedirect.com/science/article/pii/S0925231218304636"]} +{"year":"2018","title":"Exploring and Mitigating Gender Bias in GloVe Word Embeddings","authors":["MFLC Vera"],"snippet":"… and the cosine similarity. Finally, to show that gender 100 biases exist across different types of embeddings, we use both the GloVe word embeddings 101 pretrained on Wiki and pretrained on Common Crawl. 
102 Plotting all …","url":["https://pdfs.semanticscholar.org/512f/ccca0d3a5f454d5360d32d48e3d4d5ddd578.pdf"]} +{"year":"2018","title":"Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks","authors":["L Song, Z Wang, M Yu, Y Zhang, R Florian, D Gildea - arXiv preprint arXiv …, 2018"],"snippet":"Page 1. Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks Linfeng Song1∗, Zhiguo Wang2†, Mo Yu2†, Yue Zhang3, Radu Florian2 and Daniel Gildea1 …","url":["https://arxiv.org/pdf/1809.02040"]} +{"year":"2018","title":"Exploring Neural Methods for Parsing Discourse Representation Structures","authors":["R van Noord, L Abzianidze, A Toral, J Bos - arXiv preprint arXiv:1810.12579, 2018"],"snippet":"… We simply add the word embedding vector after the sequence of character-embeddings for each word in the input and still initialise these embeddings using the pre-trained GloVe embeddings. 6The Common …","url":["https://arxiv.org/pdf/1810.12579"]} +{"year":"2018","title":"Exploring Optimism and Pessimism in Twitter Using Deep Learning","authors":["C Caragea, LP Dinu, B Dumitru"],"snippet":"… We used mini-batches of 40 samples. Dropout rate was set to 0.5 and the classifier's last three layers have 300, 200, and 100 neurons. 
We used GloVe vectors (Pennington et al., 2014) trained on Common Crawl …","url":["https://www.cs.uic.edu/~cornelia/papers/emnlp18_op.pdf"]} +{"year":"2018","title":"Exploring speed and memory trade-offs for achieving optimum performance on SQuAD dataset","authors":["R Aksitov"],"snippet":"… bidirectional in this architecture was not producing any measurable gains for some time, until I also switched to Common Crawl embedding with … When I switched to CommonCrawl embeddings, I have found out that Tensorflow …","url":["http://web.stanford.edu/class/cs224n/reports/6909117.pdf"]} +{"year":"2018","title":"Exploring the importance of context and embeddings in neural NER models for task-oriented dialogue systems","authors":["P Jayarao, C Jain, A Srivastava - arXiv preprint arXiv:1812.02370, 2018"],"snippet":"… We used the publicly available Glove embeddings1 (Pennington et al., 2014) trained on Wikipedia 2014 corpus with dimension sizes 50 (G50W) and 300 (G300W) and another 300 di- mensional embeddings (G300C) trained on …","url":["https://arxiv.org/pdf/1812.02370"]} +{"year":"2018","title":"ExtRA: Extracting Prominent Review Aspects from Customer Feedback","authors":["Z Luo, S Huang, FF Xu, BY Lin, H Shi, K Zhu - … of the 2018 Conference on Empirical …, 2018"],"snippet":"Page 1. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3477–3486 Brussels, Belgium, October 31 - November 4, 2018. c 2018 Association for Computational Linguistics 3477 …","url":["http://www.aclweb.org/anthology/D18-1384"]} +{"year":"2018","title":"Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons","authors":["H Jo, SJ Choi - arXiv preprint arXiv:1804.07946, 2018"],"snippet":"… 3 Experiment Data 3.1 Pretrained Word Vectors GloVe (Pennington et al., 2014) has lots of variations in respect to word dimension, number of to- kens, and train sources. 
We used glove.6B trained on Wikipedia+Gigawords …","url":["https://arxiv.org/pdf/1804.07946"]} +{"year":"2018","title":"Facebook Use of Sensitive Data for Advertising in Europe","authors":["JG Cabañas, Á Cuevas, R Cuevas - arXiv preprint arXiv:1802.05030, 2018"],"snippet":"Page 1. arXiv:1802.05030v1 [cs.SI] 14 Feb 2018 Facebook Use of Sensitive Data for Advertising in Europe José González Caba˜nas Universidad Carlos III de Madrid jgcabana@it.uc3m.es ´Angel Cuevas Universidad Carlos III de Madrid acrumin@it.uc3m.es …","url":["https://arxiv.org/pdf/1802.05030"]} +{"year":"2018","title":"Facilitating mapping of control policies to regulatory documents","authors":["SB Tirumala, A Jagmohan, E Khabiri, TH Li… - US Patent App. 15/349,766, 2018"],"snippet":"… dictionary 206). The global corpora 203 can comprise a general internet-based collection of texts derived from various sources (eg, GUTENBERG®, REUTERS®, COMMON CRAWL®, and/or GOOGLE NEWS®). The regulatory …","url":["https://patents.google.com/patent/US20180137107A1/en"]} +{"year":"2018","title":"Fast Dictionary-Based Compression for Inverted Indexes","authors":["GE Pibiri, M Petri, A Moffat - 2019"],"snippet":"… We use the standard Gov2 collection containing 426 GiB of text; and CCNEWS, an English subset of the freely available NEWS subset of the CommonCrawl1, consisting of news articles in the period 09/01/16 to …","url":["http://pages.di.unipi.it/pibiri/papers/WSDM19.pdf"]} +{"year":"2018","title":"Filtering and Mining Parallel Data in a Joint Multilingual Space","authors":["H Schwenk - arXiv preprint arXiv:1805.09822, 2018"],"snippet":"… only a limited amount of parallel training data is available (4.5M sentences). 2.1M are high quality human translations and 2.4M are crawled and aligned sentences (Common Crawl corpus). 
As in other works, we use …","url":["https://arxiv.org/pdf/1805.09822"]} +{"year":"2018","title":"Fine-grained Entity Typing through Increased Discourse Context and Adaptive Classification Thresholds","authors":["S Zhang, K Duh, B Van Durme - arXiv preprint arXiv:1804.08000, 2018"],"snippet":"… 3.2 Hyperparameters We use open-source GloVe vectors (Pennington et al., 2014) trained on Common Crawl 840B with 300 dimensions to initialize word embeddings used in all encoders. All weight parameters are sampled from U(−0.01,0.01) …","url":["https://arxiv.org/pdf/1804.08000"]} +{"year":"2018","title":"Focused Crawling Through Reinforcement Learning","authors":["M Han, PH Wuillemin, P Senellart - International Conference on Web Engineering, 2018"],"snippet":"… efficiently crawl relevant pages. In future work, we hope to evaluate our method in larger and various datasets, such as full English Wikipedia and dataset from the site http://commoncrawl.org/, etc. Another challenging possibility …","url":["https://link.springer.com/chapter/10.1007/978-3-319-91662-0_20"]} +{"year":"2018","title":"Follow The Money: Online Piracy and Self-Regulation in the Advertising Industry","authors":["M Batikas, J Claussen, C Peukert - 2018"],"snippet":"… Get HTML source code files from Commoncrawl and parse them (∼500,000 webpages) … 10 Page 14. Figure 2: Timing of data snapshots 2016w7 2016w15 2016w23 2016w31 2016w39 2016w47 2017w3 2017w11 Note: Available data snapshots from Common Crawl …","url":["http://www.cesifo-group.de/DocDL/cesifo1_wp6852.pdf"]} +{"year":"2018","title":"Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities: Supplementary Material","authors":["H Pham, PP Liang, T Manzini, LP Morency, B Poczós"],"snippet":"… visual and acoustic modalities. Language: We used 300 dimensional Glove word embeddings trained on 840 billion tokens from the common crawl dataset (Pennington, Socher, and Manning, 2014). 
These word embeddings …","url":["http://www.cs.cmu.edu/~pliang/papers/aaai2019_seq2seq_supp.pdf"]} +{"year":"2018","title":"Freezing Subnetworks to Analyze Domain Adaptation in Neural Machine Translation","authors":["B Thompson, H Khayrallah, A Anastasopoulos… - arXiv preprint arXiv …, 2018"],"snippet":"… 1 For De–En and Ru– En, we also use data from WMT 2017 (Bojar et al., 2017),2 which contains data from several sources: Europarl (parliamentary proceedings) (Koehn, 2005),3 News Commentary (political and economic …","url":["https://arxiv.org/pdf/1809.05218"]} +{"year":"2018","title":"From Mad Men to Maths Men: Concentration and Buyer Power in Online Advertising","authors":["F Decarolis, G Rovigatti"],"snippet":"Page 1. From Mad Men to Maths Men: Concentration and Buyer Power in Online Advertising ∗ Francesco Decarolis Gabriele Rovigatti PRELIMINARY AND INCOMPLETE PLEASE DO NOT CIRCULATE Abstract This paper …","url":["https://www.tse-fr.eu/sites/default/files/TSE/documents/ChaireJJL/Digital-Economics-Conference/Conference/decarolis_franceso.pdf"]} +{"year":"2018","title":"FROM TEXT CLASSIFICATION TO IMAGE CLUSTERING, PROBLEMS LESS OPTIMIZED","authors":["A Herandi - 2018"],"snippet":"… We have used similar models for our BiLSTM and BiGRU models. We are using two layers stacked on top of each other and use only the last output from each direction from the second layer for our inference, because we are 1 http://commoncrawl.org/ Page 18. 10 …","url":["https://uta-ir.tdl.org/uta-ir/bitstream/handle/10106/27492/HERANDI-THESIS-2018.pdf?sequence=1"]} +{"year":"2018","title":"Full text document","authors":["Z Bouraoui, S Jameel, S Schockaert"],"snippet":"… set3 (SG-GN). We also use two embeddings that have been learned with GloVe, one from the same english Wikipedia dump (GloVe-Wiki) and one from the 840B words Common Crawl data set4 (GloVe-CC). 
For relations with …","url":["https://kar.kent.ac.uk/67263/1/Shoaib-COLING-2018.pdf"]} +{"year":"2018","title":"Fully Convolutional Speech Recognition","authors":["N Zeghidour, Q Xu, V Liptchinsky, N Usunier… - arXiv preprint arXiv …, 2018"],"snippet":"… 5.83 12.69 (12k training hours AM, common crawl LM) Sequence-to- sequence [9] 3.54 11.52 3.82 12.76 … (speaker adaptation, 3k acoustic states) DeepSpeech 2 [7] 5 3.6 (12k training hours AM, common crawl LM) Frontend …","url":["https://arxiv.org/pdf/1812.06864"]} +{"year":"2018","title":"Garbage In, Garbage Out","authors":["H Sanders"],"snippet":"… VirusTotal 10 million URLs from January 2017 ≈ 4% malware CommonCrawl & PhishTank Sophos 10 million internal URLs from January … VirusTotal 10 million URLs from January 2017 ≈ 4% malware CommonCrawl & Phishtank …","url":["https://pdfs.semanticscholar.org/c940/a2144b1e5cc89bbeb9003b5f3541e50fc9e1.pdf"]} +{"year":"2018","title":"Gated Convolutional Neural Network for Sentence Matching","authors":["P Chen, W Guo, Z Chen, J Sun, L You - memory, 2018"],"snippet":"… 3.2. Training We implement our models on Tensorflow [26] and train them on an Nvidia GeForce GTX 1080 GPU. 
We initialize the word em- beddings with 300-dimensional GloVe word vectors pretrained from 840B Common Crawl corpus [19] …","url":["https://www.isca-speech.org/archive/Interspeech_2018/pdfs/0070.pdf"]} +{"year":"2018","title":"Generalizing and Improving Bilingual Word Embedding Mappings with a Multi-Step Framework of Linear Transformations","authors":["M Artetxe, G Labaka, E Agirre - 2018"],"snippet":"… The corpora used consisted of 2.8 billion words for English (ukWaC + Wikipedia + BNC), 1.6 billion words for Italian (itWaC), 0.9 billion words for German (SdeWaC), and 2.8 billion words for Finnish (Common Crawl from WMT 2016) …","url":["http://ixa.eus/sites/default/files/dokumentuak/11455/aaai18.pdf"]} +{"year":"2018","title":"Generating Wikipedia by Summarizing Long Sequences","authors":["PJ Liu, M Saleh, E Pot, B Goodrich, R Sepassi, L Kaiser… - arXiv preprint arXiv …, 2018"],"snippet":"… To encourage further research on large-scale summarization, we will release the URLs used in our experiments (the Wikipedia URL as well as the URLs of its references) that are available as part of the CommonCrawl dataset4, which is freely available for download …","url":["https://arxiv.org/pdf/1801.10198"]} +{"year":"2018","title":"Generation of code from text description with syntactic parsing and Tree2Tree model","authors":["A Stehnii - 2017"],"snippet":"Page 1 …","url":["http://www.er.ucu.edu.ua/bitstream/handle/1/1191/Stehnii-master-thesis.pdf?sequence=1&isAllowed=y"]} +{"year":"2018","title":"Global normalized reader systems and methods","authors":["J RAIMAN, J Miller - US Patent App. 15/706,486, 2018"],"snippet":"… prediction. The hidden dimensions of all recurrent layers were 200. In embodiments, the 300 dimensional 8.4B token Common Crawl GloVe vectors were used. Words missing from the Common Crawl vocabulary were set to zero. 
In …","url":["https://patentimages.storage.googleapis.com/de/9c/d5/dabff582992ff9/US20180300312A1.pdf"]} +{"year":"2018","title":"GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding","authors":["A Wang, A Singh, J Michael, F Hill, O Levy… - arXiv preprint arXiv …, 2018"],"snippet":"Page 1. arXiv:1804.07461v1 [cs.CL] 20 Apr 2018 GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding Alex Wang1, Amanpreet Singh1, Julian Michael2, Felix Hill3, Omer Levy2, and Samuel R. Bowman1 …","url":["https://arxiv.org/pdf/1804.07461"]} +{"year":"2018","title":"GraphIt-A High-Performance DSL for Graph Analytics","authors":["Y Zhang, M Yang, R Baghdadi, S Kamil, J Shun… - arXiv preprint arXiv …, 2018"],"snippet":"Page 1. GraphIt – A High-Performance DSL for Graph Analytics YUNMING ZHANG, MIT CSAIL MENGJIAO YANG, MIT CSAIL RIYADH BAGHDADI, MIT CSAIL SHOAIB KAMIL, Adobe Research JULIAN SHUN, MIT CSAIL SAMAN AMARASINGHE, MIT CSAIL …","url":["https://arxiv.org/pdf/1805.00923"]} +{"year":"2018","title":"Harnessing NLG to Create Finnish Poetry Automatically","authors":["M Hämäläinen"],"snippet":"… This is 3The semantic repository can be browsed and downloaded on https://mikakalevi. com/semfi/ 4http://commoncrawl.org/ due to the Finnish agreement rule which requires the case of an adjective attribute to agree with the case of the noun …","url":["http://computationalcreativity.net/iccc2018/sites/default/files/papers/ICCC_2018_paper_6.pdf"]} +{"year":"2018","title":"Heuristics-based Query Reordering for Federated Queries in SPARQL 1.1 and SPARQL-LD","authors":["T Yannakis, P Fafalios, Y Tzitzikas"],"snippet":"… The question is: how 3 http://commoncrawl.org/ 4 http://webdatacommons. org/structureddata/2016-10/stats/stats.html Page 2. 
2 Thanos Yannakis, Pavlos Fafalios, and Yannis Tzitzikas can we efficiently query this large, distributed …","url":["http://l3s.de/~fafalios/files/pubs/fafalios2018_QuWeDa.pdf"]} +{"year":"2018","title":"HFL-RC System at SemEval-2018 Task 11: Hybrid Multi-Aspects Model for Commonsense Reading Comprehension","authors":["Z Chen, Y Cui, W Ma, S Wang, T Liu, G Hu - arXiv preprint arXiv:1803.05655, 2018"],"snippet":"… Page 4. 3 Experiments 3.1 Experimental Setups We listed the main hyper-parameters of our model in Table 1. The word embeddings are initialized by the pre-trained GloVe word vectors (Common Crawl, 6B tokens, 100-dimension) (Pennington et al., 2014) …","url":["https://arxiv.org/pdf/1803.05655"]} +{"year":"2018","title":"Hierarchical Neural Networks for Sequential Sentence Classification in Medical Scientific Abstracts","authors":["D Jin, P Szolovits - arXiv preprint arXiv:1808.06161, 2018"],"snippet":"… pus of Wikipedia 2014 + Gigaword 57 (denoted as “Glove-wiki”), fastText embeddings pre-trained on Wikipedia8 (denoted as “FastText-wiki”), and fastText embeddings initialized with the standard GloVe Common Crawl …","url":["https://arxiv.org/pdf/1808.06161"]} +{"year":"2018","title":"Historical Web as a Tool for Analyzing Social Change","authors":["R Schroeder, N Brügger, J Cowls - Second International Handbook of Internet …, 2018"],"snippet":"… are themselves responsible for the curation and selection of what to archive (eg, Archive-It), and, second, publicly available collections, archived by “amateurs,” with no cultural heritage obligations (eg, The Archive Team …","url":["https://link.springer.com/content/pdf/10.1007/978-94-024-1202-4_24-1.pdf"]} +{"year":"2018","title":"Hourglass-Incremental Graph Processing on Heterogeneous Infrastructures","authors":["P Joaquim"],"snippet":"… On June 2016 Facebook reportes[13] an average of 1.13 billion daily active users. 
Another example is the largest World Wide Web hyperlink graph made publicly available by Common Crawl[14] that has over 3.5 billion web pages and 128 billion hyperlinks between them …","url":["https://pdfs.semanticscholar.org/92bf/f88f18c632327f7d503c9a7ab72cb433b7c1.pdf"]} +{"year":"2018","title":"How to Do Things with Translations","authors":["J Jacobs - 2018"],"snippet":"… 17 Page 18. as the Common Crawl corpus19 often have vocabulary sizes v exceeding 2 million, thus re … 19http://commoncrawl.org/ 20For a comprehensive survey of the most popular embedding algorithms, see Goldberg (2017). 18 Page 19 …","url":["https://www-cs.stanford.edu/~jjacobs3/Jacobs-How_to_Do_Things_with_Translations.pdf"]} +{"year":"2018","title":"Human versus automatic quality evaluation of NMT and PBSMT","authors":["D Shterionov, R Superbo, P Nagle, L Casanellas… - Machine Translation, 2018"],"snippet":"… 2015) is built with TED talks, EPPS, news commentary,8 and Common Crawl data; the NMT system they compare to (Luong and Manning 2015) is a pre-trained NMT system that was further improved with data provided by the IWSLT2015 organizers …","url":["https://link.springer.com/article/10.1007/s10590-018-9220-z"]} +{"year":"2018","title":"Hybrid Self-Attention Network for Machine Translation","authors":["K Song, T Xu, F Peng, J Lu - arXiv preprint arXiv:1811.00253, 2018"],"snippet":"… 2016) shared vocabulary. WMT14 English-German WMT14 EnglishGerman dataset (Buck, Heafield, and van Ooyen 2014) comprises about 4.5 million sentence pairs that are extracted from three corpora: Common …","url":["https://arxiv.org/pdf/1811.00253"]} +{"year":"2018","title":"Hypothesis Only Baselines in Natural Language Inference","authors":["A Poliak, J Naradowsky, A Haldar, R Rudinger… - arXiv preprint arXiv …, 2018"],"snippet":"… Following Conneau et al. 
(2017), we map the resulting to- kens to 300-dimensional GloVe vectors (Pennington et al., 2014) trained on 840 billion tokens from the Common Crawl, using the GloVe OOV vector for unknown words …","url":["https://arxiv.org/pdf/1805.01042"]} +{"year":"2018","title":"Identifying Semantic Divergences in Parallel Text without Annotations","authors":["Y Vyas, X Niu, M Carpuat - arXiv preprint arXiv:1803.11112, 2018","YVXNM Carpuat"],"snippet":"… On the CommonCrawl test set, the examples with disagreement are more diverse, ranging from Page 7. Divergence Detection OpenSubtitles Common Crawl Approach +P +R +F -P -R -F Overall F +P +R +F -P -R …","url":["https://arxiv.org/pdf/1803.11112","https://deeplearn.org/arxiv/30360/identifying-semantic-divergences-in-parallel-text-without-annotations"]} +{"year":"2018","title":"Identifying the Most Effective Feature Category in Machine Learning-based Phishing Website Detection","authors":["CL Tan, KL Chiew, N Musa, DHA Ibrahim - International Journal of Engineering & …, 2018"],"snippet":"… [31] “Common Crawl”, available online: http://commoncrawl.org/, last visit: 10.01.2017. 
[32] Selenium Project (2017), “Selenium WebDriver”, available online: http://www.seleniumhq.org/projects/webdriver/, last visit: 10.01.2017 …","url":["https://www.researchgate.net/profile/Choon_Lin_Tan/publication/329554643_Identifying_the_Most_Effective_Feature_Category_in_Machine_Learning-based_Phishing_Website_Detection/links/5c0f183e299bf139c74fb929/Identifying-the-Most-Effective-Feature-Category-in-Machine-Learning-based-Phishing-Website-Detection.pdf"]} +{"year":"2018","title":"Improved Text Analytics Transfer Learning","authors":["M Riemer, E Khabiri, R Goodwin - 2018"],"snippet":"… Our GRU model was fed a sequence of fixed 300 dimensional Glove vectors (Pennington et al., 2014), representing words based on analysis of 840 billion words from a common crawl of the internet, as the input xt for all tasks …","url":["https://openreview.net/pdf?id=HyggjiiMz6m"]} +{"year":"2018","title":"Improving Cross-Lingual Word Embeddings by Meeting in the Middle","authors":["Y Doval, J Camacho-Collados, L Espinosa-Anke… - arXiv preprint arXiv …, 2018"],"snippet":"… For Italian and German, we use the itWaC and sdeWaC corpora from the WaCky project (Baroni et al., 2009), containing 2 and 0.8 billion words, respectively.2 Lastly, for Finnish, we use the Common Crawl monolingual …","url":["https://arxiv.org/pdf/1808.08780"]} +{"year":"2018","title":"Improving Machine Translation of Educational Content via Crowdsourcing","authors":["M Behnke, AVM Barone, R Sennrich, V Sosoni…"],"snippet":"… et al., 2012) as distributed in OPUS (Tiedemann, 2012); ● OPUS European Central Bank (ECB); ● OPUS European Medicines Agency (EMEA); ● OPUS EU Bookshop; ● OPUS OpenSubtitles 7; ● WMT News Commentary; ● …","url":["http://www.lrec-conf.org/proceedings/lrec2018/pdf/855.pdf"]} +{"year":"2018","title":"Improving Neural Machine Translation with External Information","authors":["J Helcl"],"snippet":"… image descriptions, we selected similar sentence pairs from the parallel SDEWAC corpus (Faaß and 
Eckart, 2013) and German parts of WMT parallel corpora, such as EU Bookshop (Skadinš et al., 2014), News Commentary (Tiedemann, 2012), and CommonCrawl (Smith et al …","url":["https://pdfs.semanticscholar.org/5449/aa3ca855cee1a80fe5f6bc97f7c0742e3e99.pdf"]} +{"year":"2018","title":"Improving Personalized Consumer Health Search: Notebook for eHealth at CLEF 2018","authors":["H Yang, T Gonçalves - CEUR Workshop Proceedings: Working Notes of CLEF, 2018"],"snippet":"… Page 4. 3.2 Dataset The dataset used in CLEF 2018 CHS task consisted of web pages acquired from the CommonCrawl. By submitting the task queries to the Microsoft Bing APIs repeatedly over a period of time, an initial …","url":["http://ceur-ws.org/Vol-2125/paper_195.pdf"]} +{"year":"2018","title":"Improving the learning of chemical-protein interactions from literature using transfer learning and word embeddings.","authors":["P Corbett, J Boyle"],"snippet":"… GloVe offers both some public pre-trained embeddings (based on Wikipedia and Common Crawl), and also the software to compile your own – in previous work (6) we had success with the public embeddings, but here, we compiled our own …","url":["http://www.biocreative.org/media/store/files/2018/BC6_track5_10.pdf"]} +{"year":"2018","title":"Improving Word Embedding Compositionality using Lexicographic Definitions","authors":["T Scheepers, E Gavves, E Kanoulas - 2017"],"snippet":"… We used GloVe representations that where trained on the Common Crawl which has 840B tokens and a vocabulary of 2.2M. 
fastText uses a training algorithm that is based on skip-gram, however it represents each word as a bag of character n-grams …","url":["https://thijs.ai/papers/scheepers-gavves-kanoulas-improving-embedding-lexicographic.pdf"]} +{"year":"2018","title":"Improving Word Embedding Coverage in Less-Resourced Languages Through Multi-Linguality and Cross-Linguality: A Case Study with Aspect-Based Sentiment …","authors":["MS Akhtar, P Sawant, S Sen, A Ekbal, P Bhattacharyya - ACM Transactions on Asian …, 2018","S SEN, A EKBAL"],"snippet":"… However, these systems require a large corpus for training and building the models (eg, the pretrained Google News Word2Vec model was trained on 100 billion words, pretrained common-crawl GloVe model was trained on 840 billion words) …","url":["https://dl.acm.org/citation.cfm?id=3273931","https://dl.acm.org/doi/fullHtml/10.1145/3273931"]} +{"year":"2018","title":"Incorporating Statistical Machine Translation Word Knowledge into Neural Machine Translation","authors":["X Wang, Z Tu, M Zhang - IEEE/ACM Transactions on Audio, Speech, and …, 2018"],"snippet":"Page 1. 2329-9290 (c) 2018 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/8421063/"]} +{"year":"2018","title":"Incorporating the Structure of the Belief State in End-to-End Task-Oriented Dialogue Systems","authors":["L Shu, P Molino, M Namazifar, B Liu, H Xu, H Zheng…"],"snippet":"Page 1. 
Incorporating the Structure of the Belief State in End-to-End Task-Oriented Dialogue Systems Lei Shu∗1, Piero Molino2, Mahdi Namazifar3, Bing Liu1, Hu Xu1, Huaixiu Zheng3, and Gokhan Tur3 1University …","url":["http://alborz-geramifard.com/workshops/nips18-Conversational-AI/Papers/18convai-Incorporating%20the%20Structure.pdf"]} +{"year":"2018","title":"Inducing Grammars with and for Neural Machine Translation","authors":["K Tran, Y Bisk - arXiv preprint arXiv:1805.10850, 2018","Y Bisk, K Tran - Proceedings of the 2nd Workshop on Neural Machine …, 2018"],"snippet":"… Table 1 shows the statistics of the data. For En↔De, we use a concatenation of Europarl, Common Crawl, Rapid corpus of EU press releases, and News Commentary v12 … For En↔Ru, we use Common Crawl, News Commentary v12, and Yandex Corpus …","url":["http://www.aclweb.org/anthology/W18-2704","https://arxiv.org/pdf/1805.10850"]} +{"year":"2018","title":"Inducing Implicit Relations from Text Using Distantly Supervised Deep Nets","authors":["M Glass, A Gliozzo, O Hassanzadeh… - International Semantic Web …, 2018"],"snippet":"… We also evaluate on a web-scale knowledge base population benchmark that we called CC-DBP 5 . It combines the text of Common Crawl 6 with the triples from 298 frequent relations in DBpedia [1]. Mentions of DBpedia entities …","url":["https://link.springer.com/chapter/10.1007/978-3-030-00671-6_3"]} +{"year":"2018","title":"InferLite: Simple Universal Sentence Representations from Natural Language Inference Data","authors":["J Kiros, W Chan - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… (2018) describe a contextual gating Feature dataset dim method Glove Common Crawl 300 News Google News 500 CBOW Query Google Search 800 CBOW Table 1: Comparison of word representations used. 
method for word embedding selection …","url":["http://www.aclweb.org/anthology/D18-1524"]} +{"year":"2018","title":"Inferring gender of Reddit users","authors":["E Vasilev - 2018"],"snippet":"Page 1. People and Knowledge Networks WeST Fachbereich 4: Informatik Institute for Web Science and Technologies Inferring gender of Reddit users Masterarbeit zur Erlangung des Grades einer Master of Science (M.Sc.) im Studiengang Web Science …","url":["https://kola.opus.hbz-nrw.de/files/1619/Master_thesis_Vasilev.pdf"]} +{"year":"2018","title":"Inferring Missing Categorical Information in Noisy and Sparse Web Markup","authors":["N Tempelmeier, E Demidova, S Dietze - arXiv preprint arXiv:1803.00446, 2018"],"snippet":"… information. For instance, from 26 million nodes describing events within the Common Crawl in 2016, 59% of nodes provide less than six statements and only 257,000 nodes (0.96%) are typed with more specific event subtypes …","url":["https://arxiv.org/pdf/1803.00446"]} +{"year":"2018","title":"Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment","authors":["T Vu, V Shwartz - arXiv preprint arXiv:1804.08845, 2018"],"snippet":"… We used 300-dimensional pre-trained word em- beddings, namely, GloVe (Pennington et al., 2014b) containing 1.9M word vectors trained on a corpus of web data from Common Crawl (42B to- kens),1 and Word2vec (Mikolov …","url":["https://arxiv.org/pdf/1804.08845"]} +{"year":"2018","title":"Intent Generation for Goal-Oriented Dialogue Systems based on Schema. 
org Annotations","authors":["U Şimşek, D Fensel - arXiv preprint arXiv:1807.01292, 2018"],"snippet":"… These vectors are trained based on existing ConceptNet 5.5 knowledge graph3 data, word2vec embeddings [11] trained with 300B words from Google News Dataset and GloVe embeddings [15] trained with 840B words from …","url":["https://arxiv.org/pdf/1807.01292"]} +{"year":"2018","title":"Interactions and influence of world painters from the reduced Google matrix of Wikipedia networks","authors":["SE Zant, K Jaffrès-Runser, KM Frahm… - arXiv preprint arXiv …, 2018"],"snippet":"… At present, directed networks of real systems can be very large (about 4.2 million articles for the English Wikipedia edition in 2013 [16] or 3.5 billion web pages for a publicly accessible web crawl that was gathered by the Common Crawl Foundation in 2012 [25]) …","url":["https://arxiv.org/pdf/1807.01255"]} +{"year":"2018","title":"Investigating open data portals automatically: a methodology and some illustrations","authors":["AS Correa, PO Zander, FSC da Silva - Proceedings of the 19th Annual International …, 2018"],"snippet":"Page 1. 
Investigating open data portals automatically: a methodology and some illustrations Andreiwid Sheffer Correa Federal Institute of Sao Paulo Campinas, Sao Paulo, Brazil University of Sao Paulo Sao Paulo, Brazil andreiwid@ifsp.edu.br …","url":["https://dl.acm.org/citation.cfm?id=3209292"]} +{"year":"2018","title":"Iterative reasoning with bi-directional attention flow for machine comprehension","authors":["ADA Gupta"],"snippet":"… [2016] pre-trained on the common crawl dataset … 4.2 Implementation Details We used pre-trained case sensitive 300-dimensional pre-trained word embeddings trained on Common Crawl and Wikipedia using fastText …","url":["http://web.stanford.edu/class/cs224n/reports/6908250.pdf"]} +{"year":"2018","title":"Iterative Recursive Attention Model for Interpretable Sequence Classification","authors":["M Tutek, J Šnajder - arXiv preprint arXiv:1808.10503, 2018"],"snippet":"… We clip the global norm of the gradients to 1.0 and set weight decay to 0.00003. We use 300-dimensional GloVe word embeddings trained on the Common Crawl corpus and 100-dimensional character embeddings. We follow the recommendation of Mu et al …","url":["https://arxiv.org/pdf/1808.10503"]} +{"year":"2018","title":"KnowMore-Knowledge Base Augmentation with Structured Web Markup","authors":["R Yua, U Gadirajua, B Fetahua, O Lehmbergb…"],"snippet":"Page 1. Semantic Web 0 (0) 1 1 IOS Press KnowMore - Knowledge Base Augmentation with Structured Web Markup Editor(s): Name Surname, University, Country Solicited review(s): Name Surname, University, Country Open review(s): Name Surname, University, Country …","url":["http://www.semantic-web-journal.net/system/files/swj1773.pdf"]} +{"year":"2018","title":"L'optimisation des plongements de mots pour le français: une application de la classification des phrases","authors":["J Park - Actes de la conférence Traitement Automatique de la …"],"snippet":"… c ATALA 2018 283 Page 299. 
Wikipedia Europarl New crawls Common crawl Giga in-domain total 675M 64M 223M 89M 793M 664M 2.5 B TABLE 1–The size (# of token) of corpora for embedding 2.3 Lemma-aware …","url":["https://hal.archives-ouvertes.fr/hal-01843560/document#page=296"]} +{"year":"2018","title":"LA3: A Scalable Link-and Locality-Aware Linear Algebra-Based Graph Analytics System","authors":["Y Ahmad, O Khattab, A Malik, A Musleh, M Hammoud… - Proceedings of the VLDB …, 2018"],"snippet":"Page 1. LA3: A Scalable Linkand Locality-Aware Linear Algebra-Based Graph Analytics System Yousuf Ahmad, Omar Khattab, Arsal Malik, Ahmad Musleh, Mohammad Hammoud Carnegie Mellon University in Qatar 1myahmad …","url":["http://www.vldb.org/pvldb/vol11/p920-ahmad.pdf"]} +{"year":"2018","title":"Language Modeling at Scale","authors":["M Patwary, M Chabbi, H Jun, J Huang, G Diamos… - arXiv preprint arXiv …, 2018"],"snippet":"… The figure shows four datasets: 1-Billion word [6] (1b), Gutenberg [7] (gb), Common crawl [8] (cc), and Amazon review [9] (ar) … The 4 lines correspond to 4 datasets: one Billion word (1b), Gutenberg (gb), Common Crawl (cc), and Amazon Review (ar). 
TABLE I DATASETS …","url":["https://arxiv.org/pdf/1810.10045"]} +{"year":"2018","title":"Language use shapes cultural norms: Large scale evidence from gender","authors":["M Lewis, G Lupyan"],"snippet":"… Results Figure 2 shows the effect size measures derived from the English Wikipedia corpus plotted against effect size estimates reported by CBN from two different models (trained on the Common Crawl and Google News corpora) … model Common Crawl (GloVe) …","url":["http://home.uchicago.edu/~mollylewis/papers/gender_cogsci_2018.pdf"]} +{"year":"2018","title":"Large scale distributed neural network training through online distillation","authors":["AT Passos, G Pereyra, G Hinton, G Dahl, R Ormandi… - 2018","R Anil, G Pereyra, A Passos, R Ormandi, GE Dahl… - arXiv preprint arXiv …, 2018"],"snippet":"… 2http://commoncrawl.org/2017/07/june-2017-crawl-archive-now-available/ 3It is still infeasible to train on all of this data set with large neural language models, so our experiments did not … We use the ADAM optimizer Kingma …","url":["https://arxiv.org/pdf/1804.03235","https://research.google.com/pubs/pub46642.html"]} +{"year":"2018","title":"Large-Scale Analysis of Style Injection by Relative Path Overwrite","authors":["S Arshad, SA Mirheidari, T Lauinger, B Crispo, E Kirda… - 2018"],"snippet":"… using RPO. We extract pages using relativepath stylesheets from the Common Crawl dataset [9], automatically test if style directives can be injected using RPO, and determine whether they are interpreted by the browser. Out …","url":["http://www.ccs.neu.edu/home/arshad/publications/www2018rpo.pdf"]} +{"year":"2018","title":"Latent Question Interpretation Through Parameter Adaptation Using Stochastic Neuron","authors":["T Parshakova, DS Kim"],"snippet":"… In effect, it led to maximally effective behaviour in the question-answering task. 
6 Experiments Implementation Details For the word embeddings we use GloVe embeddings pretrained on the 840B Common Crawl corpus [Pennington et al., 2014] …","url":["http://ceur-ws.org/Vol-2134/paper07.pdf"]} +{"year":"2018","title":"Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings","authors":["K Al-Sabahi, Z Zuping, Y Kang - arXiv preprint arXiv:1807.02748, 2018"],"snippet":"… Training is performed on aggregated global word-word co-occurrence statistics from a corpus. In this work, we use the one trained on Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B.300d.zip. 3.2. LSA algorithm …","url":["https://arxiv.org/pdf/1807.02748"]} +{"year":"2018","title":"LATEX GLOVES: Protecting Browser Extensions from Probing and Revelation Attacks","authors":["A Sjösten, S Van Acker, P Picazo-Sanchez, A Sabelfeld - Power, 2018"],"snippet":"Page 1. LATEX GLOVES: Protecting Browser Extensions from Probing and Revelation Attacks Alexander Sjösten∗, Steven Van Acker∗, Pablo Picazo-Sanchez and Andrei Sabelfeld Chalmers University of Technology …","url":["http://www.cse.chalmers.se/~andrei/ndss19.pdf"]} +{"year":"2018","title":"Learning Emotion-enriched Word Representations","authors":["A Agrawal, A An, M Papagelis - Proceedings of the 27th International Conference on …, 2018"],"snippet":"… GloVe 6B Wiki + Gigaword 6B tokens 400K GloVe 42B Common Crawl 42B tokens 1.9M … Paired t-tests using the results on all four datasets indicate EWE is significantly better than all the other methods with p-values < 0.02. 
uncased …","url":["http://www.aclweb.org/anthology/C18-1081"]} +{"year":"2018","title":"Learning Multimodal Word Representation via Dynamic Fusion Methods","authors":["S Wang, J Zhang, C Zong - arXiv preprint arXiv:1801.00532, 2018"],"snippet":"… Experimental Setup Datasets We use 300-dimensional GloVe vectors3 which are trained on the Common Crawl corpus consisting of 840B tokens and a vocabulary of 2.2M words. Our source of visual vectors are collected from ImageNet (Russakovsky et al …","url":["https://arxiv.org/pdf/1801.00532"]} +{"year":"2018","title":"Learning Neural Emotion Analysis from 100 Observations: The Surprising Effectiveness of Pre-Trained Word Representations","authors":["S Buechel, J Sedoc, HA Schwartz, L Ungar - arXiv preprint arXiv:1810.10949, 2018"],"snippet":"… For ANET, we rely on the FastText embeddings trained on Common Crawl (Mikolov et al., 2018). For ANPST and MAS, we use the FastText embeddings by Grave et al. (2018) trained on the respective Wikipedias … ANET English …","url":["https://arxiv.org/pdf/1810.10949"]} +{"year":"2018","title":"Learning Patent Speak: Investigating Domain-Specific Word Embeddings","authors":["J Risch, R Krestel"],"snippet":"… used to train word embeddings. It contains more than twice the number of tokens of the English Wikipedia (16 billion) and is only exceeded by the Common Crawl dataset, which consists of 600 billion tokens. We assume that the …","url":["https://www.researchgate.net/profile/Julian_Risch/publication/327317674_Learning_Patent_Speak_Investigating_Domain-Specific_Word_Embeddings/links/5b87ea484585151fd13c55ce/Learning-Patent-Speak-Investigating-Domain-Specific-Word-Embeddings.pdf"]} +{"year":"2018","title":"Learning representations for sentiment classification using Multi-task framework","authors":["H Meisheri, H Khadilkar - Proceedings of the 9th Workshop on Computational …, 2018"],"snippet":"… First set of embeddings are generated by using three different pre-trained embeddings. 
• Pre-trained embeddings which are generated from common crawl corpus have 6 Billion to- kens which help in a better encoding of the …","url":["http://www.aclweb.org/anthology/W18-6244"]} +{"year":"2018","title":"Learning Robust Joint Representations for Multimodal Sentiment Analysis","authors":["H Pham, PP Liang, T Manzini, LP Morency, B Póczos"],"snippet":"Page 1. Learning Robust Joint Representations for Multimodal Sentiment Analysis Hai Pham*, Paul Pu Liang*, Thomas Manzini, Louis-Philippe Morency, Barnabás Póczos School of Computer Science, Carnegie Mellon University …","url":["http://www.cs.cmu.edu/~pliang/papers/nips2018ws_translations.pdf"]} +{"year":"2018","title":"Learning semantic similarity in a continuous space","authors":["M Deudon - Advances in Neural Information Processing Systems, 2018"],"snippet":"… that connect those parts. Word2Vec [5] and GloVe [6] are semantic embeddings of words based on their context and frequency of co-occurence in a text corpus such as Wikipedia, Google News or Common-Crawl. The key idea …","url":["http://papers.nips.cc/paper/7377-learning-semantic-similarity-in-a-continuous-space.pdf"]} +{"year":"2018","title":"Learning to Rank Query Graphs for Complex Question Answering over Knowledge Graphs","authors":["G Maheshwari, P Trivedi, D Lukovnikov, N Chakraborty… - arXiv preprint arXiv …, 2018"],"snippet":"… All our models pe- formed poorly on QALD-7, with the best being DAM dot. 4in keeping with the train-test splits suggested by the authors. 5Trained over the common Crawl corpus. 
300 dimensions, and with 1.9M tokens in the …","url":["https://arxiv.org/pdf/1811.01118"]} +{"year":"2018","title":"Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets","authors":["PK Sarma - Proceedings of the 2018 Conference of the North …, 2018"],"snippet":"… a very different distribution from generic corpora, so pre-trained generic word embeddings such as those trained on Common Crawl or Wikipedia … d2 be the matrix of generic word embeddings (obtained by, eg, running the GloVe …","url":["http://www.aclweb.org/anthology/N18-4007"]} +{"year":"2018","title":"Learning Word Meta-Embeddings by Autoencoding","authors":["C Bao, D Bollegala"],"snippet":"… (2014). The released version we used contains 1,917,494 word embeddings with dimensionality of 300 that is trained on Common Crawl dataset. The intersection of two vocabularies contains 154,076 words for which we create meta-embeddings …","url":["http://danushka.net/papers/aeme.pdf"]} +{"year":"2018","title":"Learning Word Vectors for 157 Languages","authors":["E Grave, P Bojanowski, P Gupta, A Joulin, T Mikolov - arXiv preprint arXiv …, 2018"],"snippet":"… In this work, we contribute word vectors trained on Wikipedia and the Common Crawl, as well as three new analogy datasets to evaluate these … We also show that using the common crawl data, while being noisy, can lead to models with larger coverage, and better models for …","url":["https://arxiv.org/pdf/1802.06893"]} +{"year":"2018","title":"Lecture Video Search Engine Using Hadoop MapReduce","authors":["PP Deolikar - 2017"],"snippet":"… While doing so, they dont need to bother. about crawling the data. Gil Elbaz, former Google employee funded Common CrawlCommonCrawl is implemented using Hadoop MapReduce Technology. Their strategy is to crawl. 
broadly and frequently, to cover all the web pages …","url":["http://search.proquest.com/openview/94314da495ab40e1ea0d1c2490d6d62a/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2018","title":"Legal Deposit Web Archives and the Digital Humanities: A Universe of Lost Opportunity?","authors":["P Gooding, M Terras, L Berube - 2018"],"snippet":"… Restricted deposit library ac- cess requires researchers to look elsewhere for portable web data: by undertaking their own web crawls, or by utilising datasets from Common Crawl (http://commoncrawl. org/) and the Internet Archive (https://archive.org) …","url":["http://eprints.gla.ac.uk/168229/1/168229.pdf"]} +{"year":"2018","title":"Lessons from Natural Language Inference in the Clinical Domain","authors":["A Romanov, C Shivade - arXiv preprint arXiv:1808.06752, 2018"],"snippet":"… fastText[Wiki]: fastText embeddings (Bo- janowski et al., 2017), trained on Wikipedia. 7http://commoncrawl.org/ Page 7. • fastText[CC]: fastText embeddings, trained on Common Crawl. Furthermore, we trained fastText …","url":["https://arxiv.org/pdf/1808.06752"]} +{"year":"2018","title":"Lessons Learned and Research Agenda for Big Data Integration of Product Specifications (Discussion Paper)","authors":["L Barbosa, V Crescenzi, XL Dong, P Merialdo, F Piai…"],"snippet":"… About 68% of the sources discovered by our approach were not present in Common Crawl … these sources were product pages: on a sample set of 12 websites where Common Crawl presented … DEXTER) is an extension …","url":["http://ceur-ws.org/Vol-2161/paper29.pdf"]} +{"year":"2018","title":"Leveraging Meta-Embeddings for Bilingual Lexicon Extraction from Specialized Comparable Corpora","authors":["A Hazem, E Morin - Proceedings of the 27th International Conference on …, 2018"],"snippet":"… Common crawl corpus (CC) is an open repository of data collected over 7 years of web crawling sets of raw web page data and text extracts7. 
4 www.sciencedirect.com/ 5 www.ttc-project.eu/index.php/releases-publications …","url":["http://www.aclweb.org/anthology/C18-1080"]} +{"year":"2018","title":"Leveraging textual properties of bug reports to localize relevant source files","authors":["R Gharibi, AH Rasekh, MH Sadreddini… - Information Processing & …, 2018"],"snippet":"Skip to main content …","url":["https://www.sciencedirect.com/science/article/pii/S0306457318301092"]} +{"year":"2018","title":"Lifelong Domain Word Embedding via Meta-Learning","authors":["H Xu, B Liu, L Shu, PS Yu - arXiv preprint arXiv:1805.09991, 2018"],"snippet":"… Page 6. GloVe.6B: This is the lower-cased embeddings pre-trained from Wikipedia and Gigaword 5, which has 6 billion tokens. GloVe.840B: This is the cased embeddings pre-trained from Common Crawl corpus, which has 840 billion tokens …","url":["https://arxiv.org/pdf/1805.09991"]} +{"year":"2018","title":"Linguistic Features of Helpfulness in Automated Support for Creative Writing","authors":["M Roemmele, AS Gordon"],"snippet":"… tween the context and suggestion in terms of their Jaccard similarity (proportion of overlapping words; Metric 8), GloVe word embeddings7 trained on the Common Crawl corpus (Metric 9), and sentence (skip-thought) …","url":["http://people.ict.usc.edu/~gordon/publications/NAACL-WS18B.PDF"]} +{"year":"2018","title":"List Intersection for Web Search: Algorithms, Cost Models, and Optimizations","authors":["S Kim, T Lee, S Hwang, S Elnikety - Proceedings of the VLDB Endowment"],"snippet":"Page 1. 
List Intersection for Web Search: Algorithms, Cost Models, and Optimizations Sunghwan Kim POSTECH sunghwan08@gmail.com Taesung Lee IBM Research AI Taesung.Lee@ibm.com Seung-won …","url":["http://www.vldb.org/pvldb/vol12/p1-kim.pdf"]} +{"year":"2018","title":"LMU Munich's Neural Machine Translation Systems at WMT 2018","authors":["M Huck, D Stojanovski, V Hangya, A Fraser - Proceedings of the Third Conference on …, 2018"],"snippet":"… BLEU (Pa- pineni et al., 2002)) of different system setups after several development stages is presented in the top section of Table 1. WMT_parallel de- notes the Europarl, News Commentary, and Common Crawl parallel …","url":["http://www.aclweb.org/anthology/W18-6446"]} +{"year":"2018","title":"Local Homology of Word Embeddings","authors":["T Temčinas - arXiv preprint arXiv:1810.10136, 2018"],"snippet":"Page 1. Local Homology of Word Embeddings Tadas Temcinas 1 Abstract Topological data analysis (TDA) has been widely used to make progress on a number of problems. However, it seems that TDA application in natural …","url":["https://arxiv.org/pdf/1810.10136"]} +{"year":"2018","title":"Local-Global Graph Clustering with Applications in Sense and Frame Induction","authors":["D Ustalov, A Panchenko, C Biemann, SP Ponzetto - arXiv preprint arXiv:1808.06696, 2018"],"snippet":"Page 1. Local-Global Graph Clustering with Applications in Sense and Frame Induction Dmitry Ustalov∗ University of Mannheim Alexander Panchenko∗∗ University of Hamburg Chris Biemann University of Hamburg Simone Paolo Ponzetto University of Mannheim …","url":["https://arxiv.org/pdf/1808.06696"]} +{"year":"2018","title":"Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion","authors":["A Joulin, P Bojanowski, T Mikolov, H Jégou, E Grave - Proceedings of the 2018 …, 2018"],"snippet":"… word vectors of Mikolov et al. (2018) to Spanish. 
It leads to an improvement of 1% both for vectors trained on Common Crawl (85% to 86%) and Wikipedia + News (87% to 88%). 5 Conclusion This paper shows that minimizing …","url":["http://www.aclweb.org/anthology/D18-1330"]} +{"year":"2018","title":"M1. 2–Corpora for the Machine Translation Engines","authors":["C Espana-Bonet, J Stiller, S Henning - 2018"],"snippet":"… Table 5: Size of the parallel corpora obtained from the different sources described in Section 2.1: United Nations (UN), Europarl V7 (EP), Common Crawl (ComCrawl), EMEA and Scielo. Figures of the PubPsych corpus are …","url":["https://www.clubs-project.eu/assets/publications/project/CLUBS_MTcorpora.v3.pdf"]} +{"year":"2018","title":"MAC: Mining Activity Concepts for Language-based Temporal Localization","authors":["R Ge, J Gao, K Chen, R Nevatia - arXiv preprint arXiv:1811.08925, 2018"],"snippet":"… We only keep the twoword tuple as our VO. ie open door, wash hands. After lemmatizing every word, we use the GloVe [22] word en- coder which is pre-trained on the Common Crawl data to get each word a 300-dimensional word embedding …","url":["https://arxiv.org/pdf/1811.08925"]} +{"year":"2018","title":"Machine Learning with and for Semantic Web Knowledge Graphs","authors":["H Paulheim - Reasoning Web International Summer School, 2018"],"snippet":"… To that end, it scans the Common Crawl Web corpus 20 for so-called Hearst patterns, eg X such as Y, to infer relations like X skos:broader Y. The graph comes with very detailed provenance data, including the patterns used …","url":["https://link.springer.com/chapter/10.1007/978-3-030-00338-8_5"]} +{"year":"2018","title":"Machine Reading Comprehension on SQuAD","authors":["DMT Tan"],"snippet":"… The end-to-end neural architecture consists of the following components. Page 3. Input vectors. 
For each word in context (C) and question (Q), we use the 300-dim GloVe embeddings pretrained on the 840B Common Crawl corpus [5] as input vectors …","url":["http://web.stanford.edu/class/cs224n/reports/6909266.pdf"]} +{"year":"2018","title":"MAJE Submission to the WMT2018 Shared Task on Parallel Corpus Filtering","authors":["M Fomicheva, J González-Rubio - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… We also conducted some initial experiments us- ing the Common Crawl corpus, under the rationale that it would be closer to the domain of the noisy data from the Paracrawl corpus. However, Common Crawl data …","url":["http://www.aclweb.org/anthology/W18-6476"]} +{"year":"2018","title":"Making Caches Work for Graph Analytics","authors":["Y Zhang, V Kiriansky, C Mendis, S Amarasinghe…"],"snippet":"… The system has 128GB of DDR3-1600 memory. The machine runs with Transparent Huge Pages (THP) enabled. Data Sets: We used the social networks, including LiveJournal [10] and Twitter [8], the SD web graph from 2012 common crawl [11], and the RMAT graphs …","url":["http://groups.csail.mit.edu/commit/papers/2017/zhang-bigdata17-cagra.pdf"]} +{"year":"2018","title":"Manifold Scholarship: Hybrid Publishing in a Print/Digital Era","authors":["MK Gold, J Karlin, Z Davis, P Gooding, M Terras… - Digital Humanities 2018 …, 2018"],"snippet":"… Restricted deposit library ac- cess requires researchers to look elsewhere for portable web data: by undertaking their own web crawls, or by utilising datasets from Common Crawl (http://commoncrawl. org/) and the Internet Archive (https://archive. org) …","url":["https://pdfs.semanticscholar.org/7cf9/f2db34ac1b5791de374b78b2fdbe97dc27af.pdf#page=590"]} +{"year":"2018","title":"Manual vs Automatic Bitext Extraction","authors":["A Makazhanov, B Myrzakhmetov, Z Assylbekov - Proceedings of the Eleventh …, 2018"],"snippet":"… Biometrika, 52(3/4):591–611. 
Smith, JR, Saint-Amand, H., Plamada, M., Koehn, P., Callison-Burch, C., and Lopez, A. (2013). Dirt cheap web-scale parallel text from the common crawl. In ACL (1), pages 1374–1383. Tiedemann, J. (2007) …","url":["http://www.aclweb.org/anthology/L18-1606"]} +{"year":"2018","title":"Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings","authors":["M Artetxe, H Schwenk - arXiv preprint arXiv:1811.01136, 2018"],"snippet":"Page 1. Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings Mikel Artetxe University of the Basque Country (UPV/EHU)∗ mikel.artetxe@ehu.eus Holger Schwenk Facebook AI Research schwenk@fb.com Abstract …","url":["https://arxiv.org/pdf/1811.01136"]} +{"year":"2018","title":"MEADE: Towards a Malicious Email Attachment Detection Engine","authors":["EM Rudd, R Harang, J Saxe - arXiv preprint arXiv:1804.08162, 2018"],"snippet":"… To this end, we collected a dataset of over 5 million malicious/benign Microsoft Office documents from VirusTotal for evaluation as well as a dataset of benign Microsoft Office documents from the Common Crawl …","url":["https://arxiv.org/pdf/1804.08162"]} +{"year":"2018","title":"Measuring Semantic Coherence of a Conversation","authors":["S Vakulenko, M de Rijke, M Cochez, V Savenkov… - arXiv preprint arXiv …, 2018"],"snippet":"… We utilise two publicly available word embedding models: GloVe embeddings pretrained on the Common Crawl corpus (2.2M words, 300 dimensions)14 and Word2Vec model trained on the Google News corpus …","url":["https://arxiv.org/pdf/1806.06411"]} +{"year":"2018","title":"Measuring the Effects of Data Parallelism on Neural Network Training","authors":["CJ Shallue, J Lee, J Antognini, J Sohl-Dickstein… - arXiv preprint arXiv …, 2018"],"snippet":"… For LM1B, we chose the Transformer model and an LSTM model. 
For Common Crawl, we chose the Transformer model … Common Crawl Text Language modeling ∼25.8 billion sentences Cross entropy error Table …","url":["https://arxiv.org/pdf/1811.03600"]} +{"year":"2018","title":"Meet the Data","authors":["K Balog - Entity-Oriented Search, 2018"],"snippet":"… Size refers to compressed data ahttps://lemurproject.org/clueweb09/ bhttps://lemurproject.org/clueweb12/ chttp://commoncrawl.org/2017/06/may … Common Crawl Common Crawl5 is a nonprofit organization that regularly …","url":["https://link.springer.com/content/pdf/10.1007/978-3-319-93935-3_2.pdf"]} +{"year":"2018","title":"Memory Fusion Network for Multi-view Sequential Learning","authors":["A Zadeh, PP Liang, N Mazumder, S Poria, E Cambria… - arXiv preprint arXiv …, 2018"],"snippet":"… The Glove embeddings used are 300 dimensional word embeddings trained on 840 billion tokens from the common crawl dataset, resulting in a sequence of dimension T × dxtext = T × 300 after alignment. The timing of word utterances is extracted using P2FA forced aligner …","url":["https://arxiv.org/pdf/1802.00927"]} +{"year":"2018","title":"Merging Search Results Generated by Multiple Query Variants Using Data Fusion","authors":["N Motlogelwa, T Leburu-Dingalo, E Thuma - CEUR Workshop Proceedings: Working …, 2018"],"snippet":"… In Section 3.2 we describe the queries used for retrieval. 3.1 Document Collection “The document collection used in CLEF 2018 consists of web pages acquired from the CommonCrawl. An initial list of websites was identified for acquisition …","url":["http://ceur-ws.org/Vol-2125/paper_194.pdf"]} +{"year":"2018","title":"METHODS AND SYSTEMS FOR QUERY SEGMENTATION","authors":["AG Kale, T Taula, A Srivastava, S Hewavitharana - US Patent App. 15/681,663, 2018"],"snippet":"… This process may be repeated but using pre-trained GloVe vectors on common crawl and facebook fasttext pretrained model over a Wikipedia corpus with 2.5 M word vocabulary. 
Table 2 shows the experimental results of this analysis …","url":["http://www.freepatentsonline.com/y2018/0329999.html"]} +{"year":"2018","title":"microNER: A Micro-Service for German Named Entity Recognition based on BiLSTM-CRF","authors":["G Wiedemann, R Jindal, C Biemann - arXiv preprint arXiv:1811.02902, 2018"],"snippet":"… It is rather determined by the fact that it was trained on several billion tokens of the Common Crawl corpus.2 To evaluate the contribution of different German word … 2http://commoncrawl org 3For fastText, we use the publicly available German model from Bojanowski et al …","url":["https://arxiv.org/pdf/1811.02902"]} +{"year":"2018","title":"Mining Training Data for Language Modeling Across the World's Languages","authors":["M Prasad, T Breiner, D van Esch - 2018"],"snippet":"… 64 Page 5. [16] Common crawlCommon Crawl Foundation. Ac- cessed: June 4, 2018. [Online]. Available: https://www.commoncrawl.org [17] UL Abteilung Automatische Sprachverarbeitung. Corpora collection. Deutscher …","url":["https://www.isca-speech.org/archive/SLTU_2018/pdfs/Manasa.pdf"]} +{"year":"2018","title":"Miracl at Clef 2018: Consumer Health Search Task","authors":["S ZAYANI, N KSENTINI, M TMAR, F GARGOURI - CEUR Workshop Proceedings …, 2018"],"snippet":"… concepts. Page 6. 4 Resources employed 4.1 Datasets The document collection used in CLEFeHealth 2018 is composed of web pages acquired from the CommonCrawl An initial list of websites was identified for acquisition. The …","url":["http://ceur-ws.org/Vol-2125/paper_141.pdf"]} +{"year":"2018","title":"MIsA: Multilingual\" IsA\" Extraction from Corpora","authors":["S Faralli, E Lefever, S Paolo Ponzetto - the Eleventh International Conference on …, 2018"],"snippet":"… 2016). 
Our MIsA is an extension of the WebIsADb framework (Seitner et al., 2016) - a publicly available database with more than 400 million English hypernymy relations ex- tracted from the CommonCrawl web corpus - where: 1 …","url":["https://biblio.ugent.be/publication/8562721/file/8562722"]} +{"year":"2018","title":"Mitigating Bottlenecks in Wide Area Data Analytics via Machine Learning","authors":["H Wang, B Li - IEEE Transactions on Network Science and …, 2018"],"snippet":"Page 1. 2327-4697 (c) 2018 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["http://ieeexplore.ieee.org/abstract/document/8319505/"]} +{"year":"2018","title":"Mixture of Expert/Imitator Networks: Scalable Semi-supervised Learning Framework","authors":["S Kiyono, J Suzuki, K Inui - arXiv preprint arXiv:1810.05788, 2018"],"snippet":"… in DNNs has been gaining the performance. For example, several studies have utilized unlabeled 1http://commoncrawl.org arXiv:1810.05788v1 [cs.CL] 13 Oct 2018 Page 2. data as additional training data, which essentially …","url":["https://arxiv.org/pdf/1810.05788"]} +{"year":"2018","title":"Modeling Empathy and Distress in Reaction to News Stories","authors":["S Buechel, A Buffone, B Slaff, L Ungar, J Sedoc - arXiv preprint arXiv:1808.10399, 2018"],"snippet":"… problems. The input to our models is based on word embeddings, namely the publicly available FastText embeddings which were trained on Common Crawl (≈600B tokens) (Bojanowski et al., 2017; Mikolov et al., 2018). Ridge …","url":["https://arxiv.org/pdf/1808.10399"]} +{"year":"2018","title":"Modeling voice calls to improve an outcome of a call between a representative and a customer","authors":["R Raanani, R Levy, MY Breakstone - US Patent App. 16/017,646, 2018","R Raanani, R Levy, MY Breakstone - US Patent App. 
16/689,688, 2020"],"snippet":"… At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (eg, CommonCrawl), as well as freely …","url":["https://patentimages.storage.googleapis.com/89/79/27/c037a0d312fd5d/US20180309873A1.pdf","https://patents.google.com/patent/US20200092420A1/en"]} +{"year":"2018","title":"Multi-task Projected Embedding for Igbo","authors":["I Ezeani, M Hepple, I Onyenwe, C Enemuo - International Workshop on Temporal …, 2018"],"snippet":"… igSwproj from same as igWkproj but with subword information. igCrlproj from fastText Common Crawl dataset. Table 3 shows the vocabulary lengths (\\(Vocabs^L\\)), and the dimensions (Dimension) of each of the models used in our experiments …","url":["https://link.springer.com/chapter/10.1007/978-3-030-00794-2_31"]} +{"year":"2018","title":"Multi-turn QA: A RNN Contextual Approach to Intent Classification for Goal-oriented Systems","authors":["M Mensio, G Rizzo, M Morisio - Companion of the The Web Conference 2018 on The …, 2018"],"snippet":"… 8. Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram Counts and Language Models from the Common Crawl Proceedings of the Language Resources and Evaluation Conference (LREC), Vol. Vol. 2. Citeseer, Reykjavik, Iceland, 4 …","url":["https://dl.acm.org/citation.cfm?id=3191539"]} +{"year":"2018","title":"Multilingual word embeddings and their utility in cross-lingual learning","authors":["A Kulmizev - 2018"],"snippet":"… language corpus. When said corpus is large enough (eg Wikipedia, Common Crawl, or the concatenation of the two), the resulting DSM can be assumed to represent the distributional semantics of an entire language. 
The algorithms …","url":["https://addi.ehu.es/bitstream/handle/10810/29083/TFM_Artur_Kulmizev.pdf?sequence=1"]} +{"year":"2018","title":"Multimodal Language Analysis with Recurrent Multistage Fusion","authors":["PP Liang, Z Liu, A Zadeh, LP Morency - arXiv preprint arXiv:1808.03920, 2018"],"snippet":"Page 1. Multimodal Language Analysis with Recurrent Multistage Fusion Paul Pu Liang1, Ziyin Liu2, Amir Zadeh2, Louis-Philippe Morency2 1Machine Learning Department, 2Language Technologies Institute Carnegie Mellon …","url":["https://arxiv.org/pdf/1808.03920"]} +{"year":"2018","title":"Multimodal Language Analysis with Recurrent Multistage Fusion: Supplementary Material","authors":["PP Liang, Z Liu, A Zadeh, LP Morency"],"snippet":"… 1.1 Multimodal Features Here we present extra details on feature extraction for the language, visual and acoustic modalities. Language: We used 300 dimensional Glove word embeddings trained on 840 billion tokens from …","url":["http://www.cs.cmu.edu/~pliang/papers/emnlp2018-recurrent-fusion-supp.pdf"]} +{"year":"2018","title":"Natural language processing using a neural network","authors":["B McCann, C Xiong, R Socher - US Patent App. 16/000,638, 2018"],"snippet":"… in the second language. In some examples, training of an MT-LSTM of the encoder 310 uses fixed 300-dimensional word vectors, such as the CommonCrawl-840B GloVe model for English word vectors. These word vectors …","url":["https://patentimages.storage.googleapis.com/f0/42/0e/084fa3f0799a39/US20180349359A1.pdf"]} +{"year":"2018","title":"Navigating Online Semantic Resources for Entity Set Expansion","authors":["WT Adrian, M Manna - International Symposium on Practical Aspects of …, 2018"],"snippet":"… the given entry (synset) in BabelNet. WebIsADatabase [25] is a publicly available database containing more than 400 million hypernymy relations extracted from the CommonCrawl web corpus. 
The tuples of the database are …","url":["https://link.springer.com/chapter/10.1007/978-3-319-73305-0_12"]} +{"year":"2018","title":"Near Human-Level Performance in Grammatical Error Correction with Hybrid Machine Translation","authors":["R Grundkiewicz, M Junczys-Dowmunt - arXiv preprint arXiv:1804.05945, 2018"],"snippet":"… ops). All systems use a 5-gram Language Model (LM) and OSM (Durrani et al., 2011) both estimated from the target side of the training data, and a 5-gram LM and 9-gram WCLM trained on Common Crawl data (Buck et al., 2014) …","url":["https://arxiv.org/pdf/1804.05945"]} +{"year":"2018","title":"Neural Adaptation Layers for Cross-domain Named Entity Recognition","authors":["BY Lin, W Lu - arXiv preprint arXiv:1810.06368, 2018"],"snippet":"… ter stream grab4); 3) general emb, which is pretrained on CommonCrawl containing both formal and user-generated content.5 We obtain the intersection of the top 5K words from source and target vocabularies sorted by frequency to build P1 …","url":["https://arxiv.org/pdf/1810.06368"]} +{"year":"2018","title":"Neural Cross-Lingual Named Entity Recognition with Minimal Resources","authors":["J Xie, Z Yang, G Neubig, NA Smith, J Carbonell - arXiv preprint arXiv:1808.09861, 2018"],"snippet":"Page 1. Neural Cross-Lingual Named Entity Recognition with Minimal Resources Jiateng Xie,1 Zhilin Yang,1 Graham Neubig,1 Noah A. Smith,2,3 and Jaime Carbonell1 1Language Technologies Institute, Carnegie Mellon University …","url":["https://arxiv.org/pdf/1808.09861"]} +{"year":"2018","title":"Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss","authors":["P Xu, D Barbosa - arXiv preprint arXiv:1803.03378, 2018"],"snippet":"… appearing in the training set. For this purpose, we used the freely available 300-dimensional cased word embedding trained on 840 billion tokens from the Common Crawl supplied by Pennington et al. (2014). 
For both datasets, we …","url":["https://arxiv.org/pdf/1803.03378"]} +{"year":"2018","title":"Neural Learning for Question Answering in Italian","authors":["D Croce, A Zelenanska, R Basili - International Conference of the Italian Association …, 2018"],"snippet":"… In [5], both the paragraph and the question encodings use \\(d=128\\) hidden units. The best performing system reported in [5] uses GloVe embeddings [14] trained on Common Crawl of dimension \\(n=300\\) with more than 2 millions of tokens …","url":["https://link.springer.com/chapter/10.1007/978-3-030-03840-3_29"]} +{"year":"2018","title":"Neural Machine Translation with Decoding History Enhanced Attention","authors":["M Wang, J Xie, Z Tan, J Su, D Xiong, C Bian - … of the 27th International Conference on …, 2018"],"snippet":"… translation are presented in Table 2. We compare our NMT systems with various other systems including the winning system in WMT14 (Buck et al., 2014), a phrase-based system whose language models were trained on …","url":["http://www.aclweb.org/anthology/C18-1124"]} +{"year":"2018","title":"Neural models of factuality","authors":["R Rudinger, AS White, B Van Durme - arXiv preprint arXiv:1804.02472, 2018"],"snippet":"Page 1. Neural models of factuality Rachel Rudinger Johns Hopkins University Aaron Steven White University of Rochester Benjamin Van Durme Johns Hopkins University Abstract We present two neural models for event fac …","url":["https://arxiv.org/pdf/1804.02472"]} +{"year":"2018","title":"NEURAL NETWORKS FOR NARRATIVE CONTINUATION","authors":["M Roemmele - 2018"],"snippet":"Page 1. NEURAL NETWORKS FOR NARRATIVE CONTINUATION by Melissa Roemmele A Ph.D. Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL UNIVERSITY OF SOUTHERN CALIFORNIA …","url":["http://people.ict.usc.edu/~roemmele/publications/dissertation.pdf"]} +{"year":"2018","title":"Neural Networks for Semantic Textual Similarity","authors":["DS Prijatelj, J Ventura, J Kalita"],"snippet":"… tors. 
This specific set of word vectors have 300 dimensions and were pre-trained on 840 billion tokens taken from Common Crawl3. Different pre-trained word vectors may be used in place of this specific pre-trained set. After …","url":["http://cs.uccs.edu/~jkalita/papers/2017/DerekPrijateljICON2017.pdf"]} +{"year":"2018","title":"Neural Quality Estimation of Grammatical Error Correction","authors":["S Chollampatt, HT Ng - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"Page 1. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2528–2539 Brussels, Belgium, October 31 - November 4, 2018. c 2018 Association for Computational Linguistics 2528 …","url":["http://www.aclweb.org/anthology/D18-1274"]} +{"year":"2018","title":"Next Utterance Ranking Based On Context Response Similarity","authors":["BEA Boussaha, N Hernandez, C Jacquin, E Morin"],"snippet":"… E. Parameter tuning Word embeddings were initialized with Glove [21] pretrained on Common Crawl Corpus4 then fine tuned during training5 … 4http://commoncrawl.org/the-data/ 5Note that we trained word embeddings …","url":["https://basma-b.github.io/assets/publications/nextUtterance/paper.pdf"]} +{"year":"2018","title":"NICT's Neural and Statistical Machine Translation Systems for the WMT18 News Translation Task","authors":["B Marie, R Wang, A Fujita, M Utiyama, E Sumita - arXiv preprint arXiv:1809.07037, 2018"],"snippet":"… As English monolingual data, we used all the available data except the “Common Crawl” and “News Discussions” corpora.2 For all other languages, we used all the available monolingual corpora, except for Turkish for which we …","url":["https://arxiv.org/pdf/1809.07037"]} +{"year":"2018","title":"NL-FIIT at IEST-2018: Emotion Recognition utilizing Neural Networks and Multi-level Preprocessing","authors":["S Pecar, M Farkaš, M Simko, P Lacko, M Bielikova - Proceedings of the 9th Workshop …, 2018"],"snippet":"… input words, we used various pre-trained 
word embeddings available online like GloVe em- beddings1. With GloVe embeddings, we experimented with domain specific embeddings trained on Twitter data but also with embeddings …","url":["http://www.aclweb.org/anthology/W18-6231"]} +{"year":"2018","title":"Noising and Denoising Natural Language: Diverse Backtranslation for Grammar Correction","authors":["Z Xie, G Genthial, S Xie, A Ng, D Jurafsky - Proceedings of the 2018 Conference of …, 2018"],"snippet":"… h) (Wu et al., 2016). The final scoring function also incorporates a 5-gram language model trained on a subset of Common Crawl, estimated with Kneser-Ney smoothing using KenLM (Heafield, 2011). We in- corporate the …","url":["http://www.aclweb.org/anthology/N18-1057"]} +{"year":"2018","title":"Not just about size-A Study on the Role of Distributed Word Representations in the Analysis of Scientific Publications","authors":["A Garcia, JM Gomez-Perez - arXiv preprint arXiv:1804.01772, 2018","A Garcia-Silva, JM Gomez-Perez"],"snippet":"… our evaluation task, embeddings from a scientific publication corpus consistently generate classifiers with a top performance that is only matched by classifiers learned from embeddings from very large document corpora …","url":["http://ceur-ws.org/Vol-2106/paper3.pdf","https://arxiv.org/pdf/1804.01772"]} +{"year":"2018","title":"Novelty Goes Deep. A Deep Neural Solution To Document Level Novelty Detection","authors":["T Ghosal, V Edithal, A Ekbal, P Bhattacharyya…"],"snippet":"… Training is done with a mini-batch size of 64. GloVe vectors trained on Common Crawl 840B (https://nlp.stanford.edu/projects/glove/) with 300 dimensions are used as fixed word embeddings. 
7SNLI has 3 classes of …","url":["https://www.cse.iitb.ac.in/~pb/papers/coling18-novelty.pdf"]} +{"year":"2018","title":"NTT's Neural Machine Translation Systems for WMT 2018","authors":["M Morishita, J Suzuki, M Nagata - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… This year's submission includes the following features: • Noisy data filtering for Common Crawl and ParaCrawl corpora (Section 3.1) … This year, ParaCrawl and Common Crawl corpora, which were created by crawling parallel websites, were provided for training …","url":["http://www.aclweb.org/anthology/W18-6421"]} +{"year":"2018","title":"NTU NLP Lab System at SemEval-2018 Task 10: Verifying Semantic Differences by Integrating Distributional Information and Expert Knowledge","authors":["YT Shiue, HH Huang, HH Chen - Proceedings of The 12th International Workshop on …, 2018"],"snippet":"… al., 2017). We use the pre-trained embeddings of English concepts3. 4. GloVe(Common Crawl): The GloVe model (Pennington et al., 2014) obtains word representation according to global co-occurrence statistics. 
We use …","url":["http://www.aclweb.org/anthology/S18-1171"]} +{"year":"2018","title":"On Link Stability Detection for Online Social Networks","authors":["J Zhang, X Tao, L Tan, JCW Lin, H Li, L Chang - International Conference on …, 2018"],"snippet":"… Since the social network we obtain from the repositories of common crawl contains missing links and partial information, stochastic estimations are used to measure the accuracy and reliability of our experimental MVVA results [12] …","url":["https://link.springer.com/chapter/10.1007/978-3-319-98809-2_20"]} +{"year":"2018","title":"On NDN and (“lack of”) Measurement","authors":["T Silverston"],"snippet":"… URLs Dataset • Web Crawling of 7 main organizations – Amazon, Ask, Stackoverflow, BBC, CNN, Google, Yahoo – Common Crawl Data Set repository • 1.73B URLs -> 7M for each organization /(Organization) …","url":["https://pdfs.semanticscholar.org/presentation/08d7/0e3f0b27b03a5f99f22bfeebeafa47c9bbb7.pdf"]} +{"year":"2018","title":"On the Compressed Sensing Properties of Word Embeddings","authors":["M Khodak - 2018"],"snippet":"… word2vec embeddings trained on Google News and GloVe vectors trained on Common Crawl were obtained from public repositories [20, 23] while Amazon and Wikipedia embeddings were trained for 100 iterations using …","url":["ftp://ftp.cs.princeton.edu/techreports/2018/008.pdf"]} +{"year":"2018","title":"On the Design and Tuning of Machine Learning Models for Language Toxicity Classification in Online Platforms","authors":["M Rybinski, W Miller, J Del Ser, MN Bilbao… - International Symposium on …, 2018"],"snippet":"… For (B) we have used vectors pre-trained on Common Crawl corpus with GloVe algorithm [9]. 
A more complete description of the text representations involved in our experiments is given below … For pre-trained embeddings …","url":["https://link.springer.com/chapter/10.1007/978-3-319-99626-4_29"]} +{"year":"2018","title":"Ontology Augmentation Through Matching with Web Tables","authors":["O Lehmberg, O Hassanzadeh"],"snippet":"… 3 http://commoncrawl.org … The used PageRank values are obtained from the publicly available Common Crawl WWW Ranking.4 For each partition of columns, we use the maximum PageRank of all source web pages and …","url":["http://disi.unitn.it/~pavel/om2018/papers/om2018_LTpaper4.pdf"]} +{"year":"2018","title":"Ontology Driven Extraction of Research Processes","authors":["V Pertsas, P Constantopoulos, I Androutsopoulos"],"snippet":"… Our experiments with other general-purpose, publicly available embeddings, such as those trained on the Common Crawl corpus using GloVe9, or those trained on Wikipedia articles with word2vec, showed inferior performance …","url":["http://www2.aueb.gr/users/ion/docs/iswc2018.pdf"]} +{"year":"2018","title":"Open Bibliometrics and Undiscovered Public Knowledge","authors":["D Stuart - Online Information Review, 2018"],"snippet":"… Whether altmetrics is really any more open than traditional citation analysis is a matter of debate, although services such as Common Crawl (http://commoncrawl.org), an open repository of web crawl data, provides …","url":["https://www.emeraldinsight.com/doi/abs/10.1108/OIR-07-2017-0209"]} +{"year":"2018","title":"OpenSeq2Seq: extensible toolkit for distributed and mixed precision training of sequence-to-sequence models","authors":["O Kuchaiev, B Ginsburg, I Gitman, V Lavrukhin, C Case… - arXiv preprint arXiv …, 2018"],"snippet":"… training). 
In our experiments, we used WMT 2016 English→German data set obtained by combining the Europarlv7, News Commentary v10, and Common Crawl corpora and resulting in roughly 4.5 million sentence pairs …","url":["https://arxiv.org/pdf/1805.10387"]} +{"year":"2018","title":"Optimizing Automatic Evaluation of Machine Translation with the ListMLE Approach","authors":["M Li, M Wang - ACM Transactions on Asian and Low-Resource …, 2018"],"snippet":"… Bilingual parallel data comprising Europarl v7, Common Crawl corpus, and News Commentary v10, released by the WMT'2015 Machine Translation Shared Task [39] were employed to train the bidirectional lexical translation probability …","url":["https://dl.acm.org/citation.cfm?id=3226045"]} +{"year":"2018","title":"Out-of-Distribution Detection using Multiple Semantic Label Representations","authors":["G Shalev, Y Adi, J Keshet - arXiv preprint arXiv:1808.06664, 2018"],"snippet":"… respectively. The third and forth representations were based on GloVe [36], where the third one was trained using both Wikipedia corpus and Gigawords [34] dataset, the fourth was trained using Common Crawl dataset. The …","url":["https://arxiv.org/pdf/1808.06664"]} +{"year":"2018","title":"Pangloss: Fast Entity Linking in Noisy Text Environments","authors":["M Conover, M Hayes, S Blackburn, P Skomoroch… - arXiv preprint arXiv …, 2018"],"snippet":"… For each surface form Pangloss calls a RocksDB key-value store to retrieve candidate entries (represented by circles) based on associations between hyperlink anchor text and Wikipedia URLs in Wikipedia and Common Crawl (Section 3.5) … 3.3.4 Common Crawl …","url":["https://arxiv.org/pdf/1807.06036"]} +{"year":"2018","title":"ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations","authors":["J Wieting, K Gimpel - Proceedings of the 56th Annual Meeting of the …, 2018"],"snippet":"… Dataset Avg. Length Avg. IDF Avg. Para. Score Vocab. 
Entropy Parse Entropy Total Size Common Crawl 24.0±34.7 7.7±1.1 0.83±0.16 7.2 3.5 0.16M … (2017). Its training data includes four sources: Common Crawl, CzEng 1.6 (Bojar …","url":["http://www.aclweb.org/anthology/P18-1042"]} +{"year":"2018","title":"Periodizing Web Archiving: Biographical, Event-Based, National and Autobiographical Traditions","authors":["R Rogers"],"snippet":"Page 1. Periodizing Web Archiving: Biographical, Event-Based, National and Autobiographical Traditions Richard Rogers INTRODUCTION: HISTORIOGRAPHIES BUILT INTO WEB ARCHIVES The purpose of this chapter is …","url":["https://www.researchgate.net/profile/Richard_Rogers13/publication/327403018_Periodizing_Web_Archiving_Biographical_Event-Based_National_and_Autobiographical_Traditions/links/5b8d511e299bf114b7eeea4e/Periodizing-Web-Archiving-Biographical-Event-Based-National-and-Autobiographical-Traditions.pdf"]} +{"year":"2018","title":"Phrase-Level Metaphor Identification using Distributed Representations of Word Meaning","authors":["O Zayed, JP McCrae, P Buitelaar - NAACL HLT 2018, 2018"],"snippet":"… 10. –GloVe Common Crawl5: We used a pretrained model on the Common Crawl dataset containing 840 billion tokens of web data (about 2 million words). The vectors are 300dimensional using 100 training iteration. For …","url":["http://www.cl.cam.ac.uk/~es407/papers/Fig-Lang2018-proceedings.pdf#page=93"]} +{"year":"2018","title":"Phrase-level Self-Attention Networks for Universal Sentence Encoding","authors":["W Wu, H Wang, T Liu, S Ma - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… and sentence textual similarity. 3.1 Model Configuration 300-dimensional GloVe (Pennington et al., 2014) word embeddings (Common Crawl, uncased) are used to represent words. Following Parikh et al. 
(2016), out-of-vocabulary …","url":["http://www.aclweb.org/anthology/D18-1408"]} +{"year":"2018","title":"Platypus–A Multilingual Question Answering Platform for Wikidata","authors":["TP Tanon, MD de Assunçao, E Caron, FM Suchanek"],"snippet":"… The template analyzer is implemented using RasaNLU [34]. We used the Glove [35] word vectors trained on Common Crawl provided by Spacy and the RasaNLU entity extractor based on the CRFsuite library [36]. Our system can be accessed in three ways …","url":["https://2018.eswc-conferences.org/wp-content/uploads/2018/02/ESWC2018_paper_130.pdf"]} +{"year":"2018","title":"Pointer-CNN for Visual Question Answering","authors":["J Svidt, JS Jepsen - 2018"],"snippet":"Page 1. Pointer-CNN for Visual Question Answering Jakob Svidt Aalborg University jsvidt13@student.aau.dk Jens Søholm Jepsen Aalborg University jjepse12@student. aau June 14, 2018 Abstract Visual Question Answering …","url":["https://projekter.aau.dk/projekter/files/281551620/SvidtJepsen.pdf"]} +{"year":"2018","title":"Pooling Word Vector Representations Across Models","authors":["AC Graesser, V Rus - … Linguistics and Intelligent Text Processing: 18th …"],"snippet":"… The model was trained on non-zero elements in a global word co-occurrence matrix. We used the pre-trained model GloVe-42B which was trained on 42 billion words of Common Crawl corpus and it contains about 1.9 million unique tokens …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=dDNyDwAAQBAJ&oi=fnd&pg=PA17&dq=commoncrawl&ots=qmDDIrUYu6&sig=KN6CvMWGvERnEsXe5NqWJlRnqnM"]} +{"year":"2018","title":"Predicting and Generating Discussion Inspiring","authors":["YJWJ Park"],"snippet":"… response time before being presented to the human. • An LSTM model [2] [4] using pretrained 300 dimensional GloVe word embeddings [5] on Common Crawl to embed the comments. 
A self-attention layer was added on top …","url":["http://web.stanford.edu/class/cs224n/reports/6879446.pdf"]} +{"year":"2018","title":"Predicting Company Ratings through Glassdoor","authors":["TE Whittle"],"snippet":"… This dataset has 300-dimnesional vectors for 3 million words and phrases. The GloVe pre-trained embeddings come from a model trained on 42 billion tokens encountered by Common Crawl, a program designed to crawl the web and extract text …","url":["http://web.stanford.edu/class/cs224n/reports/6880837.pdf"]} +{"year":"2018","title":"Predictive Embeddings for Hate Speech Detection on Twitter","authors":["R Kshirsagar, T Cukuvac, K McKeown, S McGregor - arXiv preprint arXiv:1809.10644, 2018"],"snippet":"… 5 Experimental Setup We tokenize the data using Spacy (HonnibalandJohnson, 2015). We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) (Pennington et al., 2014) and fine tune them for the task. We …","url":["https://arxiv.org/pdf/1809.10644"]} +{"year":"2018","title":"Preferred Answer Selection in Stack Overflow: Better Text Representations... and Metadata, Metadata, Metadata","authors":["X Xu, A Bennett, D Hoogeveen, JH Lau, T Baldwin"],"snippet":"… Word embeddings were set to 150 di- mensions. The co-occurrence weighting function's maximum value xmax was kept at the default of 10. For SEMEVAL, we used pretrained Common Crawl cased embeddings with 840G tokens …","url":["http://noisy-text.github.io/2018/pdf/W-NUT201819.pdf"]} +{"year":"2018","title":"Preserved Structure Across Vector Space Representations","authors":["A Amatuni, E He, E Bergelson - arXiv preprint arXiv:1802.00840, 2018"],"snippet":"… We use the set of vectors pretrained by the GloVe authors on the Common Crawl corpus with 42 billion tokens, resulting in 300 dimensional vectors for 1.9 million unique words1. 
Such vectors have shown promise in modeling early semantic networks (Amatuni & Bergelson …","url":["https://arxiv.org/pdf/1802.00840"]} +{"year":"2018","title":"Probabilistic German Morphosyntax","authors":["R Schäfer"],"snippet":"Page 1. Probabilistic German Morphosyntax HABILITATIONSSCHRIFT zur Erlangung der Lehrbefähigung für das Fach Germanistische und Allgemeine Sprachwissenschaft vorgelegt der Philosophischen Fakultät II der …","url":["http://rolandschaefer.net/wp-content/uploads/RolandSchaefer_2018_ProbabilisticGermanMorphosyntax_Habil_DRAFT.pdf"]} +{"year":"2018","title":"PROMT Systems for WMT 2018 Shared Translation Task","authors":["A Molchanov - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… The CommonCrawl and (especially) ParaCrawl corpora were heavily filtered and normalized using the PROMT tools and algorithms (including language recognition, removal of meaningless sentences, in-house tools for parallel …","url":["http://www.aclweb.org/anthology/W18-6420"]} +{"year":"2018","title":"Pseudo Descriptions for Meta-Data Retrieval","authors":["T Gollub, E Genc, N Lipka, B Stein - Proceedings of the 2018 ACM SIGIR International …, 2018"],"snippet":"… As reference collections, the TREC collections themselves as well as Wikipedia and the CommonCrawl are used … 2Not considered were lists, disambiguations, and short (#words < 250) articles. 3http://commoncrawl …","url":["https://dl.acm.org/citation.cfm?id=3234957"]} +{"year":"2018","title":"QED: A fact verification system for the FEVER shared task","authors":["J Luken, N Jiang, MC de Marneffe - Proceedings of the First Workshop on Fact …, 2018"],"snippet":"… described below. 4.2 Embedding We used GloVe word embeddings (Pennington et al., 2014) with 300 dimensions pre-trained us- ing CommonCrawl to get a vector representation of the evidence sentence. 
We also experimented …","url":["http://www.aclweb.org/anthology/W18-5526"]} +{"year":"2018","title":"Quantifying macroeconomic expectations in stock markets using Google Trends","authors":["J Bock - arXiv preprint arXiv:1805.00268, 2018"],"snippet":"… trends.google.com/trends/, March 5, 2018. 3 Common Crawl (42B tokens) GloVe word embeddings, retrieved from Stanford University, https://nlp.stanford.edu/projects/ glove/, March 4, 2018. 4 GloVe word embeddings are vector …","url":["https://arxiv.org/pdf/1805.00268"]} +{"year":"2018","title":"Quantitative Web History Methods","authors":["A Cocciolo - The SAGE Handbook of Web History, 2018"]} +{"year":"2018","title":"Quantum-like Generalization of Complex Word Embedding: a lightweight approach for textual classification","authors":["H Liu"],"snippet":"… (B) - crawl-300d-2M-vec ‡ 2 Million 600 Billion (C) - GloVe.Common Crawl.840B.300d † 2.2 Million 840 Billion Table 1. The pre-trained word embedding models selected for this experiment, where †= GloVe algorithm embeddings, ‡= Fasttext algorithm embeddings …","url":["http://ceur-ws.org/Vol-2191/paper19.pdf"]} +{"year":"2018","title":"Quester: A Speech-Based Question Answering Support System for Oral Presentations","authors":["R Asadi, H Trinh, HJ Fell, TW Bickmore - … of the 23rd International Conference on …, 2018"],"snippet":"… ( , ) = √ 2 =0 (2) ( ) is the word vector representation of keyword k. 
We used a pre-trained GloVe [14] vector representation with 1.9 million uncased words and vectors with 300 elements, trained using 42 billion tokens of web data from Common Crawl …","url":["http://relationalagents.com/publications/IUI18.pdf"]} +{"year":"2018","title":"Question Answering on SQuAD Dataset","authors":["ZDJ Dong, J Geng"],"snippet":"… We use the 300-dimensional case-insensitive Common Crawl GloVe word embeddings [7], and do not retrain the embeddings during training … We believe this is mainly because the Common Crawl version has a much larger …","url":["http://web.stanford.edu/class/cs224n/reports/6878267.pdf"]} +{"year":"2018","title":"Question Answering System with Question Type Modelling","authors":["K Ponomareva"],"snippet":"… and bug fixing. It might be also useful to consider additional training data sets with more variety of question types present and using a larger pretrained word vectors set, such as CommonCrawl.840B.300d. Acknowledgments I …","url":["http://web.stanford.edu/class/cs224n/reports/6904810.pdf"]} +{"year":"2018","title":"Ranking Documents by Answer-Passage Quality","authors":["E Yulianti, RC Chen, F Scholer, WB Croft, M Sanderson - 2018"],"snippet":"… We use the same set of word embeddings learned from the Y!A data (as with EmbYA), but the effectiveness is roughly comparable to a pre-trained model learned on the Common Crawl data [9]. For all methods tested in …","url":["http://rueycheng.com/paper/answer-passages.pdf"]} +{"year":"2018","title":"Reaching Human-level Performance in Automatic Grammatical Error Correction: An Empirical Study","authors":["T Ge, F Wei, M Zhou - arXiv preprint arXiv:1807.01270, 2018"],"snippet":"Page 1. 
Microsoft Research Technical Report REACHING HUMAN-LEVEL PERFORMANCE IN AUTOMATIC GRAMMATICAL ERROR CORRECTION: AN EMPIRICAL STUDY Tao Ge, Furu Wei, Ming Zhou Natural Language …","url":["https://arxiv.org/pdf/1807.01270"]} +{"year":"2018","title":"Reasoning with Sarcasm by Reading In-between","authors":["Y Tay, LA Tuan, SC Hui, J Su - arXiv preprint arXiv:1805.02856, 2018"],"snippet":"… We use the GloVe model trained on 2B Tweets for the Tweets and Reddit dataset. The Glove model trained on Common Crawl is used for the Debates corpus. The size of the word em- beddings is fixed at d = 100 and are fine-tuned during training …","url":["https://arxiv.org/pdf/1805.02856"]} +{"year":"2018","title":"Recommendation System Developer","authors":["E Nandini, J Neal, T Olson, C Prater-Lee"],"snippet":"… This matrix is mapped to a vector space; GloVe uses a least squares regression model to minimize the dot product between word vectors. spaCy trains GloVe on Common Crawl corpus by default, which offers free web page data …","url":["https://taylorlolson.com/docs/recommendation-system-developer.pdf"]} +{"year":"2018","title":"Refining Word Embeddings Using Intensity Scores for Sentiment Analysis","authors":["LC Yu, J Wang, KR Lai, X Zhang"],"snippet":"Page 1. 2329-9290 (c) 2017 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. 
This …","url":["https://www.researchgate.net/profile/Jin_Wang115/publication/322143064_Refining_Word_Embeddings_Using_Intensity_Scores_for_Sentiment_Analysis/links/5a4d9ad50f7e9b8284c4e442/Refining-Word-Embeddings-Using-Intensity-Scores-for-Sentiment-Analysis.pdf"]} +{"year":"2018","title":"Regularized Training Objective for Continued Training for Domain Adaptation in Neural Machine Translation","authors":["H Khayrallah, B Thompson, K Duh, P Koehn - Proceedings of the 2nd Workshop on …, 2018"],"snippet":"… large, out-of-domain corpus we utilize bi- text from WMT2017 (Bojar et al., 2017),4 which contains data from several sources: Europarl parliamentary proceedings (Koehn, 2005),5 News Commentary (political and economic …","url":["http://www.aclweb.org/anthology/W18-2705"]} +{"year":"2018","title":"Report on the Third Quality Translation Shared Task","authors":["P Williams, P Koehn, O Bojar, T Kocmi, L Specia… - 2018"],"snippet":"Page 1. This document is part of the Research and Innovation Action “Quality Translation 21 (QT21)”. This project has received funding from the European Union's Horizon 2020 program for ICT under grant agreement no. 645452 …","url":["http://www.qt21.eu/wp-content/uploads/2018/08/QT21-D4.3.pdf"]} +{"year":"2018","title":"Representativeness of Latent Dirichlet Allocation Topics Estimated from Data Samples with Application to Common Crawl","authors":["Y Du, A Herzog, A Luckow, R Nerella, C Gropp, A Apon"],"snippet":"Abstract—Common Crawl is a massive multi-petabyte dataset hosted by Amazon. It contains archived HTML web page data from 2008 to date. Common Crawl has been widely used for text mining purposes. 
Using data extracted from Common Crawl has several advantages","url":["https://www.researchgate.net/profile/Yuheng_Du/publication/322512712_Representativeness_of_latent_dirichlet_allocation_topics_estimated_from_data_samples_with_application_to_common_crawl/links/5a67b4980f7e9b76ea8f086e/Representativeness-of-latent-dirichlet-allocation-topics-estimated-from-data-samples-with-application-to-common-crawl.pdf"]} +{"year":"2018","title":"Reproducible Web Corpora: Interactive Archiving with Automatic Quality Assessment","authors":["J KIESEL, F KNEIST, M ALSHOMARY, B STEIN… - 2018"],"snippet":"… since disappeared. The Common Crawl [36] is also missing many of such resources. Other … the web. As a population of web pages to draw a sample from, we resort to the recent billion-page Common Crawl 2017-04 [36]. From …","url":["https://webis.de/downloads/publications/papers/stein_2018q.pdf"]} +{"year":"2018","title":"Research Data Management","authors":["S Kühne"],"snippet":"… the web as graph data - 886 m. nodes (07/2018) - 5.4 bn. edges (07/2018) THE COMMON CRAWL ARCHIVE, http://commoncrawl.org Some research questions - prevalence of Web advertising - etymologies of words …","url":["https://www.ral.uni-leipzig.de/fileadmin/user_upload/dokumente/Schleyer/Kuehne_Introduction.pdf"]} +{"year":"2018","title":"Research Frontiers in Information Retrieval Report from the Third Strategic Workshop on Information Retrieval in Lorne (SWIRL 2018)","authors":["JS Culpepper, F Diaz, MD Smucker"],"snippet":"Page 1. WORKSHOP REPORT Research Frontiers in Information Retrieval Report from the Third Strategic Workshop on Information Retrieval in Lorne (SWIRL 2018) Editors J. Shane Culpepper, Fernando Diaz, and Mark D. 
Smucker …","url":["http://www.damianospina.com/wp-content/uploads/2018/04/swirl3-report.pdf"]} +{"year":"2018","title":"RI-Match: Integrating Both Representations and Interactions for Deep Semantic Matching","authors":["L Chen, Y Lan, L Pang, J Guo, J Xu, X Cheng - Asia Information Retrieval Symposium, 2018"],"snippet":"… First, we introduce our experimental settings, including parameter setting, and evaluation metrics. Parameter Settings. We initialize word embeddings in the word embedding layer with 300-dimensional Glove word vectors pre-trained in the 840B Common Crawl corpus …","url":["https://link.springer.com/chapter/10.1007/978-3-030-03520-4_9"]} +{"year":"2018","title":"Rigging Research Results by Manipulating Top Websites Rankings","authors":["VL Pochat, T Van Goethem, W Joosen - arXiv preprint arXiv:1806.01156, 2018"],"snippet":"Page 1. Rigging Research Results by Manipulating Top Websites Rankings Victor Le Pochat, Tom Van Goethem and Wouter Joosen imec-DistriNet, KU Leuven 3001 Leuven, Belgium Email: firstname.lastname@cs.kuleuven.be …","url":["https://arxiv.org/pdf/1806.01156"]} +{"year":"2018","title":"Risk Analysis of Information-Leakage Through Interest Packets in NDN","authors":["D Kondo, T Silverston, H Tode, T Asami, O Perrin"],"snippet":"… We collected URLs from the data repository provided by Common Crawl and we evaluate the performances of our per-packet filters … All the URLs in our data set provided by Common Crawl did not necessarily have part as defined in RFC 1808 …","url":["http://ieeexplore.ieee.org/iel7/8106907/8116300/08116403.pdf"]} +{"year":"2018","title":"Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots","authors":["Y Yoon, WR Ko, M Jang, J Lee, J Kim, G Lee - arXiv preprint arXiv:1810.12541, 2018"],"snippet":"… In the space of word embedding, words of similar meaning have similar representations, so understanding natural language is easier. 
We used the pretrained word embedding model GloVe, trained on the Common Crawl corpus [21] …","url":["https://arxiv.org/pdf/1810.12541"]} +{"year":"2018","title":"S3BD: Secure Semantic Search over Encrypted Big Data in the Cloud","authors":["J Woodworth, MA Salehi - arXiv preprint arXiv:1809.07927, 2018"],"snippet":"Page 1. CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE Concurrency Computat.: Pract. Exper. 0000; 00:1–22 Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/cpe …","url":["https://arxiv.org/pdf/1809.07927"]} +{"year":"2018","title":"Sandpiper: Scaling Probabilistic Inferencing to Large Scale Graphical Models","authors":["A Ulanov, M Marwah, M Kim, R Dathathri, C Zubieta…"],"snippet":"… It is the hyperlinked graph obtained from a web crawl conducted by Common Crawl in August 2012 [32] … We used a real, large-scale web graph for this use case. We derived it from the web graph available at Web Data Commons [30], based on the Common Crawl data [32] …","url":["http://marwah.org/publications/papers/bigdata2017.pdf"]} +{"year":"2018","title":"Scheduled Multi-Task Learning: From Syntax to Translation","authors":["E Kiperwasser, M Ballesteros - arXiv preprint arXiv:1804.08915, 2018"],"snippet":"Page 1. Scheduled Multi-Task Learning: From Syntax to Translation Eliyahu Kiperwasser∗ Computer Science Department Bar-Ilan University Ramat-Gan, Israel elikip@gmail.com Miguel Ballesteros IBM Research 1101 …","url":["https://arxiv.org/pdf/1804.08915"]} +{"year":"2018","title":"SDC: structured data collection by yourself","authors":["T Ohshima, M Toyama - Proceedings of the 8th International Conference on …, 2018"],"snippet":"… PVLDB 1, 1 (2008), 538--549. http://www.vldb.org/pvldb/171453916.pdf. 2. Common Crawl [nd]. Common Crawl. http://commoncrawl.org/. ([nd]). 3. DataHub - Frictionless Data [nd]. DataHub - Frictionless Data. http://datahub.io/. 
([nd]) …","url":["https://dl.acm.org/citation.cfm?id=3200849"]} +{"year":"2018","title":"Searching Arguments in German with ArgumenText","authors":["C Stahlhut"],"snippet":"… for arguments on a topic such as “nuclear energy”, it first retrieves relevant documents via Elasticsearch from a large collection of documents, such as Common Crawl2. In the … 1A demonstrator is publicly available at …","url":["http://desires.dei.unipd.it/papers/paper20.pdf"]} +{"year":"2018","title":"Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings","authors":["M Tkachenko, CC Chia, H Lauw - Proceedings of the 56th Annual Meeting of the …, 2018"],"snippet":"… embeddings. The previous preoccupation centers around corpus size, ie, a larger corpus is perceived to be richer in statistical information. For instance, popular corpora include Wikipedia, Common Crawl, and Google News. We …","url":["http://www.aclweb.org/anthology/P18-1112"]} +{"year":"2018","title":"SEGBOT: A Generic Neural Text Segmentation Model with Pointer Network","authors":["J Li, A Sun, S Joty - IJCAI. Under Review, 2018"],"snippet":"Page 1. SEGBOT: A Generic Neural Text Segmentation Model with Pointer Network Jing Li, Aixin Sun and Shafiq Joty School of Computer Science and Engineering, Nanyang Technological University, Singapore …","url":["https://www.researchgate.net/profile/Aixin_Sun/publication/325168535_SEGBOT_A_Generic_Neural_Text_Segmentation_Model_with_Pointer_Network/links/5afb9f710f7e9b3b0bf2a964/SEGBOT-A-Generic-Neural-Text-Segmentation-Model-with-Pointer-Network.pdf"]} +{"year":"2018","title":"Semantic Term \"Blurring\" and Stochastic \"Barcoding\" for Improved Unsupervised Text Classification","authors":["RF Martorano III - arXiv preprint arXiv:1811.02456, 2018"],"snippet":"… trained on millions of documents from Google News. In the case of later models, they have trained on common crawl, a dataset of billions of web pages. 
The high level intuition with these models, is that terms used in similar contexts, likely have similar semantics …","url":["https://arxiv.org/pdf/1811.02456"]} +{"year":"2018","title":"Semi-Supervised Neural System for Tagging, Parsing and Lemmatization","authors":["P Rybak, A Wróblewska - CoNLL 2018, 2018"],"snippet":"… For Uyghur language only 3M words are available. The provided data sets come either from Wikipedia or Commom Crawl. Where it is possible we choose the sentences from Common Crawl, due to longer (on average) sentence sizes …","url":["http://universaldependencies.org/conll18/proceedings/K18-2.pdf#page=53"]} +{"year":"2018","title":"Sentence Classification for Investment Rules Detection","authors":["Y Mansar, S Ferradans - Proceedings of the First Workshop on Economics and …, 2018"],"snippet":"… embedding. This is justified by the fact that some words used in prospectuses are uncommon in the general use of language and thus are not included in available word vectors pre-trained on Wikipedia or common crawl alone …","url":["http://www.aclweb.org/anthology/W18-3106"]} +{"year":"2018","title":"Sentence Encoding with Tree-constrained Relation Networks","authors":["L Yu, CM d'Autume, C Dyer, P Blunsom, L Kong… - arXiv preprint arXiv …, 2018"],"snippet":"Page 1. SENTENCE ENCODING WITH TREE-CONSTRAINED RE- LATION NETWORKS Lei Yu Cyprien de Masson d'Autume Chris Dyer Phil Blunsom Lingpeng Kong Wang Ling DeepMind {leiyu, cyprien, cdyer, pblunsom, lingpenk, lingwang}@google.com ABSTRACT …","url":["https://arxiv.org/pdf/1811.10475"]} +{"year":"2018","title":"Sentence Modeling via Multiple Word Embeddings and Multi-level Comparison for Semantic Textual Similarity","authors":["HN Tien, MN Le, Y Tomohiro, I Tatsuya - arXiv preprint arXiv:1805.07882, 2018"],"snippet":"… The em- bedding representations in fastText are 300dimensional vectors. 
• GloVe is a 300-dimensional word embedding model learned on aggregated global wordword co-occurrence statistics from Common Crawl (840 billion tokens) …","url":["https://arxiv.org/pdf/1805.07882"]} +{"year":"2018","title":"Sentence Selection and Weighting for Neural Machine Translation Domain Adaptation","authors":["R Wang, M Utiyama, A Finch, L Liu, K Chen, E Sumita - IEEE/ACM Transactions on …, 2018"],"snippet":"Page 1. 2329-9290 (c) 2018 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/8360031/"]} +{"year":"2018","title":"Sentence Similarity Learning Method based on Attention Hybrid Model","authors":["Y Wang, X Di, J Li, H Yang, L Bi - Journal of Physics: Conference Series, 2018"],"snippet":"… with our method. 4.3. Experimental Setup We initialize word representation in the word embedding layer with the 300-dimensional GloVe word vectors pre-trained from the Common Crawl Corpus [18]. Embeddings for words …","url":["http://iopscience.iop.org/article/10.1088/1742-6596/1069/1/012119/pdf"]} +{"year":"2018","title":"Sentence Simplification with Memory-Augmented Neural Networks","authors":["T Vu, B Hu, T Munkhdalai, H Yu - arXiv preprint arXiv:1804.07445, 2018"],"snippet":"… We used the same hyperparameters across all datasets. Word embeddings were initialized either randomly or with Glove vectors (Pennington et al., 2014) pre-trained on Common Crawl data (840B tokens), and fine-tuned during training …","url":["https://arxiv.org/pdf/1804.07445"]} +{"year":"2018","title":"SentEval: An Evaluation Toolkit for Universal Sentence Representations","authors":["A Conneau, D Kiela - arXiv preprint arXiv:1803.05449, 2018"],"snippet":"… Continuous bag-of-words embeddings (average of word vectors). 
We consider the most commonly used pretrained word vectors available, namely the fastText (Mikolov et al., 2017) and the GloVe (Pennington et al., 2014) vectors trained on CommonCrawl …","url":["https://arxiv.org/pdf/1803.05449"]} +{"year":"2018","title":"Sentiment Bias in Predictive Text Recommendations Results in Biased Writing","authors":["KC Arnold, K Chauncey, KZ Gajos"],"snippet":"… lending, or law enforcement—if the data sets used to train the algorithms are biased [2]. Such biased data sets are more common than initially suspected: Recent work demonstrated that two popular text corpora, the …","url":["https://www.eecs.harvard.edu/~kgajos/papers/2018/arnold18sentiment.pdf"]} +{"year":"2018","title":"Sentiment Expression Boundaries in Sentiment Polarity Classification","authors":["R Kaljahi, J Foster - Proceedings of the 9th Workshop on Computational …, 2018"],"snippet":"… The input layer for these systems is the concatenation of an embedding layer, which uses pre-trained GloVe (Pennington et al., 2014) word embeddings 2 (1.9M vocabulary Common Crawl), concatenated with a one-hot vector …","url":["http://www.aclweb.org/anthology/W18-6222"]} +{"year":"2018","title":"Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples","authors":["M Cheng, J Yi, H Zhang, PY Chen, CJ Hsieh - arXiv preprint arXiv:1803.01128, 2018"],"snippet":"… one is trained from scratch. For the machine translation task, we train our model using 453k pairs from the Europal corpus of German-English WMT 157, common crawl and news-commentary. We use the hyper-parameters suggested …","url":["https://arxiv.org/pdf/1803.01128"]} +{"year":"2018","title":"Sequence-to-sequence Models for Cache Transition Systems","authors":["X Peng, L Song, D Gildea, G Satta"],"snippet":"… Hidden state sizes for both encoder and decoder are set to 100. 
The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training …","url":["https://www.cs.rochester.edu/u/gildea/pubs/peng-acl18.pdf"]} +{"year":"2018","title":"Shortcutting Label Propagation for Distributed Connected Components","authors":["S Stergiou, D Rughwani, K Tsioutsiouliklis - … Conference on Web Search and Data …, 2018"],"snippet":"… Note: OCR errors may be found in this Reference List extracted from the full text article. ACM has opted to expose the complete List rather than only correct and linked references. 1. 2016. Common Crawl. (2016). http://commoncrawl.org/. 2. 2016. Twitter Graph. (2016) …","url":["https://dl.acm.org/citation.cfm?id=3159696"]} +{"year":"2018","title":"Simple Algorithms For Sentiment Analysis On Sentiment Rich, Data Poor Domains.","authors":["PK Sarma, W Sethares - Proceedings of the 27th International Conference on …, 2018"],"snippet":"… Furthermore, in some of these domains, representing words from off-the-shelf word embeddings such as ones obtained from training word2vec, GloVe on Wikipedia or common-crawl may not be efficient. This is because the …","url":["http://www.aclweb.org/anthology/C18-1290"]} +{"year":"2018","title":"SLIND: Identifying Stable Links in Online Social Networks","authors":["J Zhang, L Tan, X Tao, X Zheng, Y Luo, JCW Lin - International Conference on …, 2018"],"snippet":"… The dataset chosen for this study, as well as for the demo, was crawled from Facebook and obtained from the repositories of the Common Crawl (August 2016) 1 . It is de-anonymized to reveal the following relational …","url":["https://link.springer.com/chapter/10.1007/978-3-319-91458-9_54"]} +{"year":"2018","title":"Smart Focused Web Crawler for Hidden Web","authors":["S Kaur, G Geetha - Information and Communication Technology for …, 2019"],"snippet":"… The number of partitions will depend on the number of URLs in site database. 
Tel-8 and common crawl datasets will be used. MapReduce function will be called, and the input will split into 64 MB plus copies of this on other clusters …","url":["https://link.springer.com/chapter/10.1007/978-981-13-0586-3_42"]} +{"year":"2018","title":"SocialLink: exploiting graph embeddings to link DBpedia entities to Twitter profiles","authors":["Y Nechaev, F Corcoglioniti, C Giuliano - Progress in Artificial Intelligence, 2018"],"snippet":"Page 1. Progress in Artificial Intelligence https://doi.org/10.1007/s13748-018-0160-x REGULAR PAPER SocialLink: exploiting graph embeddings to link DBpedia entities to Twitter profiles Yaroslav Nechaev1,2 · Francesco Corcoglioniti1 · Claudio Giuliano1 …","url":["https://link.springer.com/article/10.1007/s13748-018-0160-x"]} +{"year":"2018","title":"Software Requirements Classification Using Word Embeddings and Convolutional Neural Networks","authors":["VL Fong - 2018"],"snippet":"Page 1. SOFTWARE REQUIREMENTS CLASSIFICATION USING WORD EMBEDDINGS AND CONVOLUTIONAL NEURAL NETWORKS A Thesis presented to the Faculty of California Polytechnic State University, San Luis Obispo In Partial Fulfillment …","url":["https://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=3249&context=theses"]} +{"year":"2018","title":"SOLVENT: A Mixed Initiative System for Finding Analogies between Research Papers","authors":["J CHAN, JC CHANG, TOM HOPE, D SHAHAF… - 2018"],"snippet":"… We originally used pre-trained GloVe [35] vectors trained on the Common Crawl dataset 5. However, baseline performance was very poor … Finally, we used an initial prototype GloVe model (with Common Crawl) to suggest new matches we might have missed …","url":["http://joelchan.me/assets/pdf/2018-cscw-schema-highlighter.pdf"]} +{"year":"2018","title":"Speech-Based Real-Time Presentation Tracking Using Semantic Matching","authors":["R Asadi - 2017"],"snippet":"Speech-Based Real-Time Presentation Tracking Using Semantic Matching. Abstract. 
Oral presentations are an essential yet challenging aspect of academic and professional life. To date, many commercial and research products …","url":["http://search.proquest.com/openview/a91c85d71f1e130a00bee5b6f95d90e3/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2018","title":"Stop Illegal Comments: A Multi-Task Deep Learning Approach","authors":["A Elnaggar, B Waltl, I Glaser, J Landthaler… - arXiv preprint arXiv …, 2018"],"snippet":"… 12] and our Glove). The Fast Text is 2 million word vectors trained on Common Crawl with dimension 300, while Glove is 2.2 million word vectors trained on Common Crawl with dimension 300. Furthermore, we trained a custom …","url":["https://arxiv.org/pdf/1810.06665"]} +{"year":"2018","title":"Studio Ousia's Quiz Bowl Question Answering System at NIPS HCQA 2017","authors":["I Yamada"],"snippet":"… Moreover, we use the GloVe word embeddings [10] trained on the 840 billion Common Crawl corpus to initialize the word representations. We randomly select 10% questions from the dataset as a validation set and use the remaining questions to train the model …","url":["http://www.cs.umd.edu/~miyyer/data/Ikuya.pdf"]} +{"year":"2018","title":"Studio Ousia's Quiz Bowl Question Answering System","authors":["I Yamada, R Tamaki, H Shindo, Y Takefuji"],"snippet":"… 1,000. We use filter window sizes of 2, 3, 4, and 5, and 1,000 feature maps for each filter. We use the GloVe word embeddings [12] trained on the 840 billion Common Crawl corpus to initialize the word representations. 
As in …","url":["https://www.researchgate.net/profile/Yoshiyasu_Takefuji/publication/323535360_Studio_Ousia%27s_Quiz_Bowl_Question_Answering_System/links/5a9a6cde45851586a2aa0ade/Studio-Ousias-Quiz-Bowl-Question-Answering-System.pdf"]} +{"year":"2018","title":"Studying the Difference Between Natural and Programming Language Corpora","authors":["C Casalnuovo, K Sagae, P Devanbu - arXiv preprint arXiv:1806.02437, 2018"],"snippet":"… The German and Spanish corpora were selected from a sample of files from the unlabeled datasets from the ConLL 2017 Shared Task (Ginter et al, 2017), which consist of web text obtained from CommonCrawl.8 Like the 1 billion …","url":["https://arxiv.org/pdf/1806.02437"]} +{"year":"2018","title":"Style Transfer Through Back-Translation","authors":["S Prabhumoye, Y Tsvetkov, R Salakhutdinov, AW Black - arXiv preprint arXiv …, 2018"],"snippet":"… We used data from Workshop in Statistical Machine Translation 2015 (WMT15) (Bojar et al., 2015) to train our translation models. We used the French– English data from the Europarl v7 corpus, the news commentary …","url":["https://arxiv.org/pdf/1804.09000"]} +{"year":"2018","title":"SumeCzech: Large Czech News-Based Summarization Dataset","authors":["M Straka, N Mediankin, T Kocmi, Z Žabokrtský… - Proceedings of the Eleventh …, 2018"],"snippet":"… The raw data for the dataset was collected from the Common Crawl project2 using the Common Crawl API. Initially, five Czech news websites were selected to create the dataset: novinky.cz, lidovky.cz, denik.cz, idnes.cz, and ihned.cz …","url":["http://www.aclweb.org/anthology/L18-1551"]} +{"year":"2018","title":"Supplementary Material for “Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering”","authors":["DK Nguyen, T Okatani"],"snippet":"… All the questions were tokenized using Python Natural Language Toolkit (nltk) [2]. 
We used the vocabulary provided by the CommonCrawl-840B Glove model for English word vectors [11], and set out-of-vocabulary words to unk …","url":["http://openaccess.thecvf.com/content_cvpr_2018/Supplemental/3586-supp.pdf"]} +{"year":"2018","title":"Survey of Simple Neural Networks in Semantic Textual Similarity Analysis","authors":["DS Prijatelj, J Ventura, J Kalita"],"snippet":"… vectors. This specific set of word vectors have 300 dimensions and were pre-trained on 840 billion tokens taken from Common Crawl3. Different pretrained word vectors may be used in-place of this specific pretrained set …","url":["http://cs.uccs.edu/~jkalita/work/reu/REU2017/11Prijatelj.pdf"]} +{"year":"2018","title":"SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference","authors":["R Zellers, Y Bisk, R Schwartz, Y Choi - arXiv preprint arXiv:1808.05326, 2018"],"snippet":"… We consider three different types of word representations: 300d GloVe vectors from Common Crawl (Pennington et al., 2014), 300d Numberbatch vectors retrofitted using ConceptNet relations (Speer et al., 2017), and …","url":["https://arxiv.org/pdf/1808.05326"]} +{"year":"2018","title":"Systems and methods for improved user interface","authors":["Z Wei, T Nguyen, I Chan, KM Liou, H Wang, H Lu - US Patent App. 15/621,647, 2018"],"snippet":"Aspects of the present disclosure relate to systems and methods for a voice-centric virtual or soft keyboard (or keypad). 
Unlike other keyboards, embodiments of the present disclosure prioritize the voice keyboard, meanwhile providing users with a quick and uniform navigation to …","url":["https://patents.google.com/patent/US20180011688A1/en"]} +{"year":"2018","title":"T2S: An Encoder-Decoder Model for Topic-Based Natural Language Generation","authors":["W Ou, C Chen, J Ren - International Conference on Applications of Natural …, 2018"],"snippet":"… We initialize our word embeddings with publicly available 300-dimensional Glove vectors [13], which is trained on 840 billion tokens of Common Crawl data 2 . Words that do not exist in the pretrained Glove vectors are replaced by “” token …","url":["https://link.springer.com/chapter/10.1007/978-3-319-91947-8_15"]} +{"year":"2018","title":"TabVec: Table Vectors for Classification of Web Tables","authors":["M Ghasemi-Gol, P Szekely - arXiv preprint arXiv:1802.06290, 2018"],"snippet":"… They evaluated their system on the common crawl dataset, and reported significant improvement compared to previous feature based methods … Three of these datasets are from unusual domains, and one is a sample from Common Crawl …","url":["https://arxiv.org/pdf/1802.06290"]} +{"year":"2018","title":"TCS Research at SemEval-2018 Task 1: Learning Robust Representations using Multi-Attention Architecture","authors":["H Meisheri, L Dey - Proceedings of The 12th International Workshop on …, 2018"],"snippet":"… corpus which results in parallel attention mechanism - one set from the twitter space and another from a common crawl corpus … 2014) trained over common crawl corpus with 300 dimension vector, Character1 level embeddings trained …","url":["http://www.aclweb.org/anthology/S18-1043"]} +{"year":"2018","title":"Temporal Modular Networks for Retrieving Complex Compositional Activities in Videos","authors":["L Fei-Fei, JC Niebles"],"snippet":"… 4.2. The Stanford Parser [27] is used to obtain the initial parse trees for the compositional structure. 
For word vectors as part of the base module input, we use the 300-dimensional GloVe [36] vectors pretrained on Common Crawl (42 billion tokens) …","url":["http://svl.stanford.edu/assets/papers/liu2018eccv.pdf"]} +{"year":"2018","title":"Ten Years of WebTables","authors":["M Cafarella, A Halevy, H Lee, J Madhavan, C Yu… - Proceedings of the VLDB …, 2018"],"snippet":"… Several researchers produced web tables from the public Common Crawl [1, 24, 15], thereby making them available to a broad audience outside the large Web companies. Wang, et al. [36] improved ex- traction quality by leveraging curated knowledge bases …","url":["http://www.vldb.org/pvldb/vol11/p2140-cafarella.pdf"]} +{"year":"2018","title":"Text Embeddings for Retrieval From a Large Knowledge Base","authors":["T Cakaloglu, C Szegedy, X Xu - arXiv preprint arXiv:1810.10176, 2018"],"snippet":"… We specially utilized the ”glove-840B-300d” pre-trained word vectors where it was trained on using the common crawl within 840B tokens, 2.2M vocab, cased, 300d vectors. We created the GloVe representation of our corpus …","url":["https://arxiv.org/pdf/1810.10176"]} +{"year":"2018","title":"Text-based Sentiment Analysis and Music Emotion Recognition","authors":["E Çano - 2018"],"snippet":"… 38 3.4 Confusion matrix of lexicon-generated song labels . . . . . 42 5.1 Listofwordembeddingcorpora . . . . . 65 5.2 Google News compared with Common Crawl . . . . . 69 5.3 Propertiesofself_w2vmodels . . . . . 70 …","url":["https://www.researchgate.net/profile/Erion_Cano/publication/325651523_Text-based_Sentiment_Analysis_and_Music_Emotion_Recognition/links/5b1a8d640f7e9b68b429cdae/Text-based-Sentiment-Analysis-and-Music-Emotion-Recognition.pdf"]} +{"year":"2018","title":"Text-Driven Head Motion Synthesis Using Neural Networks","authors":["BTS Bojlén"],"snippet":"… We also compared embeddings trained on Common Crawl (a large collection of websites), and on Wikipedia and the Gigaword corpus of news articles. 
The model trained was a baseline RNN with the architecture specified in Table 4.2 …","url":["https://btao.org/static/dissertation.pdf"]} +{"year":"2018","title":"TEXTBUGGER: Generating Adversarial Text Against Real-world Applications","authors":["J Li, S Ji, T Du, B Li, T Wang"],"snippet":"… This is because we observe that the stop-words also have impact on the prediction results. In particular, our experiments utilize the 300-dimension GloVe embeddings7 trained on 840 billion tokens of Common Crawl. Words …","url":["https://nesa.zju.edu.cn/download/TEXTBUGGER%20Generating%20Adversarial%20Text%20Against%20Real-world%20Applications.pdf"]} +{"year":"2018","title":"The ADAPT System Description for the IWSLT 2018 Basque to English Translation Task","authors":["A Poncelas, A Way, K Sarasola - International Workshop on Spoken Language …, 2018"],"snippet":"… pair (see Table 4)[12]. In particular, we use the CommonCrawl, Europarl V7, NewsCommentary V12 and UN datasets for training, 5 the NewsTest 2008-2012 corpora for validation and NewsTest 2013 for testing. We did not use …","url":["https://workshop2018.iwslt.org/downloads/Proceedings_IWSLT_2018.pdf#page=91"]} +{"year":"2018","title":"The AFRL WMT18 Systems: Ensembling, Continuation and Combination","authors":["J Gwinnup, T Anderson, G Erdmann, K Young - … of the Third Conference on Machine …, 2018"],"snippet":"… We took the Russian and English monolingual CommonCrawl (Smith et al., 2013) data provided by the organizers and applied tokenization and BPE with our common, joint model … 2013. Dirt cheap web-scale parallel text from the common crawl …","url":["http://www.aclweb.org/anthology/W18-6411"]} +{"year":"2018","title":"The Geometry of Culture: Analyzing Meaning through Word Embeddings","authors":["AC Kozlowski, M Taddy, JA Evans - arXiv preprint arXiv:1803.09288, 2018"],"snippet":"Page 1. The Geometry of Culture: Analyzing Meaning through Word Embeddings Austin C. Kozlowski​1 Matt Taddy​2,3 James A. 
Evans1 1 University of Chicago, Department of Sociology 2 University of Chicago, Booth School of Business 3 Amazon …","url":["https://arxiv.org/pdf/1803.09288"]} +{"year":"2018","title":"The Importance of Subword Embeddings in Sentence Pair Modeling","authors":["W Lan, W Xu"],"snippet":"… GloVe word vectors (Pennington et al.), trained on 27 billion words from Twitter (vocabulary size of 1.2 milion words) for social media datasets, and 300-dimensional GloVe vectors, trained on 840 billion words (vocabulary …","url":["https://pdfs.semanticscholar.org/c99f/e106e7d1cc62f7cb73ea6fc745b8679e4d2f.pdf"]} +{"year":"2018","title":"The Knowledge and Language Gap in Medical Information Seeking","authors":["L Soldaini - 2018"],"snippet":"The Knowledge and Language Gap in Medical Information Seeking. Abstract. Interest in medical information retrieval has risen significantly in the last few years. The Internet has become a primary source for consumers looking …","url":["http://search.proquest.com/openview/e669cd1478b33d52fa4cc71e8393c639/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2018","title":"The MLLP-UPV German-English Machine Translation System for WMT18","authors":["J Iranzo-Sánchez, P Baquero-Arnal, GVG Díaz-Munío…"],"snippet":"… 422 Page 2. Table 1: Size by corpus of the WMT18 parallel dataset Corpus Sentences (M) News Commentary v13 0.3 Rapid (press releases) 1.3 Common Crawl 1.9 Europarl v7 2.4 ParaCrawl 36.4 WMT18 total 42.3 the rest of the WMT corpora …","url":["http://www.statmt.org/wmt18/pdf/WMT041.pdf"]} +{"year":"2018","title":"The Natural Language Decathlon: Multitask Learning as Question Answering","authors":["B McCann, NS Keskar, C Xiong, R Socher - arXiv preprint arXiv:1806.08730, 2018"],"snippet":"Page 1. 
The Natural Language Decathlon: Multitask Learning as Question Answering Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher Salesforce Research {bmccann,nkeskar,cxiong,rsocher}@salesforce.com Abstract …","url":["https://arxiv.org/pdf/1806.08730"]} +{"year":"2018","title":"The RWTH Aachen Machine Translation Systems for IWSLT 2017","authors":["P Bahar, J Rosendahl, N Rossenbach, H Ney"],"snippet":"… The majority of removed sentence pairs are part of the Common Crawl (300k sentences ie 14% of Common Crawl) and the OpenSubtitles corpora (1000k sentences ie 8% of OpenSubtitles) … Common Crawl Europarl UN News Comment OpenSub QED TED Wiki Total …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1061/BaharParniaRosendahlJanRossenbachNickNeyHermann--TheRWTHAachenMachineTranslationSystemsforIWSLT2017--2017.pdf"]} +{"year":"2018","title":"The RWTH Aachen University Filtering System for the WMT 2018 Parallel Corpus Filtering Task","authors":["N Rossenbach, J Rosendahl, Y Kim, M Graça… - Proceedings of the Third …, 2018"],"snippet":"… We train IBM1 models for both directions (s2t and t2s) using the bilingual data from the WMT 2018 German↔English task namely the Europarl, CommonCrawl, NewsCommentary and Rapid corpus. 4.3 Neural Network Language Model …","url":["http://www.aclweb.org/anthology/W18-6487"]} +{"year":"2018","title":"The RWTH Aachen University Supervised Machine Translation Systems for WMT 2018","authors":["J Schamper, J Rosendahl, P Bahar, Y Kim, A Nix… - Proceedings of the Third …, 2018"],"snippet":"… 1.4% BLEU. 
The Transformer model was trained using the standard parallel WMT 2018 data sets (namely Europarl, CommonCrawl, NewsCommentary and Rapid, in total 5.9M sentence pairs) as well as the 4.2M sen3http://www …","url":["http://www.aclweb.org/anthology/W18-6426"]} +{"year":"2018","title":"The Speechmatics Parallel Corpus Filtering System for WMT18","authors":["T Ash, R Francis, W Williams - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… This data comprises the data for the WMT 2018 news translation task data for German-English without the Paracrawl parallel corpus. This data is approximately 130M words, drawn from Europarl, Common Crawl, News …","url":["http://www.aclweb.org/anthology/W18-6472"]} +{"year":"2018","title":"The study of keyword search in open source search engines and digital forensics tools with respect to the needs of cyber crime investigations","authors":["J Hansen - 2017"],"snippet":"Page 1. The study of keyword search in open source search engines and digital forensics tools with respect to the needs of cyber crime investigations Joachim Hansen Master in Information Security Supervisor: Katrin Franke …","url":["https://brage.bibsys.no/xmlui/bitstream/handle/11250/2479196/18187_FULLTEXT.pdf?sequence=1"]} +{"year":"2018","title":"The University of Cambridge's Machine Translation Systems for WMT18","authors":["F Stahlberg, A de Gispert, B Byrne - arXiv preprint arXiv:1808.09465, 2018"],"snippet":"… Page 3. 
Corpus Over-sampling #Sentences Common Crawl 2x 4.43M Europarl v7 2x 3.76M News Commentary v13 2x 0.57M Rapid 2016 2x 2.27M ParaCrawl 1x 11.16M Synthetic (news-2017) 1x 20.00M Total 42.19M Table …","url":["https://arxiv.org/pdf/1808.09465"]} +{"year":"2018","title":"The University of Edinburgh's Submissions to the WMT18 News Translation Task","authors":["B Haddow, N Bogoychev, D Emelin, U Germann… - Proceedings of the Third …, 2018"],"snippet":"… closely Corpus % Back translations1 50% CommonCrawl 5% Europarl 15% News-commentary 10% ParaCrawl 10% Rapid 10% Table 1: Blend of data for training the DE↔EN ensemble models (40M sentence pairs total) …","url":["http://www.aclweb.org/anthology/W18-6412"]} +{"year":"2018","title":"The USTC-NEL Speech Translation system at IWSLT 2018","authors":["D Liu, J Liu, W Guo, S Xiong, Z Ma, R Song, C Wu… - arXiv preprint arXiv …, 2018"],"snippet":"… Page 2. Table 2: text training data. Corpus raw filtered commoncrawl 2.39M 1.80M rapid 1.32M 1.00M europal 1.92M 1.81M commentary 0.284M 0.233M paracrawl 36.35M 12.35M opensubtitles 22.51M 14.24M WIT3(in domain) 0.209M 0.207M …","url":["https://arxiv.org/pdf/1812.02455"]} +{"year":"2018","title":"Topic coherence analysis for the classification of Alzheimer's disease","authors":["A Pompili, A Abad, DM de Matos, IP Martins - Proc. IberSPEECH 2018, 2018"],"snippet":"… regularities among sentences. To this purpose, we rely on a pre-trained model of word vector representations containing 2 million word vectors, in 300 dimensions, trained with fastText on Common Crawl [27]. 
In the process …","url":["https://www.isca-speech.org/archive/IberSPEECH_2018/pdfs/IberS18_O5-1_Pompili.pdf"]} +{"year":"2018","title":"Topic Modeling for Analyzing Open-Ended Survey Responses","authors":["AS Pietsch, S Lessmann"],"snippet":"… Word2Vec [32] and the second one on Common Crawl web data via Global Vectors for Word Representation (GloVe) [33] … The set is trained on 42 billion tokens of Common Crawl web data and contains 300-dimensional vectors …","url":["https://www.wiwi.hu-berlin.de/de/forschung/irtg/results/discussion-papers/discussion-papers-2017-1/irtg1792dp2018-054.pdf"]} +{"year":"2018","title":"Toward better reasoning from natural language","authors":["A Purtee - 2018"],"snippet":"Page 1. Toward Better Reasoning from Natural Language by Adam Lee Purtee Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy Supervised by Professor Lenhart Schubert and …","url":["https://urresearch.rochester.edu/fileDownloadForInstitutionalItem.action?itemId=34810&itemFileId=186239"]} +{"year":"2018","title":"Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection","authors":["L Konstantinovskiy, O Price, M Babakar, A Zubiaga - arXiv preprint arXiv:1809.08193, 2018"],"snippet":"… The method provided by InferSent involves words be- ing converted to their common crawl GloVe implementations before being passed through a bidirectional long-short-term memory (BiLSTM) network (Hochreiter & Schmidhuber, 1997) …","url":["https://arxiv.org/pdf/1809.08193"]} +{"year":"2018","title":"Towards Knowledge Graph Construction from Entity Co-occurrence","authors":["N Heist"],"snippet":"… patterns. 2 https://www.mturk.com/ 3 Pages starting with List of in http://downloads. 
dbpedia.org/2016-10/corei18n/en/labels en.ttl.bz2 4 http://commoncrawl.org/ 5 http://webdatacommons.org/structureddata/#results-2017-1 Page 7 …","url":["https://people.kmi.open.ac.uk/francesco/wp-content/uploads/2018/11/EKAWDC2018_3.pdf"]} +{"year":"2018","title":"Towards Linear Time Neural Machine Translation with Capsule Networks","authors":["M Wang, J Xie, Z Tan, J Su - arXiv preprint arXiv:1811.00287, 2018"],"snippet":"… French translation are presented in Table 1. We compare CAPSNMT with various other systems including the winning system in WMT'14 (Buck et al., 2014), a phrase-based system whose language models were trained on …","url":["https://arxiv.org/pdf/1811.00287"]} +{"year":"2018","title":"Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials","authors":["S Zhao - 2018"],"snippet":"Page 1. Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials by Siyuan Zhao A Dissertation Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE …","url":["https://web.wpi.edu/Pubs/ETD/Available/etd-042618-010745/unrestricted/szhao.pdf"]} +{"year":"2018","title":"Towards Two-Dimensional Sequence to Sequence Model in Neural Machine Translation","authors":["P Bahar, C Brix, H Ney - arXiv preprint arXiv:1810.03975, 2018"],"snippet":"… 5 Experiments We have done the experiments on the WMT 2017 German→English and English→German news tasks consisting of 4.6M training samples collected from the well-known data sets Europarl-v7, News-Commentary-v10 and Common-Crawl …","url":["https://arxiv.org/pdf/1810.03975"]} +{"year":"2018","title":"Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data","authors":["MA Hedderich, D Klakow - arXiv preprint arXiv:1807.00745, 2018"],"snippet":"… Page 4. tries other than Britain until the scientific” where ”Britain” is the target word with label y = LOC. Sentence boundaries are padded. 
We encode the words using the 300-dimensional GloVe vectors trained on cased text …","url":["https://arxiv.org/pdf/1807.00745"]} +{"year":"2018","title":"Training Tips for the Transformer Model","authors":["M Popel, O Bojar - arXiv preprint arXiv:1804.00247, 2018"],"snippet":"… commoncrawl 161 k 3.3 M 2.9 M … Most of our training data comes from the CzEng parallel treebank, version 1.7 (57M sentence pairs), and the rest (1M sentence pairs) comes from three smaller sources (Europarl, News …","url":["https://arxiv.org/pdf/1804.00247"]} +{"year":"2018","title":"Transfer Learning from LDA to BiLSTM-CNN for Offensive Language Detection in Twitter","authors":["G Wiedemann, E Ruppert, R Jindal, C Biemann - Austrian Academy of Sciences …, 2018"],"snippet":"… classification labels per task. have not been seen during training the embedding model. We use a model pre-trained with German language data from Wikipedia and Common Crawl provided by Mikolov et al.(2018). First, we unify all …","url":["https://www.oeaw.ac.at/fileadmin/subsites/academiaecorpora/PDF/GermEval2018_Proceedings.pdf#page=91"]} +{"year":"2018","title":"Transferred Embeddings for Igbo Similarity, Analogy and Diacritic Restoration Tasks","authors":["IEMHI Onyenwe, C Enemuo - COLING 2018, 2018"],"snippet":"… org news dataset. • igWkSbwd from same as igWkNews but with subword information. • igWkCrl from fastText Common Crawl dataset Table 1 shows the vocabulary lengths (vocabs), and the dimensions (vectors) of each of the models used in our experiments …","url":["http://www.aclweb.org/anthology/W18-40#page=40"]} +{"year":"2018","title":"Transferred Embeddings for Igbo Similarity, Analogy, and Diacritic Restoration Tasks","authors":["I Ezeani, I Onyenwe, M Hepple - Proceedings of the Third Workshop on Semantic …, 2018"],"snippet":"… igWkSbwd from same as igWkNews but with subword information. 
• igWkCrl from fastText Common Crawl dataset Table 1 shows the vocabulary lengths (vocabs), and the dimensions (vectors) of each of the models used in our experiments. 3 Model Evaluation …","url":["http://www.aclweb.org/anthology/W18-4004"]} +{"year":"2018","title":"Translation of Biomedical Documents with Focus on Spanish-English","authors":["MS Duma, W Menzel - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… 2http://commoncrawl.org/ 3https://paracrawl.eu/index.html … Track / Corpora EN-ES EN-PT EN-RO Commoncrawl 1.8M - - Paracrawl - 2.1M 2.4M Wikipedia 1.6M 1.6M - EMEA 678K 1.08M 994K Scielo-gma 2016 166K 613K - Table 1: Corpora used for DSTF 3.2 Tools …","url":["http://www.aclweb.org/anthology/W18-6444"]} +{"year":"2018","title":"Two-Step Multi-factor Attention Neural Network for Answer Selection","authors":["P Zhang, Y Hou, Z Su, Y Su - Pacific Rim International Conference on Artificial …, 2018"],"snippet":"… 3.3 Experimental Settings. We initialize word embeddings with 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus [20]. For out of vocabulary (OOV) words, their embeddings are initialized randomly …","url":["https://link.springer.com/chapter/10.1007/978-3-319-97304-3_50"]} +{"year":"2018","title":"UBC-NLP at IEST 2018: Learning Implicit Emotion With an Ensemble of Language Models","authors":["H Alhuzali, M Elaraby, M Abdul-Mageed"],"snippet":"… morphology like English. Additionally, fastText partially solves issues with out-of-vocabulary words since it exploits character sequences. FastText is trained on the Common Crawl dataset, consisting of 600B tokens. 
For this and the …","url":["https://mageed.sites.olt.ubc.ca/files/2018/09/emnlp18_IEST_WASSA_2018.pdf"]} +{"year":"2018","title":"Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation","authors":["M Artetxe, G Labaka, I Lopez-Gazpio, E Agirre - arXiv preprint arXiv:1809.02094, 2018"],"snippet":"… analogies. We use the largest pre-trained model published by the authors9, which was trained on 840 billion words of the Common Crawl corpus and contains 300dimensional vectors for 2.2 million words. Fasttext (Bojanowski …","url":["https://arxiv.org/pdf/1809.02094"]} +{"year":"2018","title":"Understanding Back-Translation at Scale","authors":["S Edunov, M Ott, M Auli, D Grangier - arXiv preprint arXiv:1808.09381, 2018"],"snippet":"… It also allows us to estimate the value of BT data for domain adaptation since the newscrawl corpus (BT-news) is pure news whereas the bitext is a mixture of eu- roparl and commoncrawl with only a small newscommentary portion …","url":["https://arxiv.org/pdf/1808.09381"]} +{"year":"2018","title":"Understanding Search Queries in Natural Language","authors":["Z Neverilová, M Kvaššay - RASLAN 2018 Recent Advances in Slavonic Natural …, 2018"],"snippet":"… stop-words are removed. Tokens are mapped to 300-dimensional word embeddings using publicly available vocabulary of FastText [2] vectors trained on CommonCrawl dataset. Missing words are ignored. Vectors are then …","url":["http://nlp.fi.muni.cz/raslan/raslan18.pdf#page=93"]} +{"year":"2018","title":"Unsupervised Disambiguation of Abstract Syntax","authors":["O KALLDAL, M LUDVIGSSON"],"snippet":"Page 1. 
NoPConj : PConj AAnter : Ant TTAnt : Temp whoever_NP : NP NoVoc : Voc PPos : Pol is_right_VP : VP PhrUtt : Phr TPast : Tense PredVP : Cl UseCl : S UttS : Utt Unsupervised Disambiguation of Abstract Syntax A Language …","url":["http://publications.lib.chalmers.se/records/fulltext/255307/255307.pdf"]} +{"year":"2018","title":"Unsupervised Domain Adaptation by Adversarial Learning for Robust Speech Recognition","authors":["P Denisov, NT Vu, MF Font - arXiv preprint arXiv:1807.11284, 2018"],"snippet":"… Summary of the used corpora is given in Tab. 1. In addition to that, 197 millions words of Italian Deduplicated CommonCrawl Text are used to build Italian language model. Italian dictionary ILE with pronunciations for 588k words is used as a lexicon. 3.2 Baseline …","url":["https://arxiv.org/pdf/1807.11284"]} +{"year":"2018","title":"Unsupervised Mining of Analogical Frames by Constraint Satisfaction","authors":["L De Vine, S Geva, P Bruza - Australasian Language Technology Association …"],"snippet":"… 2 a3, 3 Figure 4: Determining an analogy completion from a larger frame We conducted experiments with embeddings constructed by ourselves as well as with publicly accessible embeddings from the fastText web site2 trained …","url":["http://alta2018.alta.asn.au/alta2018-draft-proceedings.pdf#page=44"]} +{"year":"2018","title":"Unsupervised Neural Machine Translation Initialized by Unsupervised Statistical Machine Translation","authors":["B Marie, A Fujita - arXiv preprint arXiv:1810.12703, 2018"],"snippet":"… These methods usually exploit existing accurate translation models and have shown to be useful especially when targeting 1See for instance the Common Crawl project: http:// commoncrawl.org/ low-resource language pairs and domains …","url":["https://arxiv.org/pdf/1810.12703"]} +{"year":"2018","title":"Unsupervised Post-processing of Word Vectors via Conceptor Negation","authors":["T Liu, L Ungar, J Sedoc - arXiv preprint arXiv:1811.11001, 2018"],"snippet":"… We use the 
publicly available pre-trained Google News Word2Vec (Mikolov et al. 2013)5 and Common Crawl GloVe6 (Pennington, Socher, and Manning 2014) to perform lexical-level experiments. For CN, we fix α = 2 for …","url":["https://arxiv.org/pdf/1811.11001"]} +{"year":"2018","title":"Unsupervised semantic frame induction using triclustering","authors":["D Ustalov, A Panchenko, A Kutuzov, C Biemann… - arXiv preprint arXiv …, 2018"],"snippet":"… In our evaluation, we use triple frequencies from the DepCC dataset (Panchenko et al., 2018) , which is a dependency-parsed version of the Common Crawl corpus, and the standard 300-dimensional word embeddings …","url":["https://arxiv.org/pdf/1805.04715"]} +{"year":"2018","title":"Unsupervised Sense-Aware Hypernymy Extraction","authors":["D Ustalov, A Panchenko, C Biemann, SP Ponzetto - arXiv preprint arXiv:1809.06223, 2018"],"snippet":"… Recent approaches to hypernym extraction went into three directions: (1) unsupervised methods based on such huge corpora as CommonCrawl1 to ensure extraction coverage using Hearst (1992) patterns (Seitner et al …","url":["https://arxiv.org/pdf/1809.06223"]} +{"year":"2018","title":"User-Centric Ontology Population","authors":["K Clarkson, AL Gentile, D Gruhl, P Ristoski, J Terdiman…"],"snippet":"… Ristoski et al. [29] use standard word embeddings and graph embeddings to align instances extracted from the Common Crawl4 to the DBpedia ontology. The use of deep learning models has also been explored for this task. Dong et al …","url":["https://2018.eswc-conferences.org/wp-content/uploads/2018/02/ESWC2018_paper_10.pdf"]} +{"year":"2018","title":"Using a Stacked Residual LSTM Model for Sentiment Intensity Prediction","authors":["J Wang, B Peng, X Zhang - Neurocomputing, 2018"],"snippet":"… To enhance performance of LSTM layers, we also introduce a bi-directional strategy [34]. 
The word embeddings used in this experiment was respectively pre-trained on Common Crawl 840B 2 (English) and wiki dumps 3 (Chinese) by GloVe [55] …","url":["https://www.sciencedirect.com/science/article/pii/S0925231218311226"]} +{"year":"2018","title":"Using context to identify the language of face-saving","authors":["N Naderi, G Hirst"],"snippet":"… For all our Neural Network models, we initialized our word representations using the publicly available GloVe pre-trained word embeddings (Pennington et al., 2014)8 (300-dimensional vectors trained on Common Crawl data) …","url":["ftp://ftp.db.toronto.edu/public_html/cs/ftp/public_html/pub/gh/Naderi+Hirst-ArgMining-2018.pdf"]} +{"year":"2018","title":"Using Deep Learning For Title-Based Semantic Subject Indexing To Reach Competitive Performance to Full-Text","authors":["F Mai, L Galke, A Scherp - arXiv preprint arXiv:1801.06717, 2018"],"snippet":"… We adopt the preprocessing and tokenization scheme of Galke et. al [5]. For the LSTM and CNN, we use 300-dimensional pretrained word embeddings obtained from training GloVe [28] on Common Crawl with 840 billion tokens7. Out-of-vocabulary words are discarded …","url":["https://arxiv.org/pdf/1801.06717"]} +{"year":"2018","title":"Using Machine Learning to Detect Malicious Websites","authors":["R Elsaleh - 2018"],"snippet":"… Benign Data Benign data was obtained from the Common Crawl6. The Common Crawl is a massive, continuously updated collection of crawled websites available for download … PhishTank 36,485 65,000 VirusTotal 5,036 0 6 http://commoncrawl.org/ Page 21. 
12 …","url":["http://search.proquest.com/openview/da712bc2891c9bddbdc64e287a72dcc1/1?pq-origsite=gscholar&cbl=18750&diss=y"]} +{"year":"2018","title":"Using Monolingual Data in Neural Machine Translation: a Systematic Study","authors":["F Burlot, F Yvon - Proceedings of the Third Conference on Machine …, 2018"],"snippet":"… For German, we use samples from News-Commentary-11, Rapid, Common-Crawl (WMT 2017) and Multi- UN (see table 1). Bilingual BPE units (Sennrich et al., 2016b) are learned with 50k merge operations, yielding …","url":["http://www.aclweb.org/anthology/W18-6315"]} +{"year":"2018","title":"Using Wikipedia Edits in Low Resource Grammatical Error Correction","authors":["A Boyd"],"snippet":"… 2.4 Language Model For reranking, we train a language model on the first one billion lines (~12 billion tokens) of the deduplicated German Common Crawl corpus (Buck et al., 2014). 3 Method … 2014. N-gram counts and language models from the Common Crawl …","url":["http://noisy-text.github.io/2018/pdf/W-NUT201811.pdf"]} +{"year":"2018","title":"Using Word Embeddings for Information Retrieval: How Collection and Term Normalization Choices Affect Performance","authors":["D Roy, D Ganguly, S Bhatia, S Bedathur, M Mitra - 2018"],"snippet":"… In future, we plan to solidify these observations to offer general best practices for a range of different neural IR methods (eg DRRM[7]) as well as experiment using large datasets (eg Common Crawl). REFERENCES [1] Qingyao Ai …","url":["http://sumitbhatia.net/papers/cikm18.pdf"]} +{"year":"2018","title":"Utilizing Neural Networks and Linguistic Metadata for Early Detection of Depression Indications in Text Sequences","authors":["M Trotzek, S Koitka, CM Friedrich - arXiv preprint arXiv:1804.07000, 2018"],"snippet":"Page 1. SUBMITTED FOR PUBLICATION TO THE IEEE, 2018 1 Utilizing Neural Networks and Linguistic Metadata for Early Detection of Depression Indications in Text Sequences Marcel Trotzek, Sven Koitka, and Christoph M. 
Friedrich, Member, IEEE …","url":["https://arxiv.org/pdf/1804.07000"]} +{"year":"2018","title":"UWB at SemEval-2018 Task 10: Capturing Discriminative Attributes from Word Distributions","authors":["T Brychcín, T Hercig, J Steinberger, M Konkol - … of The 12th International Workshop on …, 2018"],"snippet":"… SS-GloVe 6B, Wikipedia + Gigaword 5 n = 300 62.0% 62.5% SS-GloVe 42B, Common Crawl n = 300 62.6% 62.7% SS-GloVe 840B, Common Crawl n = 300 62.1% 62.6 … SS-LDA 1-5B, Wikipedia n = 200 60.5% 63.1 …","url":["http://www.aclweb.org/anthology/S18-1153"]} +{"year":"2018","title":"Vecsigrafo: Corpus-based Word-Concept Embeddings","authors":["R Denaux, JM Gomez-Perez"],"snippet":"… To compare our embeddings to those trained on a very large corpus, we use pre-calculated GloVe embeddings that were trained on CommonCrawl7. Besides the text corpora, the tested embeddings con …","url":["http://semantic-web-journal.net/system/files/swj1864.pdf"]} +{"year":"2018","title":"Visual and affective grounding in language and mind","authors":["S De Deyne, DJ Navarro, G Collell, A Perfors"],"snippet":"… We also included an extremely large corpus consisting of 840 billion words from the Common Crawl project.5 As before, the language vectors were combined in a multimodal visual or affective model and the correlations were optimized by fitting values of β …","url":["https://compcogscisydney.org/publications/DeDeyneNCP_grounding.pdf"]} +{"year":"2018","title":"Visual Concept Selection with Textual Knowledge for Understanding Activities of Daily Living and Life Moment Retrieval","authors":["TH Tang12, MH Fu, HH Huang, KT Chen, HH Chen13"],"snippet":"… Page 8. GloVe [9] trained on Common Crawl with 840B tokens and ConceptNet Numberbatch [8]. 
The comparison in percentage dissimilarity [1] is shown in Table 1, where (G) and (N) denote GloVe and ConceptNet Numberbatch word vectors, respectively …","url":["http://ceur-ws.org/Vol-2125/paper_124.pdf"]} +{"year":"2018","title":"Visual Question Answering using Explicit Visual Attention","authors":["V Lioutas, N Passalis, A Tefas - Circuits and Systems (ISCAS), 2018 IEEE …, 2018"],"snippet":"… For extracting textual representations we used pre-trained GloVe embedding vectors (Common Crawl (42B tokens), 300d) [1]. Note that the GloVe embeddings were used only for initialization and then they were optimized during the training …","url":["https://ieeexplore.ieee.org/abstract/document/8351158/"]} +{"year":"2018","title":"Visual Relationship Detection Based on Guided Proposals and Semantic Knowledge Distillation","authors":["F Plesse, A Ginsca, B Delezoide, F Prêteux - arXiv preprint arXiv:1805.10802, 2018"],"snippet":"… iterations. The word embeddings used by the semantic knowledge introduced in Section 2.1 were obtained from the publicly available Glove model [19] trained on the Common Crawl corpus, consisting of 42B tokens. 4.2. Results …","url":["https://arxiv.org/pdf/1805.10802"]} +{"year":"2018","title":"Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs","authors":["S Kumar, Y Tsvetkov - arXiv preprint arXiv:1812.04616, 2018"],"snippet":"… target word embeddings for English and French on corpora constructed using WMT'16 (Bojar et al., 2016) monolingual datasets containing data from Europarl, News Commentary, News Crawl from 2007 to 2015 and News …","url":["https://arxiv.org/pdf/1812.04616"]} +{"year":"2018","title":"WBI at CLEF eHealth 2018 Task 1: Language-independent ICD-10 coding using multi-lingual embeddings and recurrent neural networks","authors":["J Ševa, M Sänger, U Leser - 2018"],"snippet":"… Each token is represented using pre-trained fastText5 word embeddings [4]. 
We utilize fastText embedding models for French, Italian and Hungarian trained on Common Crawl and Wikipedia articles6. Independently from …","url":["http://ceur-ws.org/Vol-2125/paper_118.pdf"]} +{"year":"2018","title":"Weaver: Deep Co-Encoding of Questions and Documents for Machine Reading","authors":["M Raison, PE Mazaré, R Das, A Bordes - arXiv preprint arXiv:1804.10490, 2018"],"snippet":"… Unless otherwise noted, we use 300dimensional FastText word embeddings trained on Common Crawl (Mikolov et al., 2017) and keep them fixed during training. Out-of-vocabulary words are represented with a fixed randomly initialized vector …","url":["https://arxiv.org/pdf/1804.10490"]} +{"year":"2018","title":"Web archives and Knowledge organisation","authors":["NO Finnemann, D Phil"],"snippet":"… Internet Archive, established in 1996, and Common Crawl (commoncrawl.org) established in 2007.12 Since 2006 the Internet Archive also provide a subscriptionbased archive service, Archive-it (archive-it.org) allowing anybody …","url":["https://curis.ku.dk/ws/files/189392223/Web_Archives_Manuscript.pdf"]} +{"year":"2018","title":"What can we learn from Semantic Tagging?","authors":["M Abdou, A Kulmizev, V Ravishankar, L Abzianidze… - arXiv preprint arXiv …, 2018"],"snippet":"… sets of experiments: we optimized using Adam with a learning rate of 0.00005; we weight the auxiliary semantic tagging loss with λ = 0.1; the pre-trained word embeddings we use are GloVe embeddings of dimension 300 trained …","url":["https://arxiv.org/pdf/1808.09716"]} +{"year":"2018","title":"What's Cached is Prologue: Reviewing Recent Web Archives Research Towards Supporting Scholarly Use","authors":["E Maemura"],"snippet":"… Internet Archive. Samar et al. (2016) analyze coverage of trending topics for the Netherlands in 2014 by comparing the National Library of the Netherlands' web archive to the Common Crawl dataset. Milligan et al. 
(2016) use …","url":["https://tspace.library.utoronto.ca/bitstream/1807/89426/1/Maemura%20AM2018%20Paper-Postprint.pdf"]} +{"year":"2018","title":"When data permutations are pathological: the case of neural natural language inference","authors":["N Schluter, D Varab - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… via an LSTM. Other hyperparameters. We use 300 dimensional GloVe embeddings trained on the Common Crawl 840B tokens dataset (Pennington et al., 2014), which remain fixed during training. Out of vocabulary (OOV …","url":["http://www.aclweb.org/anthology/D18-1534"]} +{"year":"2018","title":"Who gets held accountable when a facial recognition algorithm fails?","authors":["E Broad - IQ: The RIM Quarterly, 2018"],"snippet":"… In that experiment, the machine learning tool was trained on what's called a “common crawl” corpus: a list of 840 billion words in material published on the Web. Training AI on historical data can freeze our society in its current setting, or even turn it back …","url":["https://search.informit.com.au/documentSummary;dn=965944620566147;res=IELBus"]} +{"year":"2018","title":"WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse Supplementary Material","authors":["M Faruqui, E Pavlick, I Tenney, D Das"],"snippet":"… 3.3 Experimental Setting We use FastText (Mikolov et al., 2018; Grave et al., 2018)3 word vectors of length 300, originally trained on more than 600 billion word to- kens each from Common Crawl corpus for each language …","url":["http://anthology.aclweb.org/attachments/D/D18/D18-1028.Attachment.pdf"]} +{"year":"2018","title":"Wikipedia Text Reuse: Within and Without","authors":["M Alshomary, M Völske, T Licht, H Wachsmuth, B Stein… - arXiv preprint arXiv …, 2018"],"snippet":"… Abstract. 
We study text reuse related to Wikipedia at scale by compiling the first corpus of text reuse cases within Wikipedia as well as without (ie, reuse of Wikipedia text in a sample of the Common Crawl) … 3 …","url":["https://arxiv.org/pdf/1812.09221"]} +{"year":"2018","title":"WikiRef: Wikilinks as a route to recommending appropriate references for scientific Wikipedia pages","authors":["A Jana, P Kanojiya, A Mukherjee, P Goyal - arXiv preprint arXiv:1806.04092, 2018"],"snippet":"… proposed by Conneau et al. (2017). Note that, for applying this architecture we use the GloVe vectors trained on Common Crawl data (840B tokens)7as seeds for representing words in a document. We name these variants of …","url":["https://arxiv.org/pdf/1806.04092"]} +{"year":"2018","title":"Will it Blend? Blending Weak and Strong Labeled Data in a Neural Network for Argumentation Mining","authors":["E Shnarch, C Alzate, L Dankin, M Gleize, Y Hou… - Proceedings of the 56th …, 2018"],"snippet":"… maximum global norm of 1.0. Words are represented using the 300 dimensional GloVe embeddings learned on 840B Common Crawl tokens and are left untouched during training (Pennington et al., 2014). We note that even …","url":["http://www.aclweb.org/anthology/P18-2095"]} +{"year":"2018","title":"Word embedding for French natural language in healthcare: a comparative study","authors":["E DYNOMANT, R LELONG, B DAHAMNA…"],"snippet":"… [30] compared the three word embedding methods but the three models were trained on different datasets (Word2Vec on news data, while FastText and GloVe trained on more definitional data, Wikipedia and Common Crawl respectively) …","url":["https://preprints.jmir.org/preprint/download/12310/pdf"]} +{"year":"2018","title":"Word embeddings for monolingual and cross-lingual domain-specific information retrieval","authors":["C Wigder - 2018"],"snippet":"Page 1. 
Word embeddings for monolingual and cross-lingual domain-specific information retrieval CHAYA WIGDER Master in Computer Science Date: June 4, 2018 Supervisor: Johan Boye Examiner: Viggo Kann Swedish title …","url":["http://www.nada.kth.se/~ann/exjobb/chaya_wigder.pdf"]} +{"year":"2018","title":"Word Emotion Induction for Multiple Languages as a Deep Multi-Task Learning Problem","authors":["S Buechel, U Hahn"],"snippet":"… experiments, we rely on the following widely used, publicly available embedding models trained on very large corpora (summarized in Table 3): the SGNS model trained on the Google News corpus2 (GOOGLE), the …","url":["https://www.researchgate.net/profile/Sven_Buechel/publication/325019685_Word_Emotion_Induction_for_Multiple_Languages_as_a_Deep_Multi-Task_Learning_Problem/links/5af1b275aca272bf425628a9/Word-Emotion-Induction-for-Multiple-Languages-as-a-Deep-Multi-Task-Learning-Problem.pdf"]} +{"year":"2018","title":"Word2Bits-Quantized Word Vectors","authors":["M Lam - arXiv preprint arXiv:1803.05651, 2018"],"snippet":"… complete picture of the relative performance of the two. We would also like to train quantized word vectors on much larger corpuses of data such as Common Crawl or Google News. Another task is to validate that overfitting occurs …","url":["https://arxiv.org/pdf/1803.05651"]} +{"year":"2018","title":"XNLI: Evaluating Cross-lingual Sentence Representations","authors":["A Conneau, G Lample, R Rinott, A Williams… - arXiv preprint arXiv …, 2018"],"snippet":"… on the word translation task. In this paper, we pretrain our embeddings using the common-crawl word embeddings (Grave et al., 2018) aligned with the MUSE library of Conneau et al. (2018b). 4.2.2 Universal Multilingual Sentence …","url":["https://arxiv.org/pdf/1809.05053"]} +{"year":"2018","title":"YouTube AV 50K: an Annotated Corpus for Comments in Autonomous Vehicles","authors":["T Li, L Lin, M Choi, K Fu, S Gong, J Wang - arXiv preprint arXiv:1807.11227, 2018"],"snippet":"Page 1. 
YouTube AV 50K: an Annotated Corpus for Comments in Autonomous Vehicles Tao Li Department of Computer Science Purdue University West Lafayette, IN 47907 Email: taoli@purdue.edu Kaiming Fu Weldon School …","url":["https://arxiv.org/pdf/1807.11227"]} +{"year":"2018","title":"Zero-Shot Object Detection by Hybrid Region Embedding","authors":["B Demirel, RG Cinbis, N Ikizler-Cinbis - arXiv preprint arXiv:1805.06157, 2018"],"snippet":"… 4.2 Class Embeddings For the Fashion-ZSD dataset, we generate 300-dimensional GloVe word embedding vectors [31] for each class name, using Common Crawl Data1. For the class names that contain multiple words, we take the average of the word vectors …","url":["https://arxiv.org/pdf/1805.06157"]} +{"year":"2018","title":"Zewen at SemEval-2018 Task 1: An Ensemble Model for Affect Prediction in Tweets","authors":["Z Chi, H Huang, J Chen, H Wu, R Wei - … of The 12th International Workshop on …, 2018"],"snippet":"… GloVe (Pennington et al., 2014) trained by Common Crawl … the same model hyperparameters which are listed in Table 1 and Table 2. Also, the four methods use the same word em- beddings, which is a pre-trained …","url":["http://www.aclweb.org/anthology/S18-1046"]}