{"year":"2016","title":"A Case Study of Complex Graph Analysis in Distributed Memory: Implementation and Optimization","authors":["GM Slota, S Rajamanickam, K Madduri"],"snippet":"... Focusing on one of the largest publicly-available hyperlink graphs (the 2012 Web Data Commons graph1, which was in- turn extracted from the open Common Crawl web corpus2), we develop parallel ... 1http://webdatacommons.org/hyperlinkgraph/ 2http://commoncrawl.org ...","url":["http://www.personal.psu.edu/users/g/m/gms5016/pub/Dist-IPDPS16.pdf"]} {"year":"2016","title":"A Convolutional Encoder Model for Neural Machine Translation","authors":["J Gehring, M Auli, D Grangier, YN Dauphin - arXiv preprint arXiv:1611.02344, 2016"],"snippet":"... WMT'15 English-German. We use all available parallel training data, namely Europarl v7, Common Crawl and News Commentary v10 and apply the standard Moses tokenization to obtain 3.9M sentence pairs (Koehn et al., 2007). We report results on newstest2015. ...","url":["https://arxiv.org/pdf/1611.02344"]} {"year":"2016","title":"A Deep Fusion Model for Domain Adaptation in Phrase-based MT","authors":["N Durrani, S Joty, A Abdelali, H Sajjad"],"snippet":"... test-13 993 18K 17K test-13 1169 26K 28K Table 1: Statistics of the English-German and Arabic-English training corpora in terms of Sentences and Tokens (represented in millions). ep = Europarl, cc = Common Crawl, un = United Nations ...","url":["https://www.aclweb.org/anthology/C/C16/C16-1299.pdf"]} {"year":"2016","title":"A Large DataBase of Hypernymy Relations Extracted from the Web","authors":["J Seitner, C Bizer, K Eckert, S Faralli, R Meusel… - … of the 10th edition of the …, 2016"],"snippet":"... 3http://webdatacommons.org/framework/ 4http://commoncrawl.org ... The corpus is provided by the Common Crawl Foundation on AWS S3 as free download.6 The extraction of ... isadb/) and can be used to repeat the tuple extraction for different or newer Common Crawl releases. 
...","url":["http://webdatacommons.org/isadb/lrec2016.pdf"]} {"year":"2016","title":"A Maturity Model for Public Administration as Open Translation Data Providers","authors":["N Bel, ML Forcada, A Gómez-Pérez - arXiv preprint arXiv:1607.01990, 2016"],"snippet":"... There are techniques to mitigate the need of large quantities of parallel text, but most often at the expense of resulting translation quality. As a reference of the magnitude we can take as a standard corpus the Common Crawl corpus (Smith et al. ...","url":["http://arxiv.org/pdf/1607.01990"]} {"year":"2016","title":"A Neural Architecture Mimicking Humans End-to-End for Natural Language Inference","authors":["B Paria, KM Annervaz, A Dukkipati, A Chatterjee… - arXiv preprint arXiv: …, 2016"],"snippet":"... We used batch normalization [Ioffe and Szegedy, 2015] while training. The various model parameters used are mentioned in Table I. We experimented with both GloVe vectors trained1 on Common Crawl dataset as well as Word2Vec vector trained2 on Google news dataset. ...","url":["https://arxiv.org/pdf/1611.04741"]} {"year":"2016","title":"A practical guide to big data research in psychology.","authors":["EE Chen, SP Wojcik - Psychological Methods, 2016"],"snippet":"... as well as general collections, such as Amazon Web Services' Public Data Sets repository (AWS, nd, http://aws.amazon.com/public-data-sets/) which includes the 1000 Genomes Project, with full genomic sequences for 1,700 individuals, and the Common Crawl Corpus, with ...","url":["http://psycnet.apa.org/journals/met/21/4/458/"]} {"year":"2016","title":"A semantic based Web page classification strategy using multi-layered domain ontology","authors":["AI Saleh, MF Al Rahmawy, AE Abulwafa - World Wide Web, 2016"],"snippet":"Page 1. A semantic based Web page classification strategy using multi-layered domain ontology Ahmed I. Saleh1 & Mohammed F. Al Rahmawy2 & Arwa E. 
Abulwafa1 Received: 3 February 2016 /Revised: 13 August 2016 /Accepted ...","url":["http://link.springer.com/article/10.1007/s11280-016-0415-z"]} {"year":"2016","title":"A Story of Discrimination and Unfairness","authors":["A Caliskan-Islam, J Bryson, A Narayanan"],"snippet":"... power has led to high quality language models such as word2vec [7] and GloVe [8]. These language models, which consist of up to half a million unique words, are trained on billions of documents from sources such as Wikipedia, CommonCrawl, GoogleNews, and Twitter. ...","url":["https://www.securityweek2016.tu-darmstadt.de/fileadmin/user_upload/Group_securityweek2016/pets2016/9_a_story.pdf"]} {"year":"2016","title":"A Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs","authors":["S Longpre, S Pradhan, C Xiong, R Socher - arXiv preprint arXiv:1611.05104, 2016"],"snippet":"... All models in this paper used publicly available 300 dimensional word vectors, pre-trained using Glove on 840 million tokens of Common Crawl Data (Pennington et al., 2014), and both the word vectors and the subsequent weight matrices were trained using Adam with a ...","url":["https://arxiv.org/pdf/1611.05104"]} {"year":"2016","title":"A Web Application to Search a Large Repository of Taxonomic Relations from the Web","authors":["S Faralli, C Bizer, K Eckert, R Meusel, SP Ponzetto"],"snippet":"... 1 https://commoncrawl.org 2 http://webdatacommons.org/framework/ 3 https://www.mongodb. com ... of the two noun phrases involved in the isa relations into pre-modifiers, head and post-modifiers [6], as well as the frequency of occurrence of the relation in the Common Crawl...","url":["http://ceur-ws.org/Vol-1690/paper58.pdf"]} {"year":"2016","title":"Abu-MaTran at WMT 2016 Translation Task: Deep Learning, Morphological Segmentation and Tuning on Character Sequences","authors":["VM Sánchez-Cartagena, A Toral - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... 362 Page 2. 
Corpus Sentences (k) Words (M) Europarl v8 2 121 39.5 Common Crawl 113 995 2 416.7 News Crawl 2014–15 6 741 83.1 Table 1: Finnish monolingual data, after preprocessing, used to train the LMs of our SMT submission. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2322.pdf"]} {"year":"2016","title":"Action Classification via Concepts and Attributes","authors":["A Rosenfeld, S Ullman - arXiv preprint arXiv:1605.07824, 2016"],"snippet":"... To assign GloVe [19] vectors to object names or attributes, we use the pre-trained model on the Common-Crawl (42B) corpus, which contains a vocabulary of 1.9M words. We break up phrases into their words and assign to them their mean GloVe vector. ...","url":["http://arxiv.org/pdf/1605.07824"]} {"year":"2016","title":"Active Content-Based Crowdsourcing Task Selection","authors":["P Bansal, C Eickhoff, T Hofmann"],"snippet":"... Pennington et. al. [28] showed distributed text representations to capture more semantic information when the models are trained on Wikipedia text, as opposed to other large corpora such as the Common Crawl. This is attributed ...","url":["https://www.researchgate.net/profile/Piyush_Bansal4/publication/305442609_Active_Content-Based_Crowdsourcing_Task_Selection/links/578f416d08ae81b44671ad85.pdf"]} {"year":"2016","title":"Adverse Drug Reaction Classification With Deep Neural Networks","authors":["T Huynh, Y He, A Willis, S Rüger"],"snippet":"... 4http://commoncrawl.org/ 5Source code is available at https://github.com/trunghlt/ AdverseDrugReaction 879 Page 4. max pooling feedforward layer convolutional layer (a) Convolutional Neural Network (CNN) (b) Recurrent Convolutional Neural Network (RCNN) ...","url":["http://www.aclweb.org/anthology/C/C16/C16-1084.pdf"]} {"year":"2016","title":"All Your Data Are Belong to us. European Perspectives on Privacy Issues in 'Free'Online Machine Translation Services","authors":["P Kamocki, J O'Regan, M Stauch - Privacy and Identity Management. 
Time for a …, 2016"],"snippet":"... http://www.cnet.com/news/google-translate-now-serves-200-million-people-daily/. Accessed 23 Oct 2014. Smith, JR, Saint-Amand, H., Plamada, M., Koehn, P., Callison-Burch, C., Lopez, A.: Dirt cheap web-scale parallel text from the Common Crawl...","url":["http://link.springer.com/chapter/10.1007/978-3-319-41763-9_18"]} {"year":"2016","title":"An Analysis of Real-World XML Queries","authors":["P Hlísta, I Holubová - OTM Confederated International Conferences\" On the …, 2016"],"snippet":"... crawler. Or, there is another option – Common Crawl [1], an open repository of web crawled data that is universally accessible and analyzable, containing petabytes of data collected over the last 7 years. ... 3.1 Common Crawl. We ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-48472-3_36"]} {"year":"2016","title":"An Attentive Neural Architecture for Fine-grained Entity Type Classification","authors":["S Shimaoka, P Stenetorp, K Inui, S Riedel - arXiv preprint arXiv:1604.05525, 2016"],"snippet":"... appearing in the training set. Specifically, we used the freely available 300 dimensional cased word embeddings trained on 840 billion tokens from the Common Crawl supplied by Pennington et al. (2014). As embeddings ...","url":["http://arxiv.org/pdf/1604.05525"]} {"year":"2016","title":"Analysing Structured Scholarly Data Embedded in Web Pages","authors":["P Sahoo, U Gadiraju, R Yu, S Saha, S Dietze"],"snippet":"... the following section. 2.2 Methodology and Dataset For our investigation, we use the Web Data Commons (WDC) dataset, being the largest available corpus of markup, extracted from the Common Crawl. Of the crawled web ...","url":["http://cs.unibo.it/save-sd/2016/papers/pdf/sahoo-savesd2016.pdf"]} {"year":"2016","title":"ArabicWeb16: A New Crawl for Today's Arabic Web","authors":["R Suwaileh, M Kutlu, N Fathima, T Elsayed, M Lease"],"snippet":"... English content dominates the crawl [12].
While Common Crawl could be mined to identify and ex- tract a useful Arabic subset akin to ArClueWeb09, this would address only recency, not coverage. To address the above concerns ...","url":["http://www.ischool.utexas.edu/~ml/papers/sigir16-arabicweb.pdf"]} {"year":"2016","title":"Ask Your Neurons: A Deep Learning Approach to Visual Question Answering","authors":["M Malinowski, M Rohrbach, M Fritz - arXiv preprint arXiv:1605.02697, 2016"],"snippet":"Page 1. Noname manuscript Ask Your Neurons: A Deep Learning Approach to Visual Question Answering Mateusz Malinowski · Marcus Rohrbach · Mario Fritz Abstract We address a question answering task on realworld images that is set up as a Visual Turing Test. ...","url":["http://arxiv.org/pdf/1605.02697"]} {"year":"2016","title":"Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations","authors":["P Blair, Y Merhav, J Barry - arXiv preprint arXiv:1611.01547, 2016"],"snippet":"... (2013a), the 840-billion token Common Crawl corpus-trained GloVe model released by Pennington et al. (2014), and the English, Spanish, German, Japanese, and Chinese MultiCCA vectors5 from Ammar et al. ... Outliers OOV GloVe Common Crawl 75.53 38.57 5 6.33 5.70 ...","url":["https://arxiv.org/pdf/1611.01547"]} {"year":"2016","title":"Automated Haiku Generation based on Word Vector Models","authors":["AF Aji"],"snippet":"... and Page 28. 16 Chapter 3. Design Common Crawl data. Those data also come with various vector dimension size from 50-D to 300-D. Those pre-trained word vectors are used directly for this project as they take considerably ...","url":["http://project-archive.inf.ed.ac.uk/msc/20150275/msc_proj.pdf"]} {"year":"2016","title":"Automatic Construction of Morphologically Motivated Translation Models for Highly Inflected, Low-Resource Languages","authors":["J Hewitt, M Post, D Yarowsky - AMTA 2016, Vol., 2016"],"snippet":"... 
sentences of Europarl (Koehn, 2005), SETIMES3 (Tyers and Alperen, 2010), extracted from OPUS (Tiedemann, 2009), or Common Crawl (Bojar et al ... Turkish, we train models on 29000 sentences of biblical data with 1000 and 20000 sentences of CommonCrawl and SETIMES ...","url":["https://www.researchgate.net/profile/John_Ortega3/publication/309765044_Fuzzy-match_repair_using_black-box_machine_translation_systems_what_can_be_expected/links/5822496f08ae7ea5be6af317.pdf#page=183"]} {"year":"2016","title":"B1A3D2 LUC@ WMT 2016: a Bilingual1 Document2 Alignment3 Platform Based on Lucene","authors":["L Jakubina, P Langlais"],"snippet":"... 2013. Dirt cheap web-scale parallel text from the common crawl. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1374–1383. Jakob Uszkoreit, Jay M. Ponte, Ashok C. Popat, and Moshe Dubiner. 2010. ...","url":["http://www-etud.iro.umontreal.ca/~jakubinl/publication/badluc_jaklan_wmt16_stbad.pdf"]} {"year":"2016","title":"Big Data Facilitation and Management","authors":["J Fagerli"],"snippet":"Page 1. Faculty of Science and Technology Department of Computer Science Big Data Facilitation and Management A requirements analysis and initial evaluation of a big biological data processing service — Jarl Fagerli INF ...","url":["http://bdps.cs.uit.no/papers/capstone-jarl.pdf"]} {"year":"2016","title":"Bootstrap, Review, Decode: Using Out-of-Domain Textual Data to Improve Image Captioning","authors":["W Chen, A Lucchi, T Hofmann - arXiv preprint arXiv:1611.05321, 2016"],"snippet":"... We report the performance of our model and competing methods in terms of six standard metrics used for image captioning as described in [4]. During the bootstrap learning phase, we use both the 20082010 News-CommonCrawl and Europarl corpus 2 as out- of-domain ...","url":["https://arxiv.org/pdf/1611.05321"]} {"year":"2016","title":"bot. 
zen@ EVALITA 2016-A minimally-deep learning PoS-tagger (trained for Italian Tweets)","authors":["EW Stemle"],"snippet":"... The data was only distributed to the task participants. 4.1.4 C4Corpus (w2v) c4corpus8 is a full documents Italian Web corpus that has been extracted from CommonCrawl, the largest publicly available general Web crawl to date. ...","url":["http://ceur-ws.org/Vol-1749/paper_020.pdf"]} {"year":"2016","title":"Building mutually beneficial relationships between question retrieval and answer ranking to improve performance of community question answering","authors":["M Lan, G Wu, C Xiao, Y Wu, J Wu - Neural Networks (IJCNN), 2016 International Joint …, 2016"],"snippet":"... The first is the 300-dimensional version of word2vec [23] vectors, which is trained on part of Google News dataset (about 100 billion words). The second is 300-dimensional Glove vectors [24] which is trained on 840 billion tokens of Common Crawl data. ...","url":["http://ieeexplore.ieee.org/abstract/document/7727286/"]} {"year":"2016","title":"C4Corpus: Multilingual Web-size corpus with free license","authors":["I Habernal, O Zayed, I Gurevych"],"snippet":"... documents. Our project is entitled C4Corpus, an abbreviation of Creative Commons from Common Crawl Corpus and is hosted under the DKPro umbrella4 at https:// github.com/dkpro/dkpro-c4corpus under ASL 2.0 license. ...","url":["https://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2016/lrec2016-c4corpus-camera-ready.pdf"]} {"year":"2016","title":"Capturing Pragmatic Knowledge in Article Usage Prediction using LSTMs","authors":["J Kabbara, Y Feng, JCK Cheung"],"snippet":"... GloVe: The embedding is initialized by the global vectors Pennington et al. (2014) that are trained on the Common Crawl corpus (840 billion tokens). Both word2vec and GloVe word embeddings consist of 300 dimensions. 
...","url":["https://www.aclweb.org/anthology/C/C16/C16-1247.pdf"]} {"year":"2016","title":"Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution","authors":["S Ruder, P Ghaffari, JG Breslin - arXiv preprint arXiv:1609.06686, 2016"],"snippet":"... σ: Standard deviation of document number. d: Median document size (tokens). All word embedding channels are initialized with 300-dimensional GloVe vectors (Pennington et al., 2014) trained on 840B tokens of the Common Crawl corpus11. ...","url":["http://arxiv.org/pdf/1609.06686"]} {"year":"2016","title":"Citation Classification for Behavioral Analysis of a Scientific Field","authors":["D Jurgens, S Kumar, R Hoover, D McFarland… - arXiv preprint arXiv: …, 2016"],"snippet":"... The classifier is implemented using SciKit (Pedregosa et al., 2011) and syntactic processing was done using CoreNLP (Manning et al., 2014). Selectional preferences used pretrained 300-dimensional vectors from the 840B token Common Crawl (Pennington et al., 2014). ...","url":["http://arxiv.org/pdf/1609.00435"]} {"year":"2016","title":"CNRC at SemEval-2016 Task 1: Experiments in crosslingual semantic textual similarity","authors":["C Lo, C Goutte, M Simard - Proceedings of SemEval, 2016"],"snippet":"... The system was 3We use the glm function in R. 669 Page 3. trained using standard resources – Europarl, Common Crawl (CC) and News & Commentary (NC) – totaling approximately 110M words in each language. Phrase ...","url":["http://anthology.aclweb.org/S/S16/S16-1102.pdf"]} {"year":"2016","title":"Commonsense Knowledge Base Completion","authors":["X Li, A Taheri, L Tu, K Gimpel"],"snippet":"... We use the GloVe (Pennington et al., 2014) embeddings trained on 840 billion tokens of Common Crawl web text and the PARAGRAM-SimLex embeddings of Wieting et al. (2015), which were tuned to have strong performance on the SimLex-999 task (Hill et al., 2015). 
...","url":["http://ttic.uchicago.edu/~kgimpel/papers/li+etal.acl16.pdf"]} {"year":"2016","title":"Comparing Topic Coverage in Breadth-First and Depth-First Crawls Using Anchor Texts","authors":["AP de Vries - Research and Advanced Technology for Digital …, 2016","T Samar, MC Traub, J van Ossenbruggen, AP de Vries - International Conference on …, 2016"],"snippet":"... nl domain, with the goal to crawl websites as completes as possible. The second crawl was collected by the Common Crawl foundation using a breadth-first strategy on the entire Web, this strategy focuses on discovering as many links as possible. ...","url":["http://books.google.de/books?hl=en&lr=lang_en&id=VmTUDAAAQBAJ&oi=fnd&pg=PA133&dq=%22common+crawl%22&ots=STVgD4vke3&sig=Gr5Q94wWtvFSfT_EYf1cQGP-Mrg","http://link.springer.com/chapter/10.1007/978-3-319-43997-6_11"]} {"year":"2016","title":"COMPARISON OF DISTRIBUTIONAL SEMANTIC MODELS FOR RECOGNIZING TEXTUAL ENTAILMENT.","authors":["Y WIBISONO, DWIH WIDYANTORO… - Journal of Theoretical & …, 2016"],"snippet":"... To our knowledge, this paper is the first study of various DSM on RTE. We found that DSM improves entailment accuracy, with the best DSM is GloVe trained with 42 billion tokens taken from Common Crawl corpus. ... Glove_42B Common Crawl 42 billion tokens ...","url":["http://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=19928645&AN=120026939&h=neaFgJXHcv5SjyzIFWIJp046Uq5Cr3qfiPCmXc4DYTEi9kN6SN9YQqm1CUdjmDg%2BwZzzXWI6ftJLniJiB6Go1g%3D%3D&crl=c"]} {"year":"2016","title":"ConceptNet 5.5: An Open Multilingual Graph of General Knowledge","authors":["R Speer, J Chin, C Havasi - arXiv preprint arXiv:1612.03975, 2016"],"snippet":"... 2013), and the GloVe 1.2 embeddings trained on 840 billion words of the Common Crawl (Pennington, Socher, and Manning 2014). These matrices are downloadable, and we will be using them both as a point of comparison and as inputs to an ensemble. 
...","url":["https://arxiv.org/pdf/1612.03975"]} {"year":"2016","title":"Content Selection through Paraphrase Detection: Capturing different Semantic Realisations of the Same Idea","authors":["E Lloret, C Gardent - WebNLG 2016, 2016"],"snippet":"... either sentences or pred-arg structures, GLoVe pre-trained WE vectors (Pennington et al., 2014) were used, specifically the ones derived from Wikipedia 2014+ Gi- gaword 5 corpora, containing around 6 billion to- kens; and the ones derived from a Common Crawl, with 840 ...","url":["https://webnlg2016.sciencesconf.org/data/pages/book.pdf#page=33"]} {"year":"2016","title":"Corporate Smart Content Evaluation","authors":["R Schäfermeier, AA Todor, A La Fleur, A Hasan… - 2016"],"snippet":"Page 1. Fraunhofer FOKUS FRAUNHOFER INSTITUTE FOR OPEN COMMUNICATION SYSTEMS FOKUS STUDY – CORPORATE SMART CONTENT EVALUATION Page 2. Page 3. STUDY – CORPORATE SMART CONTENT EVALUATION ...","url":["http://www.diss.fu-berlin.de/docs/servlets/MCRFileNodeServlet/FUDOCS_derivate_000000006523/CSCStudie2016.pdf"]} {"year":"2016","title":"Crawl and crowd to bring machine translation to under-resourced languages","authors":["A Toral, M Esplá-Gomis, F Klubička, N Ljubešić… - Language Resources and …"],"snippet":"... Wikipedia. The CommonCrawl project 5 should be mentioned here as it allows researchers to traverse a frequently updated crawl of the whole web in search of specific data, and therefore bypass the data collection process. ...","url":["http://link.springer.com/article/10.1007/s10579-016-9363-6"]} {"year":"2016","title":"Cross Site Product Page Classification with Supervised Machine Learning","authors":["J HUSS"],"snippet":"... An other data set used often is Common Crawl [1], which is a possible source that contain product specification pages. The data of Common Crawl is not complete with HTML-source code and it was collected in 2013, which creates many dead links. 
...","url":["http://www.nada.kth.se/~ann/exjobb/jakob_huss.pdf"]} {"year":"2016","title":"CSA++: Fast Pattern Search for Large Alphabets","authors":["S Gog, A Moffat, M Petri - arXiv preprint arXiv:1605.05404, 2016"],"snippet":"... The latter were extracted from a sentence-parsed prefix of the German and Spanish sections of the CommonCrawl5. The four 200 ... translation process described by Shareghi et al., corresponding to 40,000 sentences randomly selected from the German part of Common Crawl...","url":["http://arxiv.org/pdf/1605.05404"]} {"year":"2016","title":"CUNI-LMU Submissions in WMT2016: Chimera Constrained and Beaten","authors":["A Tamchyna, R Sudarikov, O Bojar, A Fraser - Proceedings of the First Conference on …, 2016"],"snippet":"... tag). Our input is factored and contains the form, lemma, morphological tag, 1http://commoncrawl.org/ 387 Page 4. lemma ... 2015. The second LM only uses 4-grams but additionally contains the full Common Crawl corpus. We ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2325.pdf"]} {"year":"2016","title":"D6. 3: Improved Corpus-based Approaches","authors":["CP Escartin, LS Torres, CO UoW, AZ UMA, S Pal - 2016"],"snippet":"... Based on this system and the data retrieved from Common Crawl, several websites were identified as possible candidates for crawling. ... 8http://commoncrawl.org/ 9For a description of this tool, see Section 3.1.2 in this Deliverable. 6 Page 9. ...","url":["http://expert-itn.eu/sites/default/files/outputs/expert_d6.3_20160921_improved_corpus-based_approaches.pdf"]} {"year":"2016","title":"Data Selection for IT Texts using Paragraph Vector","authors":["MS Duma, W Menzel - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... models/doc2vec.html 3http://commoncrawl.org/ 4https://github.com/melix/jlangdetect 5-gram LMs using the SRILM toolkit (Stolcke, 2002) with Kneser-Ney discounting (Kneser and Ney, 1995) on the target side of the Commoncrawl and IT corpora. 
...","url":["http://www.aclweb.org/anthology/W/W16/W16-2331.pdf"]} {"year":"2016","title":"David W. Embley, Mukkai S. Krishnamoorthy, George Nagy &","authors":["S Seth"],"snippet":"... tabulated data on the web even before Big Data became a byword [1]. Assuming “that an average table contains on average 50 facts it is possible to extract more than 600 billion facts taking into account only the 12 billion sample tables found in the Common Crawl” [2]. Tables ...","url":["https://www.ecse.rpi.edu/~nagy/PDF_chrono/2016_Converting%20Web%20Tables,IJDAR,%2010.1007_s10032-016-0259-1.pdf"]} {"year":"2016","title":"Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation","authors":["J Zhou, Y Cao, X Wang, P Li, W Xu - arXiv preprint arXiv:1606.04199, 2016"],"snippet":"... 4.1 Data sets For both tasks, we use the full WMT'14 parallel corpus as our training data. The detailed data sets are listed below: • English-to-French: Europarl v7, Common Crawl, UN, News Commentary, Gigaword • English-to-German: Europarl v7, Common