{"year":"2024","title":"1-Diffractor: Efficient and Utility-Preserving Text Obfuscation Leveraging Word-Level Metric Differential Privacy","authors":["S Meisenbacher, M Chevli, F Matthes - arXiv preprint arXiv:2405.01678, 2024"],"snippet":"The study of privacy-preserving Natural Language Processing (NLP) has gained rising attention in recent years. One promising avenue studies the integration of Differential Privacy in NLP, which has brought about innovative methods in a variety …","url":["https://arxiv.org/pdf/2405.01678"]} {"year":"2024","title":"101 Billion Arabic Tokens Dataset","authors":["M Aloui, H Chouikhi, G Chaabane, H Kchaou…"],"snippet":"… , we heavily depend on Common Crawl (1) as our primary source of web-scraped content. Common Crawl provides an accessible web archive … a Dataset Pipeline expressed in the following figure (1) aimed at cleaning up the web-extracted text …","url":["https://www.researchgate.net/profile/Hasna-Chouikhi/publication/380396476_101_Billion_Arabic_Words_Dataset/links/663a610306ea3d0b742f3141/101-Billion-Arabic-Words-Dataset.pdf"]} {"year":"2024","title":"101 Billion Arabic Words Dataset","authors":["M Aloui, H Chouikhi, G Chaabane, H Kchaou… - arXiv preprint arXiv …, 2024"],"snippet":"… We undertook a large-scale data mining project, extracting a substantial volume of text from the Common Crawl WET files, specifically targeting Arabic content. The extracted data underwent a rigorous cleaning and deduplication process, using …","url":["https://arxiv.org/pdf/2405.01590"]} {"year":"2024","title":"2 Text Preprocessing Method","authors":["J Peng, S Huo"],"snippet":"This paper uses easy data augmentation (EDA) and back translation for text enhancement. After text enhancement, the number of samples in the dataset can be significantly increased, which is helpful for model parameters through training and …","url":["https://journals.riverpublishers.com/index.php/JWE/article/download/25003/19983?inline=1"]} {"year":"2024","title":"4 CHATGPT IN ACADEMIA: AN IN-DEPTH EXPLORATION OF STUDENT VIEWS–PROS AND CONS","authors":["D Rajkumar, P Murugeswari, M Karthigaieswari - TRANSDISCIPLINARY THREADS"],"snippet":"… GPT-3’s training data came from five existing datasets: • Common Crawl: A collection of text pulled from billions of web pages containing trillions of words. OpenAI filtered it for high-quality reference material only. • WebText2: OpenAI …","url":["https://www.researchgate.net/profile/Namrata-Shrivastava/publication/378011746_TRANSDISCIPLINARY_THREADS_CRAFTING_THE_FUTURE_THROUGH_MULTIDISCIPLINARY_RESEARCH_VOLUME_-2_httpswwwamazonindp9392917295ref_cm_sw_r_apan_dp_DF7WRSTHCX72MEDVWPB2languageen-IN/links/65c33ae71e1ec12eff78f577/TRANSDISCIPLINARY-THREADS-CRAFTING-THE-FUTURE-THROUGH-MULTIDISCIPLINARY-RESEARCH-VOLUME-2-https-wwwamazonin-dp-9392917295ref-cm-sw-r-apan-dp-DF7WRSTHCX72MEDVWPB2-languageen-IN.pdf#page=34"]} {"year":"2024","title":"6 Performance Analysis","authors":["A Safi, S Singh, G Kaur, T Kaur - Intelligent Security Solutions for Cyber-Physical …, 2024"],"snippet":"… The dataset used in this research was gathered from Alexa and the Common Crawl archive. 
… from a variety of sources, including Common Crawl, Alexa, and PhishTank for the phishing … They used three datasets, including real URLs …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=1VUIEQAAQBAJ&oi=fnd&pg=PA89&dq=commoncrawl&ots=EsWoOABs6N&sig=ot4zCN0ozxE0H1-OW-AyC-mvYJA"]} {"year":"2024","title":"\" Previously on...\" From Recaps to Story Summarization","authors":["AK Singh, D Srivastava, M Tapaswi - arXiv preprint arXiv:2405.11487, 2024"],"snippet":"We introduce multimodal story summarization by leveraging TV episode recaps - short video sequences interweaving key story moments from previous episodes to bring viewers up to speed. We propose PlotSnap, a dataset featuring two crime …","url":["https://arxiv.org/pdf/2405.11487"]} {"year":"2024","title":"“More than Words”: A Legal Approach to the Risks of Commercial Chatbots Powered by Generative Artificial Intelligence","authors":["S Migliorini - European Journal of Risk Regulation, 2024"],"snippet":"… In the training of ChatGPT, this “clean”19 version of the Common Crawl has been complemented and filtered with the addition of a few … also that the training gave more prominence to the “clean” Common Crawl.Although this explanation may be …","url":["https://www.cambridge.org/core/services/aop-cambridge-core/content/view/4EB4DD9997211B81283EF7B34299E254/S1867299X24000047a.pdf/div-class-title-more-than-words-a-legal-approach-to-the-risks-of-commercial-chatbots-powered-by-generative-artificial-intelligence-div.pdf"]} {"year":"2024","title":"A Balancing Act: Data Protection Compliance of Artificial Intelligence","authors":["M Bartels - GRUR International, 2024"],"snippet":"Neural network based generative artificial intelligence (GenAI) systems, in particular large language models (LLMs), hold huge potential for economic, scientific, and artistic progress. However, they are also the subject of ongoing fierce data protection …","url":["https://academic.oup.com/grurint/advance-article/doi/10.1093/grurint/ikae060/7671464"]} {"year":"2024","title":"A bilingual benchmark for evaluating large language models","authors":["M Alkaoud - PeerJ Computer Science, 2024"],"snippet":"This work introduces a new benchmark for the bilingual evaluation of large language models (LLMs) in English and Arabic. While LLMs have transformed various fields, their evaluation in Arabic remains limited. This work addresses this …","url":["https://peerj.com/articles/cs-1893/"]} {"year":"2024","title":"A Canary in the AI Coal Mine: American Jews May Be Disproportionately Harmed by Intellectual Property Dispossession in Large Language Model Training","authors":["H Precel, A McDonald, B Hecht, N Vincent - arXiv preprint arXiv:2403.13073, 2024"],"snippet":"Systemic property dispossession from minority groups has often been carried out in the name of technological progress. In this paper, we identify evidence that the current paradigm of large language models (LLMs) likely continues this long history …","url":["https://arxiv.org/pdf/2403.13073"]} {"year":"2024","title":"A Comparative Analysis of Text Embedding Models for Bug Report Semantic Similarity","authors":["A Patil, K Han, A Jadon - 2024 11th International Conference on Signal …, 2024"],"snippet":"… For Fasttext, the ”crawl-300d-2M-subword” model was employed, which consisted of 2 million word vectors trained with subword information on the Common Crawl dataset, encompassing 600 billion tokens. 
In the case of Doc2Vec, the ”GoogleNews-vectors-negative300” …","url":["https://ieeexplore.ieee.org/abstract/document/10512000/"]} {"year":"2024","title":"A Comparative Analysis of Word Embeddings Techniques for Italian News Categorization","authors":["F Rollo, G Bonisoli, L Po - IEEE Access, 2024"],"snippet":"Text categorization remains a formidable challenge in information retrieval, requiring effective strategies, especially when applied to low-resource languages such as Italian. This paper delves into the intricacies of categorizing Italian news articles …","url":["https://ieeexplore.ieee.org/iel7/6287639/10380310/10439164.pdf"]} {"year":"2024","title":"A Comparative Study of ChatGPT and Seq2Seq Chatbot for Effective Students Advising","authors":["SK Assayed, M Alkhatib, K Shaalan - ChatGPT and Global Higher Education: Using …, 2024"],"snippet":"Artificial Intelligence (AI) chatbots are increasingly becoming a part of students’ lives, from school learning to university admissions and orientations. AI chatbots are finding ways to facilitate and improve learning experiences. Moreover, AI chatbots …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=Xy4KEQAAQBAJ&oi=fnd&pg=PA148&dq=commoncrawl&ots=xiIl7ODrch&sig=SjnxTdqq-f8-Iy1AhxxUa2Nt0lA"]} {"year":"2024","title":"A comparative study of cross-lingual sentiment analysis","authors":["P Přibáň, J Šmíd, J Steinberger, A Mištera - Expert Systems with Applications, 2024"],"snippet":"This paper presents a detailed comparative study of the zero-shot cross-lingual sentiment analysis. Namely, we use modern multilingual Transformer-based models and linear transformations combined with CNN and LSTM neural networks. We …","url":["https://www.sciencedirect.com/science/article/pii/S095741742400112X"]} {"year":"2024","title":"A Comparison of Pretrained Models for Classifying Issue Reports","authors":["J Heo, G Kwon, C Kwak, S Lee - IEEE Access, 2024"],"snippet":"Issues are evolving requirements which are the main factor that increase the cost of software evolution. To help developers manage issues, GitHub provides issue labeling mechanisms in issue management systems. However, manually labeling …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10546475.pdf"]} {"year":"2024","title":"A Comprehensive Review on Large Language Models","authors":["A Yadav - … Software Engineering Through AI, Federated Learning …, 2024"],"snippet":"In the realm of computer science and language, large language models (LLMs) stand out as remarkable tools of artificial intelligence (AI). Proficient in deciphering intricate language nuances, LLMs offer sensible responses and find applications in …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=sd0FEQAAQBAJ&oi=fnd&pg=PA17&dq=commoncrawl&ots=Mfcu5iTaq4&sig=eYkY6zLRcF4QgfOXXfp_HUxZO_4"]} {"year":"2024","title":"A contemporary review on chatbots, AI-powered virtual conversational agents, ChatGPT: Applications, open challenges and future research directions","authors":["A Casheekar, A Lahiri, K Rath, KS Prabhakar… - Computer Science Review, 2024"],"snippet":"This review paper offers an in-depth analysis of AI-powered virtual conversational agents, specifically focusing on OpenAI’s ChatGPT. 
The main contributions of this paper are threefold: (i) an exhaustive review of prior literature on chatbots, (ii) a …","url":["https://www.sciencedirect.com/science/article/pii/S1574013724000169"]} {"year":"2024","title":"A Copious Void: Rhetoric as Artificial Intelligence 1.0","authors":["A Hallsby - Rhetoric Society Quarterly, 2024"],"snippet":"Rhetoric is a trace retained in and by artificial intelligence (AI) technologies. This concept illuminates how rhetoric and AI have faced issues related to information abundance, entrenched social inequalities, discriminatory biases, and the …","url":["https://www.tandfonline.com/doi/pdf/10.1080/02773945.2024.2343265"]} {"year":"2024","title":"A Critical Analysis of the Largest Source for Generative AI Training Data: Common Crawl","authors":["S Baack - The 2024 ACM Conference on Fairness, Accountability …, 2024"],"snippet":"… among LLM builders about the implications of using Common Crawl’s data. This paper discusses what Common Crawl’s popularity for LLM development means … Our qualitative analysis is based on indepth interviews with Common Crawl staffers …","url":["https://dl.acm.org/doi/abs/10.1145/3630106.3659033"]} {"year":"2024","title":"A critical examination of document-level machine translation systems","authors":["P Nayak - 2024"],"snippet":"The need for accurate and effective translation cannot be overstated in an increasingly globalised world where communication is paramount. Bridging language barriers is important for promoting understanding and cooperation among …","url":["https://doras.dcu.ie/29345/1/19213962_final_thesis_upload.pdf"]} {"year":"2024","title":"A CURATEd CATalog: Rethinking the Extraction of Pretraining Corpora for Mid-Resourced Languages","authors":["J Palomar-Giner, JJ Saiz, F Espuña, M Mina, S Da Dalt… - Proceedings of the 2024 …, 2024"],"snippet":"We present and describe two language resources in this paper: CATalog 1.0, the largest text corpus in Catalan to date, and CURATE (Corpus Utility for RAting TExt), a modular, parallelizable pipeline used for processing and scoring documents …","url":["https://aclanthology.org/2024.lrec-main.31.pdf"]} {"year":"2024","title":"A Dataset for Pharmacovigilance in German, French, and Japanese: Annotating Adverse Drug Reactions across Languages","authors":["L Raithel, HS Yeh, S Yada, C Grouin, T Lavergne… - arXiv preprint arXiv …, 2024"],"snippet":"User-generated data sources have gained significance in uncovering Adverse Drug Reactions (ADRs), with an increasing number of discussions occurring in the digital world. However, the existing clinical corpora predominantly revolve around scientific …","url":["https://arxiv.org/pdf/2403.18336"]} {"year":"2024","title":"A Dataset for the Detection of Dehumanizing Language","authors":["P Engelmann, PB Trolle, C Hardmeier - arXiv preprint arXiv:2402.08764, 2024"],"snippet":"… The Common Crawl is an open repository of web crawled data, which features crawls from all … As the Common Crawl includes several petabytes of data in total, we have selectively ex… As we limit ourselves to English, the Common Crawl data …","url":["https://arxiv.org/pdf/2402.08764"]} {"year":"2024","title":"A Decade's Battle on Dataset Bias: Are We There Yet?","authors":["Z Liu, K He - arXiv preprint arXiv:2403.08632, 2024"],"snippet":"We revisit the \"dataset classification\" experiment suggested by Torralba and Efros a decade ago, in the new era with large-scale, diverse, and hopefully less biased datasets as well as more capable neural network architectures. 
Surprisingly, we …","url":["https://arxiv.org/pdf/2403.08632"]} {"year":"2024","title":"A field guide to using LLMs for online conflict analysis","authors":["E Borra, P Bollini, L Caroleo, B Cerigo, R Dubèl, F Gaw…"],"snippet":"… Two ways in which we studied LLMs was by looking at some of their open-source datasets, which one can sift through in WebText2, CommonCrawl and Wikipedia. Another is by studying autocompletions as perpetuations of standardized knowledge …","url":["https://www.digitalmethods.net/Dmi/FieldGuideToUsingLLMs"]} {"year":"2024","title":"A Framework for Embedding Entities in a Textual Narrative: a Case Study on Les Misérables","authors":["G Guex - 2023"],"snippet":"In this article, we propose a general and flexible framework in order to study narrative entities found in a literary work. This framework is exposed starting from a broad perspective, consisting in how to segment the work into textual units and …","url":["https://ceur-ws.org/Vol-3602/paper4.pdf"]} {"year":"2024","title":"A Framework for Preparing a Balanced and Comprehensive Phishing Dataset","authors":["I Skula, M Kvet - IEEE Access, 2024"],"snippet":"… • Choon Lin Tan (2018) [19] - dataset contains 5000 legitimate records sourced from Alexa and Common Crawl and 5000 phishing records from PhishTank and … Common Crawl (commoncrawl.org)- is a humongous web archive collected by …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10497090.pdf"]} {"year":"2024","title":"A Framework to Construct Financial Causality Knowledge Graph from Text","authors":["Z Xu, H Takamura, R Ichise - 2024 IEEE 18th International Conference on Semantic …, 2024"],"snippet":"Causality analysis holds a prominent role in finance, and the presentation of causality could offer valuable insights for risk mitigation, investment decisions, and portfolio optimization. Recent research has extensively investigated the identification …","url":["https://www.computer.org/csdl/proceedings-article/icsc/2024/853500a057/1Vwkcfe9osw"]} {"year":"2024","title":"A Galician Corpus for Misogyny Detection Online","authors":["LM Álvarez-Crespo, LM Castro - Proceedings of the 16th International Conference on …, 2024"],"snippet":"Social networks are virtual spaces where millions of people share ideas, opinions, and experiences. However, this broad social interaction also exposes negative and harmful behaviors, such as harassment and misogyny. Misogyny, particularly, is a …","url":["https://aclanthology.org/2024.propor-1.3.pdf"]} {"year":"2024","title":"A Generalize Hardware Debugging Approach for Large Language Models Semi-Syntectic Datasets","authors":["W Fu, S Li, Y Zhao, K Yang, X Zhang, Y Jin, X Guo"],"snippet":"… C4 specifically employs the Common Crawl [44] dataset, focusing on deduplication, removing non-English content, and filtering out offensive language. 
Redpajama combines sources from Wikipedia and StackExchange, similar to C4 …","url":["https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.171527592.25632661"]} {"year":"2024","title":"A Holistic Review on Detection of Malicious Browser Extensions and Links using Deep Learning","authors":["T Zonta, M Sathiyanarayanan - 2024 IEEE 3rd International Conference on AI in …, 2024"],"snippet":"… Common Crawl [36] is a website which crawls over the web and suggest many datasets of legitimate website and their metadata namely … Available: https://commoncrawl.or g/2021/10/september-2021-crawlarchive-now-available/ [37] AC Bahnsen, EC …","url":["https://ieeexplore.ieee.org/abstract/document/10433842/"]} {"year":"2024","title":"A Hybrid Deep BiLSTM-CNN for Hate Speech Detection in Multi-social media","authors":["A Kumar, S Kumar, K Passi, A Mahanti - ACM Transactions on Asian and Low …, 2024"],"snippet":"Nowadays, ways of communication among people have changed due to advancements in information technology and the rise of online multi-social media. Many people express their feelings, ideas, and emotions on social media sites such …","url":["https://dl.acm.org/doi/pdf/10.1145/3657635"]} {"year":"2024","title":"A Japanese-Chinese Parallel Corpus Using Crowdsourcing for Web Mining","authors":["M Nagata, M Morishita, K Chousa, N Yasuda - arXiv preprint arXiv:2405.09017, 2024"],"snippet":"… In ParaCrawl, they determine which websites to crawl by analyzing the Common Crawl … In this study, we analyzed 12 sets of Common Crawl archives (104TB in total) published from … and bilingual website URLs obtained from Common Crawl …","url":["https://arxiv.org/pdf/2405.09017"]} {"year":"2024","title":"A Language Model Trained on Uruguayan Spanish News Text","authors":["JP Filevich, G Marco, S Castro, L Chiruzzo, A Rosá - LREC-COLING 2024, 2024"],"snippet":"This paper presents a language model trained from scratch exclusively on a brand new corpus consisting of about 6 GiB of Uruguayan newspaper text. We trained the model for 30 days on a single Nvidia P100 using the RoBERTa-base architecture …","url":["https://www.iris.unina.it/retrieve/684d559a-e9e4-4eb1-8a80-4a3d654fc7f2/book.v3%20LATEST.pdf#page=63"]} {"year":"2024","title":"A Large-Scale Exploration of $\\mu $-Transfer","authors":["L Lingle - arXiv preprint arXiv:2404.05728, 2024"],"snippet":"Large neural network models have become a mainstay of natural language processing and computer vision, yet their initialization and learning rates are set in a largely heuristic fashion, potentially varying from paper to paper and one model size …","url":["https://arxiv.org/html/2404.05728v1"]} {"year":"2024","title":"A Legal Framework for Natural Language Model Training in Portugal","authors":["R Almeida, E Amorim - Proceedings of the Workshop on Legal and Ethical …, 2024"],"snippet":"Recent advances in deep learning have promoted the advent of many computational systems capable of performing intelligent actions that, until then, were restricted to the human intellect. In the particular case of human languages, these …","url":["https://aclanthology.org/2024.legal-1.2.pdf"]} {"year":"2024","title":"A Legal Framework for Natural Language Processing Model Training in Portugal","authors":["R Almeida, E Amorim - arXiv preprint arXiv:2405.00536, 2024"],"snippet":"Recent advances in deep learning have promoted the advent of many computational systems capable of performing intelligent actions that, until then, were restricted to the human intellect. 
In the particular case of human languages, these …","url":["https://arxiv.org/pdf/2405.00536"]} {"year":"2024","title":"A Little Leak Will Sink a Great Ship: Survey of Transparency for Large Language Models from Start to Finish","authors":["M Kaneko, T Baldwin - arXiv preprint arXiv:2403.16139, 2024"],"snippet":"… The most common sources included in all LLMs are web page sources such as C4, CommonCrawl, and the Pile. Because they are collected from various web pages, there is a risk that they may contain personal information, copyrighted texts …","url":["https://arxiv.org/pdf/2403.16139"]} {"year":"2024","title":"A Longitudinal Study of Content Control Mechanisms","authors":["M Dinzinger, M Granitzer - Companion Proceedings of the ACM on Web …, 2024"],"snippet":"… Both resources, robots.txt and regular pages, were collected from the Common Crawl web archive 7 and processed through a customized … As Common Crawl is furhtermore not focused on particular web domains, but randomly samples the …","url":["https://dl.acm.org/doi/abs/10.1145/3589335.3651893"]} {"year":"2024","title":"A methodology for using players' chat content for dynamic difficulty adjustment in metaverse multiplayer games","authors":["MM Rezapour, A Fatemi, MA Nematbakhsh - Applied Soft Computing, 2024"],"snippet":"… RoBERTa, on the other hand, was trained on a much larger corpus of text that includes not only Wikipedia and BookCorpus but also Common Crawl, a dataset that contains billions of web pages. This larger and more diverse corpus of text allows …","url":["https://www.sciencedirect.com/science/article/pii/S1568494624002710"]} {"year":"2024","title":"A Multilingual Dataset for Investigating Stereotypes and Negative Attitudes Towards Migrant Groups in Large Language Models","authors":["D Sorato, CC Ventura, D Zavala-Rojas - … of the 16th International Conference on …, 2024"],"snippet":"… 2020), which is a cleaned version of the CommonCrawl corpus. … where stereotypes are presented in more subtle and/or strategic ways (when compared to social media/CommonCrawl texts) and the explicit discrimination of migrant groups …","url":["https://aclanthology.org/2024.propor-1.1.pdf"]} {"year":"2024","title":"A Multilingual Perspective on Probing Gender Bias","authors":["K Stańczak - arXiv preprint arXiv:2403.10699, 2024"],"snippet":"Gender bias represents a form of systematic negative treatment that targets individuals based on their gender. This discrimination can range from subtle sexist remarks and gendered stereotypes to outright hate speech. Prior research has …","url":["https://arxiv.org/pdf/2403.10699"]} {"year":"2024","title":"A New Massive Multilingual Dataset for High-Performance Language Technologies","authors":["O de Gibert, G Nail, N Arefyev, M Bañón… - arXiv preprint arXiv …, 2024"],"snippet":"We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls …","url":["https://arxiv.org/pdf/2403.14009"]} {"year":"2024","title":"A Permutaion Importance Based feature selection method and Deep Learning Model to Detect Phishing Websites","authors":["R Zaimi, M Hafidi, L Mahnane - 2024"],"snippet":"Phishing attacks pose a significant and escalating threat to cybersecurity in recent times. 
This deceptive scam aims to trick naive users, luring them into visiting harmful websites and sharing sensitive information, including credentials, credit card …","url":["https://www.researchsquare.com/article/rs-3943049/latest.pdf"]} {"year":"2024","title":"A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness","authors":["R Heersmink, B de Rooij, MJC Vázquez, M Colombo"],"snippet":"… We know that its dataset includes the Common Crawl, which is a publicly available corpus of Webpages, including billions of Webpages … whether sources in the Common Crawl or Wikipedia are prioritised. This is again a serious issue to do …","url":["https://philpapers.org/archive/HEEAPA.pdf"]} {"year":"2024","title":"A Review of Current Trends, Techniques, and Challenges in Large Language Models (LLMs)","authors":["R Patil, V Gudivada - 2024"],"snippet":"… For low-resource languages, XLM models trained on CommonCrawl-100 did better than the ones trained using Wikipedia. Another example is mT5 [32], which uses multilingual corpus mC4 to pretrain the model. When dealing with multilingual …","url":["https://www.preprints.org/manuscript/202402.0357/download/final_file"]} {"year":"2024","title":"A Review of Multi-Modal Large Language and Vision Models","authors":["K Carolan, L Fennelly, AF Smeaton - arXiv preprint arXiv:2404.01322, 2024"],"snippet":"… The Falcon LLM team created the RefinedWed dataset, which is a massive web dataset based on CommonCrawl. TII focused on scaling and improving the quality of web data by employing large-scale de-duplication and strict filtering, resulting in …","url":["https://arxiv.org/pdf/2404.01322"]} {"year":"2024","title":"A Robust Approach to E-Banking Phishing Detection using Ensemble Methods and LSTM","authors":["NVS Reddy, ARR Saai, TV Ramanujan, NRK Reddy… - … International Conference on …, 2024"],"snippet":"… range of datasets from reputable community platforms like VirusTotal, Common Crawl, and PhishTank. This extensive dataset, spanning … Integration of Common Crawl further enriched the dataset, ensuring a diverse set of websites for effective …","url":["https://ieeexplore.ieee.org/abstract/document/10533883/"]} {"year":"2024","title":"A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism","authors":["B Thompson, MP Dhaliwal, P Frisch, T Domhan… - arXiv preprint arXiv …, 2024"],"snippet":"… 2021), which is in turn based on Common CrawlCommon Crawl is a long running web-scraping project which maintains a free, open source repository of web-scraped data. ccMatrix is created by embedding Common Crawl sentences into a …","url":["https://arxiv.org/pdf/2401.05749"]} {"year":"2024","title":"A Short Commentary on Trinh & Le (2018)","authors":["WS SABA"],"snippet":"… corpus (trained on the language model LM-1-Billion, CommonCrawl, and SQuAD), the probabilities of s1 and s2 appearing in a large corpus. The substitution that turns out to be more probable, is considered to be the more probable referent of “it”. There …","url":["https://www.academia.edu/download/108648524/1810.00521.pdf"]} {"year":"2024","title":"A Survey of AI-generated Text Forensic Systems: Detection, Attribution, and Characterization","authors":["T Kumarage, G Agrawal, P Sheth, R Moraffah… - arXiv preprint arXiv …, 2024"],"snippet":"We have witnessed lately a rapid proliferation of advanced Large Language Models (LLMs) capable of generating high-quality text. 
While these LLMs have revolutionized text generation across various domains, they also pose significant …","url":["https://arxiv.org/pdf/2403.01152"]} {"year":"2024","title":"A Survey of Multimodal Large Language Model from A Data-centric Perspective","authors":["T Bai, H Liang, B Wan, L Yang, B Li, Y Wang, B Cui… - arXiv preprint arXiv …, 2024"],"snippet":"… CommonCrawl project serves as the most commonly … , CommonCrawl’s vast archive of web pages, which includes numerous image-text pairs, has also become a vital resource for constructing multimodal pre-training datasets such as LAION-5B [236] …","url":["https://arxiv.org/pdf/2405.16640"]} {"year":"2024","title":"A Survey of Neural Machine Translation based on Knowledge Distillation","authors":["MA Chang, T Yonghong, Z Xiaoli, SUN Kangkang - Journal of Frontiers of Computer Science …"],"snippet":"Abstract: Machine Translation (MT) is the process of using a computer to convert a language into another language with the same semantics. With the introduction of neural network, neural machine translation (NMT), as a powerful machine …","url":["http://fcst.ceaj.org/EN/article/downloadArticleFile.do?attachType=PDF&id=3451"]} {"year":"2024","title":"A Survey of Resource-efficient LLM and Multimodal Foundation Models","authors":["M Xu, W Yin, D Cai, R Yi, D Xu, Q Wang, B Wu, Y Zhao… - arXiv preprint arXiv …, 2024"],"snippet":"Large foundation models, including large language models (LLMs), vision transformers (ViTs), diffusion, and LLM-based multimodal models, are revolutionizing the entire machine learning lifecycle, from training to deployment …","url":["https://arxiv.org/pdf/2401.08092"]} {"year":"2024","title":"A Survey of Web Content Control for Generative AI","authors":["M Dinzinger, F Heß, M Granitzer - arXiv preprint arXiv:2404.02309, 2024"],"snippet":"… The documents were sourced from the web archive of Common Crawl,16 offering a broad and randomly selected cross-section of the internet. These files were all collected in November and December 2023. Further details on the experimental …","url":["https://arxiv.org/pdf/2404.02309"]} {"year":"2024","title":"A Survey on Data Selection for Language Models","authors":["A Albalak, Y Elazar, SM Xie, S Longpre, N Lambert… - arXiv preprint arXiv …, 2024"],"snippet":"… They train a classifier using the “high-quality” reference corpora as the positive class and unfiltered Common Crawl documents as the negative class. This classifier then defines the utility metric for selecting data that follows the desired distribution …","url":["https://arxiv.org/pdf/2402.16827"]} {"year":"2024","title":"A Survey on Large Language Models with Multilingualism: Recent Advances and New Frontiers","authors":["K Huang, F Mo, H Li, Y Li, Y Zhang, W Yi, Y Mao, J Liu… - arXiv preprint arXiv …, 2024"],"snippet":"The rapid development of Large Language Models (LLMs) demonstrates remarkable multilingual capabilities in natural language processing, attracting global attention in both academia and industry. To mitigate potential discrimination …","url":["https://arxiv.org/pdf/2405.10936"]} {"year":"2024","title":"A Survey on Multilingual Large Language Models: Corpora, Alignment, and Bias","authors":["Y Xu, L Hu, J Zhao, Z Qiu, Y Ye, H Gu - arXiv preprint arXiv:2404.00929, 2024"],"snippet":"… A significant portion of these training data originates from multilingual repositories like Common Crawl, WikiPedia and Web documents, encompassing a broad range of languages. 
These multilingual repositories are crucial for enhancing the cross-lingual …","url":["https://arxiv.org/pdf/2404.00929"]} {"year":"2024","title":"A survey on recent advances in named entity recognition","authors":["I Keraghel, S Morbieu, M Nadif - arXiv preprint arXiv:2401.10825, 2024"],"snippet":"Named Entity Recognition seeks to extract substrings within a text that name real-world objects and to determine their type (for example, whether they refer to persons or organizations). In this survey, we first present an overview of recent popular …","url":["https://arxiv.org/pdf/2401.10825"]} {"year":"2024","title":"A systematic review of multidimensional relevance estimation in information retrieval","authors":["G Peikos, G Pasi - Wiley Interdisciplinary Reviews: Data Mining and …"],"snippet":"In information retrieval, relevance is perceived as a multidimensional and dynamic concept influenced by user, task, and domain factors. Relying on this perspective, researchers have introduced multidimensional relevance models addressing …","url":["https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/widm.1541"]} {"year":"2024","title":"A Systematic Study of the Role of Data Quality and Alignment for Fine-tuning LLMs for Enhanced Autoformalization","authors":["K Chawla, A Sahai, M DePavia, B Miranda - The Second Tiny Papers Track at ICLR 2024"],"snippet":"This study explores the role of data quality, particularly alignment, in fine-tuning Large Language Models (LLMs) for the task of autoformalization. Contrary to the conventional emphasis on dataset size, our research highlights the importance of …","url":["https://openreview.net/pdf?id=9LkPbnRwXu"]} {"year":"2024","title":"A Trustworthy Automated Short-Answer Scoring System Using a New Dataset and Hybrid Transfer Learning Method","authors":["M Maslim, HC Wang, CD Putra, YD Prabowo - 2024"],"snippet":"To measure the quality of student learning, teachers must conduct evaluations. One of the most efficient modes of evaluation is the short answer question. However, there can be inconsistencies in teacher-performed manual evaluations due to an …","url":["https://reunir.unir.net/bitstream/handle/123456789/16203/A%20Trustworthy%20Automated%20Short-Answer%20Scoring%20System%20Using%20a%20New%20Dataset%20and%20Hybrid%20Transfer%20Learning%20Method.pdf?sequence=1"]} {"year":"2024","title":"AboutMe: Using Self-Descriptions in Webpages to Document the Effects of English Pretraining Data Filters","authors":["L Lucy, S Gururangan, L Soldaini, E Strubell… - arXiv preprint arXiv …, 2024"],"snippet":"… From this Common Crawl data, we identify website hostnames that include an ABOUT page, which we define as URL paths containing about, aboutme, about-us, or bio (Appendix A). We then pair each ABOUT page with a random page on the …","url":["https://arxiv.org/pdf/2401.06408"]} {"year":"2024","title":"Abusive Language Detection in Khasi Social Media Comments","authors":["A Baruah, L Wahlang, F Jyrwa, F Shadap, F Barbhuiya… - ACM Transactions on Asian …, 2024"],"snippet":"This paper describes the work performed for automated abusive language detection in the Khasi language, a low-resource language spoken primarily in the state of Meghalaya, India. 
A dataset named Khasi Abusive Language Dataset (KALD) was …","url":["https://dl.acm.org/doi/pdf/10.1145/3664285"]} {"year":"2024","title":"Accelerating Spiking Neural Networks with Parallelizable Leaky Integrate-and-Fire Neurons","authors":["SYA Yarga, SUN Wood - 2024"],"snippet":"… To achieve this, these models undergo training on massive datasets, such as the Common Crawl corpus1, which contains petabytes of textual data. Training on such extensive datasets necessitates the use of fast machine learning models like the …","url":["https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.170905886.62702188"]} {"year":"2024","title":"ACCEPTANCE OF GENERATIVE AI IN KNOWLEDGE WORK","authors":["K Koponen - 2023"],"snippet":"… The GPT-3 was trained with Corpus, a large and structured collection of texts or language data, called Common Crawl including nearly a trillion words. (Brown et al., 2020) In addition, the training set was also expanded with known high-quality …","url":["https://trepo.tuni.fi/bitstream/handle/10024/154287/KoponenKati.pdf?sequence=2"]} {"year":"2024","title":"ACCSAMS: Automatic Conversion of Exam Documents to Accessible Learning Material for Blind and Visually Impaired","authors":["D Wilkening, O Moured, T Schwarz, K Muller… - arXiv preprint arXiv …, 2024"],"snippet":"… We collected exam materials from the Common Crawl project4, a public archive of web data. We analyzed eight datasets, released between mid 2022 and the end of 2023, extracting PDFs, which include “exam” or “klausur” as a keyword in the URL …","url":["https://arxiv.org/pdf/2405.19124"]} {"year":"2024","title":"Accuracy of ChatGPT in Neurolocalization","authors":["WF Dabbas, YM Odeibat, M Alhazaimeh, MY Hiasat… - Cureus, 2024"],"snippet":"… The datasets used for pre-training included Common Crawl dataset, an expanded version of the WebText dataset, two internet-based books corpora (Books1 and Books2), and English-language Wikipedia [13]. …","url":["https://www.cureus.com/articles/249830-accuracy-of-chatgpt-in-neurolocalization.pdf"]} {"year":"2024","title":"Adapting LLMs to Downstream Applications","authors":["A Kucharavy - Large, 2024"],"snippet":"… The model was fed with text that generally precedes private information in a dataset used to train GPT-2—specifically the “Common Crawl” dataset to achieve a better recall. 2 With this approach, authors were not only able to extract extensive …","url":["https://link.springer.com/content/pdf/10.1007/978-3-031-54827-7.pdf#page=36"]} {"year":"2024","title":"Adaptive Ensembles of Fine-Tuned Transformers for LLM-Generated Text Detection","authors":["Z Lai, X Zhang, S Chen - arXiv preprint arXiv:2403.13335, 2024"],"snippet":"… 5) XLMRoberta: XLMRoberta is a large-scale multilingual transformer-based LM pretrained on a diverse set of one hundred languages using extensive CommonCrawl data [68]. We use the “xlm roberta base multi” as pretrained backbone weight. …","url":["https://arxiv.org/pdf/2403.13335"]} {"year":"2024","title":"Addressing Both Statistical and Causal Gender Fairness in NLP Models","authors":["H Chen, Y Ji, D Evans - arXiv preprint arXiv:2404.00463, 2024"],"snippet":"… 2019) comprising nearly 400,000 online biographies of 28 unique occupations scraped from the CommonCrawl. The task is to predict the occupation given in the biography with the occupation title removed. 
Each biography includes the name and …","url":["https://arxiv.org/pdf/2404.00463"]} {"year":"2024","title":"Addressing the Challenge of Online Health Misinformation: Detection, Retrieval, and Explainability","authors":["R Upadhyay - 2024"],"snippet":"Nell’odierna epoca digitale, le piattaforme online costituiscono uno dei mezzi principali utilizzati dalle persone per cercare informazioni relative alla propria salute. Nonostante il web sia un vasto repository di conoscenze in tale ambito, è affetto dal …","url":["https://boa.unimib.it/bitstream/10281/465160/2/phd_unimib_865291.pdf"]} {"year":"2024","title":"Adjusting Interpretable Dimensions in Embedding Space with Human Judgments","authors":["K Erk, M Apidianaki - arXiv preprint arXiv:2404.02619, 2024"],"snippet":"… The GLoVe embeddings used were trained on Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors), downloaded from https: //nlp.stanford.edu/projects/glove/. In order to generate embeddings for contextualized instances of words in our …","url":["https://arxiv.org/html/2404.02619v1"]} {"year":"2024","title":"Adult Content Detection on Indonesian Tweets by Fine-tuning Transformer-based Models","authors":["AF Hidayatullah, RA Apong, DTC Lai, A Qazi - 2023 6th International Conference on …, 2023"],"snippet":"The prevalence of adult content on social media has harmful effects on the moral values of young individuals. Therefore, effectively filtering inappropriate content on social media like Twitter is essential. Researchers have utilized machine learning …","url":["https://ieeexplore.ieee.org/abstract/document/10367283/"]} {"year":"2024","title":"Advancing Fake News Detection: Hybrid Deep Learning with FastText and Explainable AI","authors":["E Hashmi, SY Yayilgan, MM Yamin, S Ali, M Abomhara - IEEE Access, 2024"],"snippet":"… FastText, a word representation tool developed by Facebook’s research division, provides both unsupervised and supervised modes, featuring an extensive lexicon of 2 million words sourced from Common Crawl. Each word is represented in a 300-dimensional …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10477989.pdf"]} {"year":"2024","title":"Advancing Internet Viewpoint Diversity: A Novel Algorithm and a Corpus Creation Tool","authors":["J Harwell - 2023"],"snippet":"… To achieve this, we first develop a big data processing architecture for creating indexed corpora from the Common Crawl web archives. The … Utilizing this tool, we processed approximately 1.2 billion web pages from the Common Crawl dataset …","url":["https://scholarship.claremont.edu/cgi/viewcontent.cgi?article=1768&context=cgu_etd"]} {"year":"2024","title":"Advancing language models through domain knowledge integration: a comprehensive approach to training, evaluation, and optimization of social scientific neural …","authors":["F Stöhr - Journal of Computational Social Science, 2024"],"snippet":"This article proposes a comprehensive strategy for training, evaluating, and optimizing domain-specific word2vec-based word embeddings, using social science literature as an example. 
Our primary objectives are:(1) to train the embeddings …","url":["https://link.springer.com/article/10.1007/s42001-024-00286-3"]} {"year":"2024","title":"Advancing Large Multi-modal Models with Explicit Chain-of-Reasoning and Visual Question Generation","authors":["K Uehara, N Goswami, H Wang, T Baba, K Tanaka… - arXiv preprint arXiv …, 2024"],"snippet":"The increasing demand for intelligent systems capable of interpreting and reasoning about visual content requires the development of Large Multi-Modal Models (LMMs) that are not only accurate but also have explicit reasoning capabilities. This paper …","url":["https://arxiv.org/pdf/2401.10005"]} {"year":"2024","title":"Advancing medical imaging with language models: featuring a spotlight on ChatGPT","authors":["M Hu, JY Qian, S Pan, Y Li, RLJ Qiu, X Yang - Physics in Medicine and Biology, 2024"],"snippet":"This review paper aims to serve as a comprehensive guide and instructional resource for researchers seeking to effectively implement language models in medical imaging research. First, we presented the fundamental principles and …","url":["https://iopscience.iop.org/article/10.1088/1361-6560/ad387d/pdf"]} {"year":"2024","title":"Advancing Real-time Pandemic Forecasting Using Large Language Models: A COVID-19 Case Study","authors":["H Du, J Zhao, Y Zhao, S Xu, X Lin, Y Chen, LM Gardner - arXiv preprint arXiv …, 2024","HF Yang, H Du, J Zhao, Y Zhao, S Xu, X Lin, Y Chen… - 2024"],"snippet":"Forecasting the short-term spread of an ongoing disease outbreak is a formidable challenge due to the complexity of contributing factors, some of which can be characterized through interlinked, multi-modality variables such as epidemiological …","url":["https://arxiv.org/pdf/2404.06962","https://www.researchsquare.com/article/rs-4244182/latest.pdf"]} {"year":"2024","title":"Afrikaans Literary Genre Recognition using Embeddings and Pre-Trained Multilingual Language Models","authors":["E Kotzé, BA Senekal - 2024 International Conference on Artificial Intelligence …, 2024"],"snippet":"… XLM-R (or XLM-RoBERTa) is a multilingual version of RoBERTa with 279M parameters and was trained on a 2.5TB of CommonCrawl data containing 100 languages. Afrikaans is included in all three of these multilingual models. …","url":["https://ieeexplore.ieee.org/abstract/document/10467838/"]} {"year":"2024","title":"Aggression and Misogyny in Hindi and Bangla: A Study of YouTube Comments","authors":["R Kumar, B Lahiri - Studies in Pragmatics, 2024"],"snippet":"… The pre-trained models are typically trained on web-scale data, ie text collected from all over the web such as the whole of Wikipedia and a common crawl of the websites currently on the internet amounting to billions (or trillions) of tokens and …","url":["https://brill.com/downloadpdf/display/title/69958.pdf#page=130"]} {"year":"2024","title":"AI unveiled personalities: Profiling optimistic and pessimistic attitudes in Hindi dataset using transformer‐based models","authors":["D Jain, A Kumar - Expert Systems"],"snippet":"Both optimism and pessimism are intricately intertwined with an individual's inherent personality traits and people of all personality types can exhibit a wide range of attitudes and behaviours, including levels of optimism and pessimism. This paper …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1111/exsy.13572"]} {"year":"2024","title":"AI's Secret Weapon in Education. ChatGPT–The Future of Personalized Learning","authors":["A Popescu - Bulletin of the Transilvania University of Brasov. 
Series …, 2023"],"snippet":"This article delves into the fascinating intersection of AI and education, with a focus on ChatGPT, a conversational AI developed by OpenAI. Noted for its human-like responses, ChatGPT is positioned as a game-changer in personalized learning. The …","url":["https://webbut.unitbv.ro/index.php/Series_V/article/download/6816/5216"]} {"year":"2024","title":"AI-Powered Code Review Assistant for Streamlining Pull Request Merging","authors":["C Adapa, SS Avulamanda, ARK Anjana, A Victor - 2024 IEEE International …, 2024"],"snippet":"… The model's reliance on Refined Web, primarily sourced from Common Crawl, underscores its commitment to exceptional quality through advanced data augmentation techniques, notably largescale deduplication and rigorous filtering. Unlike conventional …","url":["https://ieeexplore.ieee.org/abstract/document/10503540/"]} {"year":"2024","title":"AI-Proof Your Assignments","authors":["S Vital - 2024"],"snippet":"ChatGPT and other AI tools are powerful, free, and seemingly everywhere. Faculty love it, fear it, and everything in between. Students also love it, fear it, and everything in between; All the while, they are using it. In this session, we’ll cover a bit about how …","url":["https://digitalcommons.stmarys-ca.edu/cgi/viewcontent.cgi?article=1159&context=staff-works"]} {"year":"2024","title":"AI-Replicas as Ethical Practice: Introducing an Alternative to Traditional Anonymization Techniques in Image-Based Research","authors":["T Kamelski, F Olivos - 2024"],"snippet":"This paper introduces the use of AI-replicas as an alternative to traditional anonymization methods in image-based qualitative research. It emphasizes the ethical and practical dilemmas posed by current anonymization methods, such as …","url":["https://osf.io/8frst/download"]} {"year":"2024","title":"Algorithmic amplification of biases on Google Search","authors":["H Habib, R Stoldt, A High, B Ekdale, A Peterson… - arXiv preprint arXiv …, 2024"],"snippet":"The evolution of information-seeking processes, driven by search engines like Google, has transformed the access to information people have. This paper investigates how individuals' preexisting attitudes influence the modern information-seeking …","url":["https://arxiv.org/pdf/2401.09044"]} {"year":"2024","title":"Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs","authors":["AM Kassem, O Mahmoud, N Mireshghallah, H Kim… - arXiv preprint arXiv …, 2024"],"snippet":"In this paper, we introduce a black-box prompt optimization method that uses an attacker LLM agent to uncover higher levels of memorization in a victim agent, compared to what is revealed by prompting the target model with the training data …","url":["https://arxiv.org/pdf/2403.04801"]} {"year":"2024","title":"Amharic LLaMA and LLaVA: Multimodal LLMs for Low Resource Languages","authors":["M Andersland - arXiv preprint arXiv:2403.06354, 2024"],"snippet":"… Less than 0.1% of CommonCrawl2 is Amharic, and even when combining open source datasets without deduplication, we find that less than 500 million tokens of Amharic are available. In addition, the content of this data tends to be biased …","url":["https://arxiv.org/pdf/2403.06354"]} {"year":"2024","title":"An Ambiguous Technique for Nonvisual Text Entry","authors":["DC Gaines - 2023"],"snippet":"Text entry is a common daily task for many people, but it can be a challenge for people with visual impairments when using virtual touchscreen keyboards that lack physical key boundaries. 
In this thesis, we investigate using a small number of …","url":["https://digitalcommons.mtu.edu/cgi/viewcontent.cgi?article=2800&context=etdr"]} {"year":"2024","title":"An Analysis of Langauge Frequency and Error Correction for Esperanto","authors":["J Liang - arXiv preprint arXiv:2402.09696, 2024"],"snippet":"… Goyal builds Commoncrawl Corpus in 100 languages and then uses a language identification model to select Esperanto contexts[8], which … OSCAR [11] provides a collection of unannotated raw data and devises a method to improve the quality of …","url":["https://arxiv.org/pdf/2402.09696"]} {"year":"2024","title":"An Assessment on Comprehending Mental Health through Large Language Models","authors":["M Arcan, PD Niland, F Delahunty - arXiv preprint arXiv:2401.04592, 2024"],"snippet":"… The training data for GPT-3 is primarily sourced from a filtered version of Common Crawl, contributing to 60% of the weighted pre-training dataset, comprising 410 billion bytepair-encoded tokens. Other data sources include 19 billion tokens from …","url":["https://arxiv.org/pdf/2401.04592"]} {"year":"2024","title":"An Extensive Survey on Investigation Methodologies for Text Summarization","authors":["A Saklecha, P Uplavdiya, MPS Chawla - Indian Journal of Signal Processing (IJSP), 2023"],"snippet":"Natural language processing (NLP) is a fastexpanding field, and text summarization has recently gained a lot of research interest. The necessity for automatic summarizing approaches to effectively digest massive amounts of textual data has …","url":["https://www.ijsp.latticescipub.com/wp-content/uploads/papers/v3i4/D1016113423.pdf"]} {"year":"2024","title":"An Integrated Data Processing Framework for Pretraining Foundation Models","authors":["Y Sun, F Wang, Y Zhu, WX Zhao, J Mao - arXiv preprint arXiv:2402.16358, 2024"],"snippet":"… In the end-to-end evaluation, we train two GPT-2 models using CommonCrawl before and after processing, respectively. The model trained on the refined data demonstrates remarkable performance enhancement across downstream tasks of …","url":["https://arxiv.org/pdf/2402.16358"]} {"year":"2024","title":"An Introduction to Vision-Language Modeling","authors":["F Bordes, RY Pang, A Ajay, AC Li, A Bardes, S Petryk… - arXiv preprint arXiv …, 2024"],"snippet":"… Multiple curation steps are used to curate this dataset where English data is collected from common crawl and deduplicated followed by pre-processing HTML document where useful DOM nodes are identified and retained, then for each DOM …","url":["https://arxiv.org/pdf/2405.17247"]} {"year":"2024","title":"Analysing Slow Thinking Capabilities in Large Language Model Agent-Agent Dialogue","authors":["J Cornelje - 2024"],"snippet":"… LLMs can be given data from books, CommonCrawl, Reddit links and Wikipedia, where CommonCrawl consists of petabytes of web archives [113]. Having large datasets makes it more difficult to curate the data. 
Documentation of the data can …","url":["https://studenttheses.uu.nl/bitstream/handle/20.500.12932/46440/Master%20Thesis%20-%20Joel%20Cornelje.pdf?sequence=1"]} {"year":"2024","title":"Analysing The Impact of Sequence Composition on Language Model Pre-Training","authors":["Y Zhao, Y Qu, K Staniszewski, S Tworkowski, W Liu… - arXiv preprint arXiv …, 2024"],"snippet":"Most language model pre-training frameworks concatenate multiple documents into fixed-length sequences and use causal masking to compute the likelihood of each token given its context; this strategy is widely adopted due to its simplicity and …","url":["https://arxiv.org/pdf/2402.13991"]} {"year":"2024","title":"Analysing the landscape of Deep Fake Detection: A Survey","authors":["K Vyas, P Pareek, R Jayaswal, S Patil - … of Intelligent Systems and Applications in …, 2024"],"snippet":"… Grover utilizes a test set created by the free and open-source web crawler and archive Common Crawl [35]. As an example, a team of academics from Harvard and the MIT-IBM Watson laboratory developed the Giant language model test room, a …","url":["https://www.ijisae.org/index.php/IJISAE/article/download/4418/3078"]} {"year":"2024","title":"AnchorAL: Computationally Efficient Active Learning for Large and Imbalanced Datasets","authors":["P Lesci, A Vlachos - arXiv preprint arXiv:2404.05623, 2024"],"snippet":"Active learning for imbalanced classification tasks is challenging as the minority classes naturally occur rarely. Gathering a large pool of unlabelled data is thus essential to capture minority instances. Standard pool-based active learning is …","url":["https://arxiv.org/html/2404.05623v1"]} {"year":"2024","title":"and S. Abirami","authors":["RS Gulecha, SKNR Subramanian - Speech and Language Technologies for Low …"],"snippet":"In recent years, the Internet and social media have sparked a revolution in the way information is exchanged. The growth of social media and micro-blogging sites not only provides platforms for empowering freedom of expression and individual voices …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=Rs0DEQAAQBAJ&oi=fnd&pg=PA225&dq=commoncrawl&ots=l6SKt6PjDk&sig=xgAK9FCRDZ55-F9mtury-zU49yk"]} {"year":"2024","title":"Answer Agnostic Question Generation in Bangla Language","authors":["AR Fahad, N Al Nahian, MA Islam, RM Rahman - International Journal of Networked …, 2024"],"snippet":"… mT5 is a massively multilingual model pre-trained using fresh Common Crawl-based data that includes 101 languages [14]. Another model, BanglaT5, a Transformer model for the Bangla language, was trained using a 27.5 GB clean corpus of Bangla …","url":["https://link.springer.com/article/10.1007/s44227-023-00018-5"]} {"year":"2024","title":"AntiPhishStack: LSTM-based Stacked Generalization Model for Optimized Phishing URLs Detection","authors":["S Aslam, H Aslam, A Manzoor, C Hui, A Rasool - 2024"],"snippet":"The escalating reliance on revolutionary online web services has introduced heightened security risks, with persistent challenges posed by phishing despite extensive security measures. 
Traditional phishing systems, reliant on machine …","url":["https://www.preprints.org/manuscript/202401.1142/download/final_file"]} {"year":"2024","title":"AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling","authors":["J Zhan, J Dai, J Ye, Y Zhou, D Zhang, Z Liu, X Zhang… - arXiv preprint arXiv …, 2024"],"snippet":"We introduce AnyGPT, an any-to-any multimodal language model that utilizes discrete representations for the unified processing of various modalities, including speech, text, images, and music. AnyGPT can be trained stably without any …","url":["https://arxiv.org/pdf/2402.12226"]} {"year":"2024","title":"Application of an Improved Convolutional Neural Network Algorithm in Text Classification","authors":["J Peng, S Huo - Journal of Web Engineering, 2024"],"snippet":"This paper proposes a text classification model based on a combination of a convolutional neural network (CNN) and a support vector machine (SVM) using Amazon review polarity, TREC, and Kaggle as experimental data. By adding an …","url":["https://journals.riverpublishers.com/index.php/JWE/article/download/25003/19981"]} {"year":"2024","title":"Application of Open-source Large Language Model (LLM) for Simulation of a Vulnerable IoT System and Cybersecurity Best Practices Assistance","authors":["V Yosifova - 2024"],"snippet":"This paper explores the role of open-source large language models in IoT cybersecurity world. The threats of malicious activity on the Internet and the loss of private information are very real and lead to serious consequences. The purpose of …","url":["https://www.preprints.org/manuscript/202405.1169/download/final_file"]} {"year":"2024","title":"Applications of AI in Computer Vision and NLP","authors":["SB Rajasekaran - Journal of Artificial Intelligence & Cloud Computing …, 2023"],"snippet":"… In language AI, there are several commonly used datasets such as Common Crawl, Wikipedia, and OpenWebText. However, creating and maintaining these datasets can also be a challenging task, as it needs a big amount of work and it can …","url":["https://www.onlinescientificresearch.com/articles/applications-of-ai-in-computer-vision-and-nlp.pdf"]} {"year":"2024","title":"Applying German word vectors to assess flexibility performance in the associative fluency task.","authors":["M Camenzind, M Single, SM Gerber, T Nef, CL Bassetti… - Psychology of Aesthetics …, 2024"],"snippet":"… One model was trained on common crawl, a nonprofit organization crawling the web and making resulting data publicly available, and the other was trained on Wikipedia. 
The vectors were trained using the CBOW algorithm with a special focus …","url":["https://psycnet.apa.org/record/2024-67165-001"]} {"year":"2024","title":"Applying Named Entity Recognition and Graph Networks to Extract Common Interests from Thematic Subfora on Reddit","authors":["J Sawicki, M Ganzha, M Paprzycki, Y Watanobe - Applied Sciences, 2024"],"snippet":"… , accessed on 13 February 2024) (pre-trained BERT large models fine-tuned on the CoNLL-2003 dataset) and flair/ner-english-large (https://huggingface.co/flair/ner-english-large, accessed on 13 February 2024) (an XLM-RoBERTa [89] pre-trained on a cleaned …","url":["https://www.mdpi.com/2076-3417/14/5/1696"]} {"year":"2024","title":"Applying Transfer Learning to German Metaphor Prediction","authors":["M Berger, SM Reimann, NM Kiwitt - Proceedings of the 2024 Joint International …, 2024"],"snippet":"… -scores using CommonCrawl embeddings (F1 60%) and News Commentary embeddings (F1 58%) while CommonCrawl contains over 2 m… 4), we can see that the bilingual embeddings approach, especially using Europarl and CommonCrawl …","url":["https://aclanthology.org/2024.lrec-main.123.pdf"]} {"year":"2024","title":"Arabic sarcasm detection: An enhanced fine-tuned language model approach","authors":["MA Galal, AH Yousef, HH Zayed, W Medhat - Ain Shams Engineering Journal, 2024"],"snippet":"Sarcasm is a complex linguistic phenomenon involving humor, criticism, or phrases that convey the opposite meaning, mask true feelings, and play pivotal roles in various aspects of communication. Therefore, identifying sarcasm is essential for …","url":["https://www.sciencedirect.com/science/article/pii/S2090447924001114"]} {"year":"2024","title":"Arctic-Embed: Scalable, Efficient, and Accurate Text Embedding Models","authors":["L Merrick, D Xu, G Nuti, D Campos - arXiv preprint arXiv:2405.05374, 2024"],"snippet":"This report describes the training dataset creation and recipe behind the family of \\texttt{arctic-embed} text embedding models (a set of five models ranging from 22 to 334 million parameters with weights open-sourced under an Apache-2 license). At the time of …","url":["https://arxiv.org/pdf/2405.05374"]} {"year":"2024","title":"Are Human Conversations Special? A Large Language Model Perspective","authors":["T Jawale, C Animesh, S Vallath, K Talamadupula… - arXiv preprint arXiv …, 2024"],"snippet":"… We analyze the web data from CommonCrawl[21] dumps for human conversation data and the types of conversations in Table 1. We randomly sample a subset of the dump and deduplicate it so that it can be used to approximate the data distribution …","url":["https://arxiv.org/html/2403.05045v1"]} {"year":"2024","title":"Are LLMs Robust for Spoken Dialogues?","authors":["SM Mousavi, G Roccabruna, S Alghisi, M Rizzoli… - arXiv preprint arXiv …, 2024"],"snippet":"… The LLM finetuned for this task is T5 Small [21], (12 layers, 60M parameters), a transformer-based encoder-decoder model, pre-trained on the Common Crawl dataset with 750GB of web page text. The fine-tuning was performed following the …","url":["https://arxiv.org/pdf/2401.02297"]} {"year":"2024","title":"Artificial Intelligence for Anesthesiology Board-style Examination Questions: Role of Large Language Models","authors":["AA Khan, R Yunus, M Sohail, TA Rehman, S Saeed… - Journal of Cardiothoracic …, 2024"],"snippet":"Background : New artificial intelligence tools have been developed that have implications for medical usage. 
Large language models such as the widely used ChatGPT, developed by OpenAI, have not been explored in the context of …","url":["https://www.sciencedirect.com/science/article/pii/S1053077024000909"]} {"year":"2024","title":"Artificial Intelligence in the Media Economy: A Systematic Review of Use Cases, Application Potentials, and Challenges of Generative Language Models","authors":["T Prien, K Goldhammer - 2024"],"snippet":"Springer MRW: [AU:, IDX:] Page 1 Artificial Intelligence in the Media Economy: A Systematic Review of Use Cases, Application Potentials, and Challenges of Generative Language Models Tim Prien and Klaus Goldhammer Contents 1 …","url":["https://link.springer.com/content/pdf/10.1007/978-3-658-34048-3_89-1.pdf"]} {"year":"2024","title":"Artificial intelligence in the news: how AI retools, rationalizes, and reshapes journalism and the public arena","authors":["F Simon - 2024"],"snippet":"Despite growing interest, the effects of AI on the news industry and our information environment — the public arena — remain poorly understood. Insufficient attention has also been paid to the implications of the news industry’s dependence on …","url":["https://ora.ox.ac.uk/objects/uuid:aeb25013-1d17-40b2-b471-5bdca309db87/files/s4t64gp800"]} {"year":"2024","title":"Artificial Intelligence Supporting Independent Student Learning: An Evaluative Case Study of ChatGPT and Learning to Code","authors":["K Hartley, M Hayak, UH Ko - Education Sciences, 2024"],"snippet":"Artificial intelligence (AI) tools like ChatGPT demonstrate the potential to support personalized and adaptive learning experiences. This study explores how ChatGPT can facilitate self-regulated learning processes and learning computer programming …","url":["https://www.mdpi.com/2227-7102/14/2/120"]} {"year":"2024","title":"Artificial Intelligence, Large Language Models, and Design Thinking in TPC Classrooms","authors":["C Masters-Wheeler, J Bay, P Sullivan - Programmatic Perspectives, 2023"],"snippet":"… Common Crawl, for instance, is an open repository of web data that is accessible to anyone and can be used to train AI models. Large language models (LLMs) use the textual data from sources like Common Crawl and other freely accessible data …","url":["https://programmaticperspectives.cptsc.org/index.php/jpp/article/download/58/73"]} {"year":"2024","title":"Asking GPT for the Ordinary Meaning of Statutory Terms","authors":["C Engel, RH McAdams - MPI Collective Goods Discussion Paper, 2024"],"snippet":"We report on our test of the Large Language Model (LLM) ChatGPT (GPT) as a tool for generating evidence of the ordinary meaning of statutory terms. We explain why the most useful evidence for interpretation involves a distribution of replies rather …","url":["https://pure.mpg.de/rest/items/item_3566108/component/file_3566109/content"]} {"year":"2024","title":"Asking Questions Framework for Oral History Archives","authors":["J Švec, M Bulín, A Frémund, F Polák - European Conference on Information Retrieval, 2024"],"snippet":"… The resulting Wav2Vec 2.0 end-to-end model was combined with a lower-case four-gram language model estimated from CommonCrawl data. 
To decrease the model’s … The model was pre-trained on a self-supervised text-restoration task …","url":["https://link.springer.com/chapter/10.1007/978-3-031-56063-7_11"]} {"year":"2024","title":"Assessing and Optimizing Large Language Models on Spondyloarthritis Multi-Choice Question Answering: Protocol for Enhancement and Assessment","authors":["A Wang, Y Wu, X Ji, X Wang, J Hu, F Zhang, Z Zhang… - JMIR Research Protocols, 2024"],"snippet":"Background Spondyloarthritis (SpA), a chronic inflammatory disorder, predominantly impacts the sacroiliac joints and spine, significantly escalating the risk of disability. SpA’s complexity, as evidenced by its diverse clinical presentations and symptoms …","url":["https://www.researchprotocols.org/2024/1/e57001"]} {"year":"2024","title":"Assessing the research landscape and clinical utility of large language models: a scoping","authors":["YJ Park, A Pillai, J Deng, E Guo, M Gupta, M Paget… - 2024"],"snippet":"… Introduced in November 2022, ChatGPT was trained using a large corpora of unlabelled text, including CommonCrawl, WebText, and Wikipedia, as well as internet-based book corpora spanning multiple languages [3]. GPT, along with other …","url":["https://bmcmedinformdecismak.biomedcentral.com/counter/pdf/10.1186/s12911-024-02459-6.pdf"]} {"year":"2024","title":"Asymmetric Polysemous Reasoning for Image-Text Matching","authors":["H Zhang, M Yang - 2023 IEEE International Conference on Data Mining …, 2023"],"snippet":"Image-text matching has received growing interest since it bridges vision and language. The key challenge lies in how to learn correspondence between image and text. Upon observation, we find existing works suffer from two limitations. Firstly …","url":["https://ieeexplore.ieee.org/abstract/document/10411666/"]} {"year":"2024","title":"Asynchronous Local-SGD Training for Language Modeling","authors":["B Liu, R Chhaparia, A Douillard, S Kale, AA Rusu… - arXiv preprint arXiv …, 2024"],"snippet":"Local stochastic gradient descent (Local-SGD), also referred to as federated averaging, is an approach to distributed optimization where each device performs more than one SGD update per communication. This work presents an empirical …","url":["https://arxiv.org/pdf/2401.09135"]} {"year":"2024","title":"Atmospheric Limitations for High-frequency Ground-based Very Long Baseline Interferometry","authors":["DW Pesce, L Blackburn, R Chaves, SS Doeleman… - The Astrophysical Journal, 2024"],"snippet":"Very long baseline interferometry (VLBI) provides the highest-resolution images in astronomy. The sharpest resolution is nominally achieved at the highest frequencies, but as the observing frequency increases, so too does the atmospheric contribution …","url":["https://iopscience.iop.org/article/10.3847/1538-4357/ad3961"]} {"year":"2024","title":"AtomGPT: Atomistic Generative Pre-trained Transformer for Forward and Inverse Materials Design","authors":["K Choudhary - arXiv preprint arXiv:2405.03680, 2024"],"snippet":"Large language models (LLMs) such as generative pretrained transformers (GPTs) have shown potential for various commercial applications, but their applicability for materials design remains underexplored. 
In this article, we introduce AtomGPT, a …","url":["https://arxiv.org/pdf/2405.03680"]} {"year":"2024","title":"Auditing Large Language Models for Enhanced Text-Based Stereotype Detection and Probing-Based Bias Evaluation","authors":["Z Wu, S Bulathwela, M Perez-Ortiz, AS Koshiyama - arXiv preprint arXiv:2404.01768, 2024"],"snippet":"Recent advancements in Large Language Models (LLMs) have significantly increased their presence in human-facing Artificial Intelligence (AI) applications. However, LLMs could reproduce and even exacerbate stereotypical outputs from …","url":["https://arxiv.org/pdf/2404.01768"]} {"year":"2024","title":"Augmenting a Spanish clinical dataset for transformer-based linking of negations and their out-of-scope references","authors":["AJ Tamayo-Herrera, DA Burgos, A Gelbukh - Natural Language Processing"],"snippet":"A negated statement consists of three main components: the negation cue, the negation scope, and the negation reference. The negation cue is the indicator of negation, while the negation scope defines the extent of the negation. The negation …","url":["https://www.cambridge.org/core/services/aop-cambridge-core/content/view/AD9F18ADEE45FA58A4041E059E111C52/S2977042424000104a.pdf/augmenting_a_spanish_clinical_dataset_for_transformerbased_linking_of_negations_and_their_outofscope_references.pdf"]} {"year":"2024","title":"Augmenting Knowledge-Based Conversational Search Systems With Large Language Models","authors":["M Klettner"],"snippet":"Conversational interfaces are increasingly used by websites and apps, including office software as well as creativity tools. This trend reflects a shift towards natural language in human-computer interaction, which is also facilitated by utilizing Large …","url":["https://wwwmatthes.in.tum.de/file/kuo9al9qqe3h/Sebis-Public-Website/Student-Theses-Guided-Research/Current-Theses-Guided-Researches/Master-s-Thesis-Manuel-Klettner/Klettner_Manuel_MastersThesis.pdf"]} {"year":"2024","title":"AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts","authors":["Y Zhang, Y Luo, Y Yuan, ACC Yao - arXiv preprint arXiv:2402.07625, 2024"],"snippet":"To improve language models' proficiency in mathematical reasoning via continual pretraining, we introduce a novel strategy that leverages base language models for autonomous data selection. Departing from conventional supervised fine-tuning or …","url":["https://arxiv.org/pdf/2402.07625"]} {"year":"2024","title":"Automatic Coding of Contingency in Child-Caregiver Conversations","authors":["A Agrawal, M Nikolaus, B Favre, A Fourtassi - 2024"],"snippet":"One of the most important communicative skills children have to learn is to engage in meaningful conversations with people around them. At the heart of this learning lies the mastery of contingency, ie, the ability to contribute to an ongoing exchange …","url":["https://files.osf.io/v1/resources/hwnms/providers/osfstorage/65fc3931d09d170113eee760?action=download&direct&version=1"]} {"year":"2024","title":"Automatic Data Curation for Self-Supervised Learning: A Clustering-Based Approach","authors":["HV Vo, V Khalidov, T Darcet, T Moutakanni… - arXiv preprint arXiv …, 2024"],"snippet":"… By filtering Common Crawl data, the authors managed to obtain large datasets to train reliable … To this end, we apply our curation pipeline to two text pools based on Common Crawl. 
The first … Doing so keeps the original data distribution closer to …","url":["https://arxiv.org/pdf/2405.15613"]} {"year":"2024","title":"Automatic detection of hate speech in code-mixed Indian languages in twitter social media interaction using DConvBLSTM-MuRIL ensemble method","authors":["P Kakati, D Dandotiya - Social Network Analysis and Mining, 2024"],"snippet":"… During the pre-training process, 2.5 terabytes of Common Crawl data in 100 languages are used. This model gives a much better performance than multilingual BERT when code-switching, especially for languages with little resources, because …","url":["https://link.springer.com/article/10.1007/s13278-024-01264-3"]} {"year":"2024","title":"Automatic Extraction of User-Centric Aspects for Tourist Spot Recommender Systems Using Reviews in Japanese","authors":["F Uwano, R Kobayashi, M Ohta - International Conference on Human-Computer …, 2024"],"snippet":"In tourist reviews, various pieces of information are described to confirm the characteristics of tourist spots. This paper proposes a method to extract tourist spot aspects using Japanese tourist reviews automatically. The aspects of tourist spots …","url":["https://link.springer.com/chapter/10.1007/978-3-031-60125-5_17"]} {"year":"2024","title":"Automating Code Adaptation for MLOps--A Benchmarking Study on LLMs","authors":["H Patel, BA Ramanan, MA Khan, T Williams… - arXiv preprint arXiv …, 2024"],"snippet":"This paper explores the possibilities of the current generation of Large Language Models for incorporating Machine Learning Operations (MLOps) functionalities into ML training code bases. We evaluate the performance of OpenAI (gpt-3.5-turbo) and …","url":["https://arxiv.org/pdf/2405.06835"]} {"year":"2024","title":"Automating String Encoding in AutoML","authors":["CH Lam"],"snippet":"Automated Machine Learning (AutoML) systems often struggle with data sets with string data. Due to their unstructured nature and dependence on domain knowledge, string data are a frequent cause of errors and coming up with generalized ways of …","url":["https://pure.tue.nl/ws/portalfiles/portal/319777301/Lam_C.pdf"]} {"year":"2024","title":"Autonomous Data Selection with Language Models for Mathematical Texts","authors":["Y Zhang, Y Luo, Y Yuan, AC Yao - ICLR 2024 Workshop on Navigating and …, 2024"],"snippet":"To improve language models’ proficiency in mathematical reasoning via continual pretraining, we introduce a novel strategy that leverages base language models for autonomous data selection. Departing from conventional supervised fine-tuning or …","url":["https://openreview.net/pdf?id=bBF077z8LF"]} {"year":"2024","title":"BadRAG: Identifying Vulnerabilities in Retrieval Augmented Generation of Large Language Models","authors":["J Xue, M Zheng, Y Hu, F Liu, X Chen, Q Lou - arXiv preprint arXiv:2406.00083, 2024"],"snippet":"Large Language Models (LLMs) are constrained by outdated information and a tendency to generate incorrect data, commonly referred to as \"hallucinations.\" Retrieval-Augmented Generation (RAG) addresses these limitations by combining …","url":["https://arxiv.org/pdf/2406.00083"]} {"year":"2024","title":"Bailong: Bilingual Transfer Learning based on QLoRA and Zip-tie Embedding","authors":["LC Chen, ZR Li - arXiv preprint arXiv:2404.00862, 2024"],"snippet":"… Owing to computational constraints, we further conducted additional sampling of the OSCAR dataset and the Common Crawl dataset to mitigate the issue related to dataset size. 
In the long run, there are around 13 billion tokens left in the corpus. The …","url":["https://arxiv.org/html/2404.00862v1"]} {"year":"2024","title":"Balanced Data Sampling for Language Model Training with Clustering","authors":["Y Shao, L Li, Z Fei, H Yan, D Lin, X Qiu - arXiv preprint arXiv:2402.14526, 2024"],"snippet":"Data plays a fundamental role in the training of Large Language Models (LLMs). While attention has been paid to the collection and composition of datasets, determining the data sampling strategy in training remains an open question. Most …","url":["https://arxiv.org/pdf/2402.14526"]} {"year":"2024","title":"Balancing Multilingual Model Training Data Using Exponential Smoothing","authors":["D Kundu - 2023"],"snippet":"… The XLM-R [9] model is pre-trained over a larger common crawl corpus similarly to MBERT, using language sampling according to exponentially-smoothed language probabilities. Again, performance on low-resources tasks seems to benefit …","url":["https://ijsret.com/wp-content/uploads/2023/11/IJSRET_V9_issue6_417.pdf"]} {"year":"2024","title":"Bangla Emergency Post Classification on Social Media using Transformer Based BERT Models","authors":["AA Nabil, D Das, MS Salim, S Arifeen, HMA Fattah - 2023 6th International …, 2023"],"snippet":"… It is a large multi-lingual language model trained on 2.5TB of filtered Common Crawl data. XLMRoBERTa is a multilingual model trained in 100 different languages. Unlike some XLM multilingual models, it does not require lang tensors to understand …","url":["https://ieeexplore.ieee.org/abstract/document/10427900/"]} {"year":"2024","title":"Benchmarking PathCLIP for Pathology Image Analysis","authors":["S Zheng, X Cui, Y Sun, J Li, H Li, Y Zhang, P Chen… - arXiv preprint arXiv …, 2024"],"snippet":"… All these versions are trained using masked language modeling and next-word prediction techniques with training data sourced from publicly available repositories like Wikipedia, Common Crawl. The results of LLaMA demonstrate that cuttingedge …","url":["https://arxiv.org/pdf/2401.02651"]} {"year":"2024","title":"Benchmarking zero-shot stance detection with FlanT5-XXL: Insights from training data, prompting, and decoding strategies into its near-SoTA performance","authors":["R Aiyappa, S Senthilmani, J An, H Kwak, YY Ahn - arXiv preprint arXiv:2403.00236, 2024"],"snippet":"We investigate the performance of LLM-based zero-shot stance detection on tweets. Using FlanT5-XXL, an instruction-tuned open-source LLM, with the SemEval 2016 Tasks 6A, 6B, and P-Stance datasets, we study the performance and its variations …","url":["https://arxiv.org/pdf/2403.00236"]} {"year":"2024","title":"BengaliLCP: A Dataset for Lexical Complexity Prediction in the Bengali Texts","authors":["N Ayman, MA Hossain, A Aziz, RU Faruqui, AN Chy - Proceedings of the 2024 Joint …, 2024"],"snippet":"Encountering intricate or ambiguous terms within a sentence produces distress for the reader during comprehension. Lexical Complexity Prediction (LCP) deals with predicting the complexity score of a word or a phrase considering its context. This …","url":["https://aclanthology.org/2024.lrec-main.200.pdf"]} {"year":"2024","title":"BERT, RoBERTa or DeBERTa? 
Comparing Performance Across Transformer Models in Political Science Text","authors":["J Carreras Timoneda, S Vallejo Vera"],"snippet":"Transformer models such as BERT, RoBERTa, and DeBERTa have revolutionized the field of Natural Language Processing in recent years with substantial improvements in the contextual understanding of text. While political scientists have …","url":["https://www.journals.uchicago.edu/doi/pdf/10.1086/730737"]} {"year":"2024","title":"BERT, RoBERTa or DeBERTa? Comparing Performance Across Transformers Models in Political Science Text","authors":["JC Timoneda, SV Vera"],"snippet":"Transformer models such as BERT, RoBERTa, and DeBERTa have revolutionized the field of Natural Language Processing in recent years with substantial improvements in the contextual understanding of text. While political scientists have …","url":["https://svallejovera.github.io/files/bert_roberta_jop.pdf"]} {"year":"2024","title":"BERTicelli at HaSpeeDe 3: Fine-tuning and Cross-validating Large Language Models for Hate Speech Detection Leonardo Grotti¹², Patrick Quick¹ ¹Universiteit …","authors":["L Grotti¹ - EVALITA Proceedings of the Eighth Evaluation …, 2024"],"snippet":"The present paper describes the results from the experiments carried out for the HaSpeeDe 3 shared task, an Italian-language Hate Speech (HS) detection task, at EVALITA 2023. Two BERT-based language models were selected: UmBERTO (cased) …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=ZLfuEAAAQBAJ&oi=fnd&pg=PA188&dq=commoncrawl&ots=BBnDdqOBAj&sig=H6DFO9BmsqkOJQHbUMkZxiadwmA"]} {"year":"2024","title":"BERTs are Generative In-Context Learners","authors":["D Samuel - arXiv preprint arXiv:2406.04823, 2024"],"snippet":"This paper explores the in-context learning capabilities of masked language models, challenging the common view that this ability does not 'emerge' in them. We present an embarrassingly simple inference technique that enables DeBERTa to operate as …","url":["https://arxiv.org/pdf/2406.04823"]} {"year":"2024","title":"Beyond Lexical Boundaries: LLM-Generated Text Detection for Romanian Digital Libraries","authors":["M Nitu, M Dascalu - Future Internet, 2024"],"snippet":"Machine-generated content reshapes the landscape of digital information; hence, ensuring the authenticity of texts within digital libraries has become a paramount concern. This work introduces a corpus of approximately 60 k Romanian documents …","url":["https://www.mdpi.com/1999-5903/16/2/41"]} {"year":"2024","title":"Beyond Metrics: Evaluating LLMs' Effectiveness in Culturally Nuanced, Low-Resource Real-World Scenarios","authors":["M Ochieng, V Gumma, S Sitaram, J Wang, K Ronen… - arXiv preprint arXiv …, 2024"],"snippet":"The deployment of Large Language Models (LLMs) in real-world applications presents both opportunities and challenges, particularly in multilingual and code-mixed communication settings. 
This research evaluates the performance of seven leading …","url":["https://arxiv.org/pdf/2406.00343"]} {"year":"2024","title":"Beyond Scale: The Diversity Coefficient as a Data Quality Metric for Variability in Natural Language Data","authors":["A Lee, B Miranda, S Sundar, A Casasola, S Koyeyo"],"snippet":"… Since many datasets are publicly available (eg Common Crawl, Wikipedia), data used to train new models may be curated from such … We evaluate on Pile-CC (Pile Common Crawl) and OpenWebText2 since those datasets align with intuitively …","url":["https://openreview.net/pdf?id=tgkWxsOapD"]} {"year":"2024","title":"BharatBhasaNet-A unified framework to identify Indian code mix Languages","authors":["S Dey, S Thakur, A Kandwal, R Kumar, S Dasgupta… - IEEE Access, 2024"],"snippet":"In the rapidly globalizing digital communication sphere, the imperative for advanced multilingual text recognition and identification is increasingly evident. Contrasting the previous works, which were predominantly constrained to 2-3 languages, this …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10517602.pdf"]} {"year":"2024","title":"Bi-directional GRU-Based Approach for Multi-Class Text Error Identification System","authors":["S Reeha, BV Nikith, GMP Reddy, PB Pati - 2024 IEEE 9th International Conference …, 2024"],"snippet":"… The C4 dataset, utilized in this study, is an extensively cleaned and curated version of Common Crawl's web crawl corpus. Common Crawl is a publicly available dataset accessible at commoncrawl.org. and boasting over 700 million …","url":["https://ieeexplore.ieee.org/abstract/document/10543361/"]} {"year":"2024","title":"Big City Bias: Evaluating the Impact of Metropolitan Size on Computational Job Market Abilities of Language Models","authors":["C Campanella, R van der Goot - arXiv preprint arXiv:2403.08046, 2024"],"snippet":"Large language models (LLMs) have emerged as a useful technology for job matching, for both candidates and employers. Job matching is often based on a particular geographic location, such as a city or region. However, LLMs have known …","url":["https://arxiv.org/pdf/2403.08046"]} {"year":"2024","title":"Bioeconomy firms and where to find them","authors":["L Kriesch, S Losacker - REGION, 2024"],"snippet":"… This dataset uses the open-source web repository CommonCrawl to identify German company websites and has proven to be a valuable database for spatial research. From this data, we identify bioeconomy firms using a combination of …","url":["https://openjournals.wu-wien.ac.at/ojs/index.php/region/article/view/523/456"]} {"year":"2024","title":"BioMedLM: A 2.7 B Parameter Language Model Trained On Biomedical Text","authors":["E Bolton, A Venigalla, M Yasunaga, D Hall, B Xiong… - arXiv preprint arXiv …, 2024"],"snippet":"… The Pile contains multiple sub-corpora, including general content from Common Crawl and specialized content like PubMed, Github, and FreeLaw. This model serves as an important baseline for comparison with our domain-specific models; …","url":["https://arxiv.org/pdf/2403.18421"]} {"year":"2024","title":"BIS Papers","authors":["I Aldasoro, S Doerr, L Gambacorta, S Notra, T Oliviero… - 2024"],"snippet":"Generative artificial intelligence (gen AI) introduces novel opportunities to strengthen central banks’ cyber security but also presents new risks. 
We use data from a unique survey among cyber security experts at major central banks to shed …","url":["https://www.bis.org/publ/bppdf/bispap145.pdf"]} {"year":"2024","title":"Botlitica: A generative AI-based tool to assist journalists in navigating political propaganda campaigns","authors":["E Musi, EEG Aguilar, L Federico - Studies in Communication Sciences, 2024"],"snippet":"The hype on generative AI has raised concerns about the spread of disinformation but also opened up new opportunities for hybrid journalism. The proliferation of political propaganda campaigns spread across digital media during election periods …","url":["https://www.hope.uzh.ch/scoms/article/download/4270/4464"]} {"year":"2024","title":"Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM","authors":["S Sukhbaatar, O Golovneva, V Sharma, H Xu, XV Lin… - arXiv preprint arXiv …, 2024"],"snippet":"We investigate efficient methods for training Large Language Models (LLMs) to possess capabilities in multiple specialized domains, such as coding, math reasoning and world knowledge. Our method, named Branch-Train-MiX (BTX), starts …","url":["https://arxiv.org/pdf/2403.07816"]} {"year":"2024","title":"Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models","authors":["T Blevins, T Limisiewicz, S Gururangan, M Li, H Gonen… - arXiv preprint arXiv …, 2024"],"snippet":"Despite their popularity in non-English NLP, multilingual language models often underperform monolingual ones due to inter-language competition for model parameters. We propose Cross-lingual Expert Language Models (X-ELM), which …","url":["https://arxiv.org/pdf/2401.10440"]} {"year":"2024","title":"Bridging large language model disparities: Skill tagging of multilingual educational content","authors":["Y Kwak, ZA Pardos - British Journal of Educational Technology, 2024"],"snippet":"The adoption of large language models (LLMs) in education holds much promise. However, like many technological innovations before them, adoption and access can often be inequitable from the outset, creating more divides than they bridge. In …","url":["https://bera-journals.onlinelibrary.wiley.com/doi/pdf/10.1111/bjet.13465"]} {"year":"2024","title":"Bridging the Bosphorus: Advancing Turkish Large Language Models through Strategies for Low-Resource Language Adaptation and Benchmarking","authors":["EC Acikgoz, M Erdogan, D Yuret - arXiv preprint arXiv:2405.04685, 2024"],"snippet":"… This dataset was generated by extracting content from 71 monthly snapshots of the internet via Common Crawl (CC). CulturaX contains version … However, the diversity of these datasets should be sufficient, including common crawl, book …","url":["https://arxiv.org/pdf/2405.04685"]} {"year":"2024","title":"Building a Large Japanese Web Corpus for Large Language Models","authors":["N Okazaki, K Hattori, H Shota, H Iida, M Ohi, K Fujii… - arXiv preprint arXiv …, 2024"],"snippet":"… The corpus developed in this study is made from 21 snapshots of Common Crawl (from CC-MAIN-2020-40 to CC-MAIN-2023-23), and the size of the corpus after cleaning is 173,350,375 pages and 312,093,428,689 characters. To confirm the …","url":["https://arxiv.org/pdf/2404.17733"]} {"year":"2024","title":"Building Question-Answer Data Using Web Register Identification","authors":["A Eskelinen, A Myntti, E Henriksson, S Pyysalo… - Proceedings of the 2024 …, 2024"],"snippet":"… (2015), is a corpus of Finnish collected between 2015 and 2016 from Common Crawl and by crawling .fi domains. 
As all of these resources are Common Crawl based, in the post-processing stage we make sure that duplicate QA pairs are …","url":["https://aclanthology.org/2024.lrec-main.234.pdf"]} {"year":"2024","title":"By Senior Judge Stephanie Domitrovich and Judge Herbert B. Dixon Jr.","authors":["CJJ Federal Judiciary - Judges Journal, 2024"],"snippet":"… For example, the datasets used to train ChatGPT 3 included text from the following sources: (1) Common Crawl, containing petabytes of data collected from the internet since 2008; (2) WebText2, text from webpages with highly ranked Reddit …","url":["https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/judgej63§ion=3"]} {"year":"2024","title":"Bye-bye Bias: What to Consider When Training Generative AI Models on Subjective Marketing Metrics","authors":["C Schamp - 2024"],"snippet":"… However, these models are trained on vast data sets scraped from the Internet, such as Common Crawl or LAION, that contain little information on the types of perceptual measures marketing is often interested in, such as perceived brand …","url":["https://sciendo.com/pdf/10.2478/nimmir-2024-0007"]} {"year":"2024","title":"Cache Me If You Can: The Case For Retrieval Augmentation in Federated Learning","authors":["A Muhamed, P Thaker, MT Diab, V Smith - Privacy Regulation and Protection in Machine …"],"snippet":"We propose retrieval augmentation (RA) as an enhancement to federated learning (FL) that can improve privacy protection and ensure regulatory compliance. FL, primarily designed for data privacy preservation, faces challenges with conventional …","url":["https://openreview.net/pdf?id=MKd1SkDbbz"]} {"year":"2024","title":"CAM 2.0: End-to-End Open Domain Comparative Question Answering System","authors":["A Shallouf, H Herasimchyk, M Salnikov, RG Veliz…"],"snippet":"… After extracting objects and aspects, we look for their matches in the Common Crawl text. It is also important to mention, that for the final … (lemmatized) corpus containing 14.3 billion English sentences from the Common Crawl. To retrieve …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2024-nikishina-cam2.pdf"]} {"year":"2024","title":"Can ChatGPT learn Chinese or Swahili? Considering how large language models might act differently if trained in different languages.","authors":["N Savage"],"snippet":"… Thien Nguyen, a professor of computer science at the University of Oregon, looked at the CommonCrawl corpus, a selection of text scraped … Then, because only a small percentage of the CommonCrawl corpus is in Korean, the company …","url":["https://dl.acm.org/doi/full/10.1145/3640351"]} {"year":"2024","title":"Can ChatGPT Plan Your Retirement?: Generative AI andFinancial Advice","authors":["AW Lo, J Ross"],"snippet":"We identify some of the most pressing issues facing the adoption of large language models (LLMs) in practical settings, and propose a research agenda to reach the next technological inflection point in generative AI. We focus on three challenges …","url":["https://assets.pubpub.org/z7x033cz/Lo%20&%20Ross%20(2024)_Just%20Accepted-01717176710726.pdf"]} {"year":"2024","title":"Can cross-domain term extraction benefit from cross-lingual transfer and nested term labeling?","authors":["HTH Tran, M Martinc, A Repar, N Ljubešić, A Doucet… - Machine Learning, 2024"],"snippet":"Automatic term extraction (ATE) is a natural language processing task that eases the effort of manually identifying terms from domain-specific corpora by providing a list of candidate terms. 
In this paper, we treat ATE as a sequence-labeling task and …","url":["https://link.springer.com/article/10.1007/s10994-023-06506-7"]} {"year":"2024","title":"Cascaded transformer-based networks for wikipedia large-scale image-caption matching","authors":["N Messina, DA Coccomini, A Esuli, F Falchi - Multimedia Tools and Applications, 2024"],"snippet":"… It uses an XLM-RoBERTa masked language model pre-trained on the CommonCrawl dataset and fine-tuned on the image URL-caption match classification task. The CLS token in output from XLM-RoBERTa is attached to a feed-forward …","url":["https://link.springer.com/article/10.1007/s11042-023-17977-0"]} {"year":"2024","title":"Causality-driven multivariate stock movement forecasting","authors":["A Díaz Berenguer, Y Da, MN Bossa, MC Oveneke… - PloS one, 2024"],"snippet":"Our study aims to investigate the interdependence between international stock markets and sentiments from financial news in stock forecasting. We adopt the Temporal Fusion Transformers (TFT) to incorporate intra and inter-market …","url":["https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302197"]} {"year":"2024","title":"CC-GPX: Extracting High-Quality Annotated Geospatial Data from Common Crawl","authors":["I Ilyankou, J Haworth, S Cavazzi - arXiv preprint arXiv:2405.11039, 2024"],"snippet":"The Common Crawl (CC) corpus is the largest open web crawl dataset containing 9.5+ petabytes of data captured since 2008. The dataset is instrumental in training large language models, and as such it has been studied for (un)desirable content …","url":["https://arxiv.org/pdf/2405.11039"]} {"year":"2024","title":"CCSUM: A large-scale and high-quality dataset for abstractive news summarization","authors":["X Jiang, M Dreyer - 2024"],"snippet":"… Accordingly, among 35 million CommonCrawl News articles, we identify pairs of articles about the same news story and use one article’s first sentence as the summary for the other article. To ensure high quality, we apply strict filters whose …","url":["https://www.amazon.science/publications/ccsum-a-large-scale-and-high-quality-dataset-for-abstractive-news-summarization"]} {"year":"2024","title":"CERM: Context-aware Literature-based Discovery via Sentiment Analysis","authors":["JC Young, U Akujuobi - arXiv preprint arXiv:2402.01724, 2024"],"snippet":"Driven by the abundance of biomedical publications, we introduce a sentiment analysis task to understand food-health relationship. Prior attempts to incorporate health into recipe recommendation and analysis systems have primarily focused on …","url":["https://arxiv.org/pdf/2402.01724"]} {"year":"2024","title":"Challenge design roadmap","authors":["HJE Balderas, I Guyon, A Howard, W Reade, S Treguer - AI Competitions and …, 2023"],"snippet":"Challenges can be seen as a type of game that motivates participants to solve serious tasks. As a result, competition organizers must develop effective game rules. 
However, these rules have multiple objectives beyond making the game enjoyable …","url":["https://hal.science/hal-04333280/document"]} {"year":"2024","title":"Chatbots in Airport Customer Service—Exploring Use Cases and Technology Acceptance","authors":["I Auer, S Schlögl, G Glowka - Future Internet, 2024"],"snippet":"… Those pre-trained transformer systems like BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer) have been trained on vast language corpora such as Wikipedia and Common Crawl …","url":["https://www.mdpi.com/1999-5903/16/5/175/pdf"]} {"year":"2024","title":"ChatGPT Alternative Solutions: Large Language Models Survey","authors":["H Alipour, N Pendar, K Roy - arXiv preprint arXiv:2403.14469, 2024"],"snippet":"In recent times, the grandeur of Large Language Models (LLMs) has not only shone in the realm of natural language processing but has also cast its brilliance across a vast array of applications. This remarkable display of LLM capabilities has ignited a …","url":["https://arxiv.org/pdf/2403.14469"]} {"year":"2024","title":"ChatGPT and GPT-4: utilities in the legal sector","authors":["FJ Dosal Gómez, J Nieto Galende - 2024"],"snippet":"Artificial intelligence systems such as ChatGPT, the OpenAI chatbot, based on the language model family GPT (generative pre-trained transformers), as well as other solutions built on this technology and fine-tuned for specific tasks, have generated …","url":["https://www.tecnologia-ciencia-educacion.com/index.php/TCE/article/download/19081/21543/46195"]} {"year":"2024","title":"ChatGPT and Language Translation: A Small Case Study Evaluating English–Mandarin Translation","authors":["C Woodrum - International Conference on Human-Computer …, 2024"],"snippet":"… has been informed speculation that it relies heavily on common crawl [13], a compendium of publicly available … common crawl. The \\(15^{th}\\) most common language, Indonesian, makes up less than 1% of the data set [14, 15]. It is important …","url":["https://link.springer.com/chapter/10.1007/978-3-031-60615-1_10"]} {"year":"2024","title":"ChatGPT as the Marketplace of Ideas: Should Truth-Seeking Be the Goal of AI Content Governance?","authors":["J Zhang - arXiv preprint arXiv:2405.18636, 2024"],"snippet":"As one of the most enduring metaphors within legal discourse, the marketplace of ideas has wielded considerable influence over the jurisprudential landscape for decades. 
A century after the inception of this theory, ChatGPT emerged as a …","url":["https://arxiv.org/pdf/2405.18636"]} {"year":"2024","title":"ChatGPT in Cyber Onslaught and Fortification: Past, Present, and Future","authors":["M Gunda, V Manda, P Naradasu, S Mekala… - 2024 IEEE 9th International …, 2024"],"snippet":"… GPT-1 was unveiled in 2018 and trained on a pair of data sets: over 11,000 books from Book Corpus and the enormous Common Crawl … With 1.5 billion features and a much more extensive and diversified training dataset comprising Common Crawl …","url":["https://ieeexplore.ieee.org/abstract/document/10543384/"]} {"year":"2024","title":"ChatGPT versus NASS clinical guidelines for degenerative spondylolisthesis: a comparative analysis","authors":["W Ahmed, M Saturno, R Rajjoub, AH Duey, B Zaidat… - European Spine Journal, 2024"],"snippet":"… Furthermore, the largest dataset used to train both ChatGPT 3.5 and 4.0 is CommonCrawl, a platform that does not include PubMed and consists predominantly of open access articles, some of which may be of less rigorous if not …","url":["https://link.springer.com/article/10.1007/s00586-024-08198-6"]} {"year":"2024","title":"ChatGPT vs Gemini vs LLaMA on Multilingual Sentiment Analysis","authors":["A Buscemi, D Proverbio - arXiv preprint arXiv:2402.01715, 2024"],"snippet":"… It relied on a dataset referred to as the Common Crawl [30], a publicly accessible repository that comprises billions of web pages, making it one of the most extensive text databases available. It is important to note that the choice of dataset significantly …","url":["https://arxiv.org/pdf/2402.01715"]} {"year":"2024","title":"ChatGPT's applications in marketing: a topic modeling approach","authors":["W Tafesse, A Wien - Marketing Intelligence & Planning, 2024"],"snippet":"Purpose ChatGPT is a versatile technology with practical use cases spanning many professional disciplines including marketing. Being a recent innovation, however, there is a lack of academic insight into its tangible applications in the marketing …","url":["https://www.emerald.com/insight/content/doi/10.1108/MIP-10-2023-0526/full/html"]} {"year":"2024","title":"ChatGPT-3.5 can create and justify short Afrikaans poems: Implications for practice","authors":["CJ van Staden - Suid-Afrikaanse Tydskrif vir Natuurwetenskap en …, 2024"],"snippet":"ChatGPT-3.5 can create poems but the implications thereof has not yet been sufficiently investigated. The purpose of this explorative case study was to determine to which extent ChatGPT-3.5 can create and justify Afrikaans poems. Because …","url":["https://journals.co.za/doi/abs/10.36303/SATNT.2024.43.1.973"]} {"year":"2024","title":"ChatQA: Building GPT-4 Level Conversational QA Models","authors":["Z Liu, W Ping, R Roy, P Xu, M Shoeybi, B Catanzaro - arXiv preprint arXiv …, 2024"],"snippet":"… We collect 7k documents (average ∼1k words per document) from common crawl, which cover a wide range of domains. Each document is used for generation of a single conversational QA sample, which leads to a total of 7k multi-turn QA dialogues …","url":["https://arxiv.org/pdf/2401.10225"]} {"year":"2024","title":"Cheap Learning: Maximising Performance of Language Models for Social Data Science Using Minimal Data","authors":["L Castro-Gonzalez, YL Chung, HR Kirk, J Francis… - arXiv preprint arXiv …, 2024"],"snippet":"The field of machine learning has recently made significant progress in reducing the requirements for labelled training data when building new models. 
These `cheaper' learning techniques hold significant potential for the social sciences, where …","url":["https://arxiv.org/pdf/2401.12295"]} {"year":"2024","title":"Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model","authors":["X Du, Z Yu, S Gao, D Pan, Y Cheng, Z Ma, R Yuan… - arXiv preprint arXiv …, 2024"],"snippet":"In this study, we introduce CT-LLM, a 2B large language model (LLM) that illustrates a pivotal shift towards prioritizing the Chinese language in developing LLMs. Uniquely initiated from scratch, CT-LLM diverges from the conventional …","url":["https://arxiv.org/pdf/2404.04167"]} {"year":"2024","title":"CLAPNQ: Cohesive Long-form Answers from Passages in Natural Questions for RAG systems","authors":["S Rosenthal, A Sil, R Florian, S Roukos - arXiv preprint arXiv:2404.02103, 2024"],"snippet":"… However, they use sentencelevel matching (by encoding sentences for semantic similarity comparisons) to retrieve up to top 7 documents from Common Crawl while avoiding exact matches as the abstractive dataset. In the extractive version, the …","url":["https://arxiv.org/pdf/2404.02103"]} {"year":"2024","title":"CLASSLA-web: Comparable Web Corpora of South Slavic Languages Enriched with Linguistic and Genre Annotation","authors":["N Ljubešić, T Kuzman - arXiv preprint arXiv:2403.12721, 2024"],"snippet":"This paper presents a collection of highly comparable web corpora of Slovenian, Croatian, Bosnian, Montenegrin, Serbian, Macedonian, and Bulgarian, covering thereby the whole spectrum of official languages in the South Slavic language space …","url":["https://arxiv.org/pdf/2403.12721"]} {"year":"2024","title":"Cleaner Pretraining Corpus Curation with Neural Web Scraping","authors":["Z Xu, Z Liu, Y Yan, Z Liu, C Xiong, G Yu - arXiv preprint arXiv:2402.14652, 2024"],"snippet":"… The web-crawled datasets, such as Common Crawl, have been widely used for pretraining, facilitating the development of language … htmlparser serves as the text pre-extraction tool for CommonCrawl6 WET file (containing pre-extracted text), while …","url":["https://arxiv.org/html/2402.14652v1"]} {"year":"2024","title":"CLEARNESS: Coreference Resolution for Generating and Ranking Arguments Extracted from Debate Portals for Queries","authors":["J Weidmann, L Dumani, R Schenkel - 2023"],"snippet":"… Refining Arg-CTRL with data from Common Crawl leads to a higher quality of generated arguments compared to using user discussions from Reddit comments. In the mentioned studies, new arguments are generated through the use of knowledge …","url":["https://ceur-ws.org/Vol-3630/LWDA2023-paper15.pdf"]} {"year":"2024","title":"CLIP and the City: Addressing the Artificial Encoding of Cities in Multimodal Foundation Deep Learning Models","authors":["DN del Castillo, I Neri"],"snippet":"In this project, we propose and explore a computational pipeline to examine urban cultural landscapes through the lens of artificial intelligence, and for questioning modes of embedding culture in machine learning models. By employing machine …","url":["https://www.strand.rs/publishing/2023/OA2023_proceedings_p100.pdf"]} {"year":"2024","title":"Co-constructing AI Authoring Using Ethical Theories","authors":["B Jones - 88th Annual International Conference"],"snippet":"We must show ethical humility, historically contextualizing ethical standards, as we implement AI tools in courses. The pace of change requires agile approaches to defining standards for the ethical use of AI. 
We invited students to co-construct a …","url":["https://researchmap.jp/hiroyuki-london/published_papers/45908700/attachment_file.pdf#page=218"]} {"year":"2024","title":"CoastTerm: a Corpus for Multidisciplinary Term Extraction in Coastal Scientific Literature","authors":["J Delaunay, HTH Tran, CE González-Gallardo… - arXiv preprint arXiv …, 2024"],"snippet":"… – Multilingual pre-trained model: We opt for XLMR [9], a transformer-based model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. This multilingual version of RoBERTa, achieves benchmark performance in ATE for rich-resourced …","url":["https://arxiv.org/pdf/2406.09128"]} {"year":"2024","title":"Comparative Study on Synthetic and Natural Error Analysis with BART & MarianMT","authors":["R Rohit, SA Gandheesh, GS Sannala, PB Pati - 2024 IEEE 9th International …, 2024"],"snippet":"Text is essential for communication, information sharing, knowledge acquisition, and analysis. It shapes public opinion, supports education, and drives online content, making it crucial in various domains. While there are various language models …","url":["https://ieeexplore.ieee.org/abstract/document/10543923/"]} {"year":"2024","title":"Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization","authors":["H Bansal, A Suvarna, G Bhatt, N Peng, KW Chang… - arXiv preprint arXiv …, 2024"],"snippet":"A common technique for aligning large language models (LLMs) relies on acquiring human preferences by comparing multiple generations conditioned on a fixed context. This only leverages the pairwise comparisons when the generations are …","url":["https://arxiv.org/html/2404.00530v1"]} {"year":"2024","title":"Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models","authors":["Z Zhao, N Aletras - arXiv preprint arXiv:2403.12809, 2024"],"snippet":"In many real natural language processing application scenarios, practitioners not only aim to maximize predictive performance but also seek faithful explanations for the model predictions. Rationales and importance distribution given by feature …","url":["https://arxiv.org/pdf/2403.12809"]} {"year":"2024","title":"Comparing Foundations: Insights into the Construction of Financial Causal Knowledge Graphs with and without Ontology","authors":["Z Xu, R Ichise - 人工知能学会全国大会論文集 第 38 回 (2024), 2024"],"snippet":"Simplicity in information navigation, interpretation, and reasoning has positioned mono-relation knowledge graphs (KGs) as a focus of attention, particularly in targeted scenarios. 
However, their deficiency in hierarchical interactions between …","url":["https://www.jstage.jst.go.jp/article/pjsai/JSAI2024/0/JSAI2024_2Q5IS102/_pdf"]} {"year":"2024","title":"Comparing generative and retrieval-based chatbots in answering patient questions regarding age-related macular degeneration and diabetic retinopathy","authors":["KX Cheong, C Zhang, TE Tan, BJ Fenner, WM Wong… - British Journal of …, 2024"],"snippet":"… ChatGPT was trained on multiple extremely large datasets, of which Common Crawl (open repository of web crawl data) constituted the bulk of the datasets.30 31 Google Bard was trained on Infiniset (data comprising public forum dialogue …","url":["https://bjo.bmj.com/content/early/2024/05/15/bjo-2023-324533.abstract"]} {"year":"2024","title":"Compass: Large Multilingual Language Model for South-east Asia","authors":["S Maria - arXiv preprint arXiv:2404.09220, 2024"],"snippet":"… We categorized these sources into seven types: CommonCrawl, C4, Wikipedia, WebText, Academic, Books, and Code. We upsampled and … We expect that the introduction of biases may originate from CommonCrawl, as this substantial dataset …","url":["https://arxiv.org/pdf/2404.09220"]} {"year":"2024","title":"Compilation of a Synthetic Judeo-French Corpus","authors":["I Nikolova-Stoupak, G Lejeune, E Schaeffer-Lacroix - Proceedings of the 8th Joint …, 2024"],"snippet":"This is a short paper describing the process of derivation of synthetic Judeo-French text. Judeo-French is one of a number of rare languages used in speaking and writing by Jewish communities as confined to a particular temporal and …","url":["https://aclanthology.org/2024.latechclfl-1.5.pdf"]} {"year":"2024","title":"Compositional Text-to-Image Generation with Dense Blob Representations","authors":["W Nie, S Liu, M Mardani, C Liu, B Eckart, A Vahdat - arXiv preprint arXiv:2405.08246, 2024"],"snippet":"Existing text-to-image models struggle to follow complex text prompts, raising the need for extra grounding inputs for better controllability. In this work, we propose to decompose a scene into visual primitives - denoted as dense blob representations …","url":["https://arxiv.org/pdf/2405.08246"]} {"year":"2024","title":"Comprehensive analysis of natural language processing","authors":["RK Yadav, A Madaan - Global Journal of Engineering and Technology …, 2024"],"snippet":"… Huge datasets like the Penn Treebank and the Common Crawl drove the development of more sophisticated NLP models. Transfer learning techniques like BERT enabled finetuning of pre-trained models for specific tasks. TensorFlow and …","url":["http://gjeta.com/sites/default/files/GJETA-2024-0058.pdf"]} {"year":"2024","title":"COMPREHENSIVE STUDY OF CLINICAL ENTITY EXTRACTION AND CLASSIFICATION USING LARGE LANGUAGE MODELS","authors":["M Faedi, P Torroni, DA Galassi, DG Grundler…"],"snippet":"This project aims to evaluate various techniques for extraction and categorization of clinical terms in unstructured documents. The purpose of this study is to assist automated systems operating in the biomedical domain by highlighting relevant …","url":["https://amslaurea.unibo.it/30623/1/Tesi_MicheleFaedi.pdf"]} {"year":"2024","title":"Compressing Lengthy Context With UltraGist","authors":["P Zhang, Z Liu, S Xiao, N Shao, Q Ye, Z Dou - arXiv preprint arXiv:2405.16635, 2024"],"snippet":"Compressing lengthy context is a critical but technically challenging problem. 
In this paper, we propose a new method called UltraGist, which is distinguished for its high-quality compression of lengthy context due to the innovative design of the compression and …","url":["https://arxiv.org/pdf/2405.16635"]} {"year":"2024","title":"Compression Represents Intelligence Linearly","authors":["Y Huang, J Zhang, Z Shan, J He - arXiv preprint arXiv:2404.09937, 2024"],"snippet":"… Concretely, for assessing knowledge and commonsense, we have compiled texts from the latest Common Crawl dataset. To evaluate coding ability, we have sourced data from GitHub repositories mainly on the Python language since the downstream …","url":["https://arxiv.org/pdf/2404.09937"]} {"year":"2024","title":"Computational criminology: at-scale quantitative analysis of the evolution of cybercrime forums","authors":["J Hughes - 2024"],"snippet":"Cybercrime forums and marketplaces are used by members to share hacking techniques, general community-building discussions, and trade hacking tools. While there is a large corpus of literature studying these platforms, from a cross-forum …","url":["https://www.repository.cam.ac.uk/bitstreams/0b5dcda3-56e5-4f29-8d3e-e9aecb2b95dc/download"]} {"year":"2024","title":"Computerized diagnostic decision support systems–a comparative performance study of Isabel Pro vs. ChatGPT4","authors":["JM Bridges - Diagnosis, 2024"],"snippet":"… ChatGPT4 is trained on Common Crawl, a publicly available dataset and one of the most extensive text datasets. While not trained explicitly for medical diagnosis, ChatGPT4 accesses medical textbooks, medical websites, medical papers, and …","url":["https://www.degruyter.com/document/doi/10.1515/dx-2024-0033/html"]} {"year":"2024","title":"Comquest: an Adaptive Crawler for User Comments on the Web","authors":["Z Chen - 2024"],"snippet":"This thesis introduces Comquest, an adaptive framework designed for the large-scale collection and integration of user comments from the Web. User comments are featured on many websites and there is growing interest in mining and studying user …","url":["https://scholarshare.temple.edu/bitstream/handle/20.500.12613/10236/Chen_temple_0225E_15673.pdf?sequence=1&isAllowed=y"]} {"year":"2024","title":"Comquest: Large Scale User Comment Crawling and Integration","authors":["Z Chen, L He, A Mukherjee, E Dragut - 2024"],"snippet":"… To train the model, we collect an extensive dataset from the Common Crawl Project. We filter the webpages that show apparent code signals (eg, loading the commenting system libraries) of the supported commenting systems. …","url":["https://www.researchgate.net/profile/Zhijia-Chen-3/publication/379953271_Comquest_Large_Scale_User_Comment_Crawling_and_Integration/links/6622cba3f7d3fc28746df7e6/Comquest-Large-Scale-User-Comment-Crawling-and-Integration.pdf"]} {"year":"2024","title":"Considering the Role of Fairness in Copyright Fair Use","authors":["B Rosenblatt - Houston Law Review, 2023"],"snippet":"Moral intuitions regarding when it is \"fair\" to engage in unauthorized copying or derivation from a copyrighted work may often take into account factors that the fair use statute, 17 USC § 107, does not enumerate among fair use considerations. 
This …","url":["https://houstonlawreview.org/article/92123-considering-the-role-of-fairness-in-copyright-fair-use"]} {"year":"2024","title":"Construction of Text Summarization Corpus in Economics Domain and Baseline Models","authors":["S Jumpathong, A Takhom, P Boonkwan… - 2024"],"snippet":"… It was pre-trained with a data collection of 101 languages from Common Crawl Due to time and resource constraints, we chose mT5-small… It was pre-trained with a data collection of 101 languages from Common Crawl. Due to time and resource …","url":["https://www.jicce.org/journal/view.html?uid=1254&vmd=Full"]} {"year":"2024","title":"Context-aware Transliteration of Romanized South Asian Languages","authors":["C Kirov, C Johny, A Katanova, A Gutkin, B Roark - Computational Linguistics, 2024"],"snippet":"While most transliteration research is focused on single tokens such as named entities – eg, transliteration of “અમદાવાદ” from the Gujarati script to the Latin script “Ahmedabad” – the informal romanization prevalent in South Asia and elsewhere …","url":["https://direct.mit.edu/coli/article-pdf/doi/10.1162/coli_a_00510/2213805/coli_a_00510.pdf"]} {"year":"2024","title":"Contextual Chart Generation for Cyber Deception","authors":["DD Nguyen, D Liebowitz, S Nepal, SS Kanhere… - arXiv preprint arXiv …, 2024"],"snippet":"… (1) HPN-T5 is based on Text-to-Text Transfer Transformer (T5) [42], a highly scalable sequence-to-sequence Transformer pre-trained on a large corpus harvested from Common Crawl. T5 introduced a novel pre-training objective that …","url":["https://arxiv.org/pdf/2404.04854"]} {"year":"2024","title":"Contextual Word Embedding for Biomedical Knowledge Extraction: a Rapid Review and Case Study","authors":["D Vithanage, P Yu, L Wang, C Deng - Journal of Healthcare Informatics Research, 2024"],"snippet":"Recent advancements in natural language processing (NLP), particularly contextual word embedding models, have improved knowledge extraction from biomedical and healthcare texts. However, limited comprehensive research compares these models …","url":["https://link.springer.com/article/10.1007/s41666-023-00157-y"]} {"year":"2024","title":"Contextually Enriched Meta-Learning Ensemble Model for Urdu Sentiment Analysis Symmetry 2023, 15, 645","authors":["K Ahmed, MI Nadeem, D Li, Z Zheng, N Al-Kahtani… - 2023"],"snippet":"… On the other hand, this model makes use of pre-trained word vectors that were trained on “common crawl” and “Wikipedia” through the use of the fastText model [82]. The CBOW algorithm with position-weighting is used to train this word vector. After …","url":["https://www.academia.edu/download/101809833/symmetry-15-00645-v2.pdf"]} {"year":"2024","title":"Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities","authors":["K Fujii, T Nakamura, M Loem, H Iida, M Ohi, K Hattori… - arXiv preprint arXiv …, 2024"],"snippet":"Cross-lingual continual pre-training of large language models (LLMs) initially trained on English corpus allows us to leverage the vast amount of English language resources and reduce the pre-training cost. In this study, we constructed Swallow, an …","url":["https://arxiv.org/pdf/2404.17790"]} {"year":"2024","title":"Controllable Sentence Simplification in Dutch","authors":["T Seidl, V Vandeghinste - Computational Linguistics in the Netherlands Journal, 2024"],"snippet":"Text simplification aims to reduce complexity in vocabulary and syntax, enhancing the readability and comprehension of text. 
This paper presents a supervised sentence simplification approach for Dutch using a pre-trained large language …","url":["https://clinjournal.org/clinj/article/download/171/184"]} {"year":"2024","title":"Controllable Sentence Simplification in Swedish Using Control Prefixes and Mined Paraphrases","authors":["J Monsen, A Jönsson - Proceedings of the 2024 Joint International Conference …, 2024"],"snippet":"… 2020), an opensource repository with tools to download and clean Common Crawl snapshots. Each file in the CC-100 corpus contains … The Common Crawl snapshots contain a large portion of low-quality data, including offensive and …","url":["https://aclanthology.org/2024.lrec-main.349.pdf"]} {"year":"2024","title":"Copilot for Microsoft 365: A Comprehensive End-user Training Plan for Organizations","authors":["M Kytö - 2024"],"snippet":"This thesis presents a comprehensive end user training plan for Copilot for Microsoft 365, a generative AI tool that integrates with Microsoft’s productivity suite of applications. The research is conducted within the context of Sulava Ltd., a Finnish …","url":["https://www.theseus.fi/bitstream/handle/10024/852578/Kyto_Miska.pdf?sequence=2"]} {"year":"2024","title":"Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications","authors":["X Guo, Y Li, Y Peng, X Wei - arXiv preprint arXiv:2402.12216, 2024"],"snippet":"As AIGC has impacted our society profoundly in the past years, ethical issues have received tremendous attention. The most urgent one is the AIGC copyright dilemma, which can immensely stifle the development of AIGC and greatly cost the entire …","url":["https://arxiv.org/pdf/2402.12216"]} {"year":"2024","title":"CoRePooL—Corpus for Resource‐Poor Languages: Badaga Speech Corpus","authors":["HB Barathi Ganesh, G Jyothish Lal, R Jairam… - … Speech Recognition and …, 2024"],"snippet":"This chapter presents a corpus named CoRePooL that stands for Corpus for Resource‐Poor Languages. As voice‐specific human‐machine interaction applications are accelerated by deep learning algorithms, the lack of resources …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/9781394214624.ch10"]} {"year":"2024","title":"Corpus Considerations for Annotator Modeling and Scaling","authors":["OO Sarumi, B Neuendorf, J Plepi, L Flek, J Schlötterer… - arXiv preprint arXiv …, 2024"],"snippet":"Recent trends in natural language processing research and annotation tasks affirm a paradigm shift from the traditional reliance on a single ground truth to a focus on individual perspectives, particularly in subjective tasks. In scenarios where …","url":["https://arxiv.org/html/2404.02340v1"]} {"year":"2024","title":"CorpusNÓS: A massive Galician corpus for training large language models","authors":["I de-Dios-Flores, SP Suárez, CC Pérez, DB Outeiriño… - Proceedings of the 16th …, 2024"],"snippet":"CorpusNÓS is a massive Galician corpus made up of 2.1 B words primarily devised for training large language models. The corpus sources are varied and represent a relatively wide range of genres. CorpusNÓS is, to the best of our knowledge, the …","url":["https://aclanthology.org/2024.propor-1.66.pdf"]} {"year":"2024","title":"Counter (media) Intelligence and Visioning: An Interview with Adam Harvey","authors":["A Harvey, PB Smith"],"snippet":"… Among the largest and most popular datasets used in AI research projects are Common Crawl (for NLP), and ImageNet and COCO (for computer vision), which are all derived from user-generated content. 
In my research project Exposing.ai, I’ve …","url":["https://salford-repository.worktribe.com/preview/2389095/Counter%28media%29%20Visioning%20and%20AI%20Harvey%20and%20Smith.pdf"]} {"year":"2024","title":"Creating Ad Campaigns Using Generative AI Check for updates","authors":["A Bulut, B Arslan - Applications of Generative AI"],"snippet":"Search campaigns consist of ad groups. An ad group contains a related set of keywords and ads. During an online campaign, search advertisers experiment with different marketing messages such as subtle vs. strong being used in ad copies, with …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=61_8EAAAQBAJ&oi=fnd&pg=PA23&dq=commoncrawl&ots=nk-w5Z-zal&sig=6KV6RzRkuWAeVreoPJlrVlxbFNU"]} {"year":"2024","title":"Creating Ad Campaigns Using Generative AI","authors":["A Bulut, B Arslan - Applications of Generative AI, 2024"],"snippet":"… In particular, T5 and PEGASUS models were trained on 750 GB of English-language text from the public Common Crawl web scrape while the BART model was trained on the CNN/Daily Mail dataset, which contains roughly 300K unique news articles …","url":["https://link.springer.com/chapter/10.1007/978-3-031-46238-2_2"]} {"year":"2024","title":"Creating Parallel Corpora for Ukrainian: a German-Ukrainian Parallel Corpus (ParaRook|| DE-UK)","authors":["M Shvedova, A Lukashevskyi - Proceedings of the Third Ukrainian Natural Language …, 2024"],"snippet":"Parallel corpora are currently a popular and vibrantly developing category of linguistic resources, used both in literature and translation studies, as well as in the field of NLP. For Ukrainian, though, there are still not enough significant parallel …","url":["https://aclanthology.org/2024.unlp-1.3.pdf"]} {"year":"2024","title":"CroissantLLM: A Truly Bilingual French-English Language Model","authors":["M Faysse, P Fernandes, N Guerreiro, A Loison, D Alves… - arXiv preprint arXiv …, 2024"],"snippet":"We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local …","url":["https://arxiv.org/pdf/2402.00786"]} {"year":"2024","title":"Cross-cultural Inspiration Detection and Analysis in Real and LLM-generated Social Media Data","authors":["O Ignat, GG Lakshmy, R Mihalcea - arXiv preprint arXiv:2404.12933, 2024"],"snippet":"Inspiration is linked to various positive outcomes, such as increased creativity, productivity, and happiness. Although inspiration has great potential, there has been limited effort toward identifying content that is inspiring, as opposed to just engaging …","url":["https://arxiv.org/pdf/2404.12933"]} {"year":"2024","title":"Cross-Language Harmonization of Linguistic Resources","authors":["D Zeman"],"snippet":"The presented work consists of two parts. In the first part I summarize the main directions of my research since the defense of my PhD thesis in 2005. I start with cross-language transfer of parsing models to languages that have little or no …","url":["https://chres.is.cuni.cz/media/documents/2024/02/25/thesis-without-papers.pdf"]} {"year":"2024","title":"Cross-Lingual Cross-Modal Retrieval with Noise-Robust Fine-Tuning","authors":["R Cai, J Dong, T Liang, Y Liang, Y Wang, X Yang… - IEEE Transactions on …, 2024"],"snippet":"… from snapshots of the CommonCrawl public dataset2. 
Experiments in this paper concern parallel data between English and other four languages, namely Chinese (ZH), French (FR), German (DE) and Czech (CZ). Evaluation metrics. … https://commoncrawl.org/ …","url":["https://www.computer.org/csdl/journal/tk/5555/01/10530137/1WUyX1bSatO"]} {"year":"2024","title":"Cross-lingual Editing in Multilingual Language Models","authors":["H Beniwal, M Singh - arXiv preprint arXiv:2401.10521, 2024"],"snippet":"The training of large language models (LLMs) necessitates substantial data and computational resources, and updating outdated LLMs entails significant efforts and resources. While numerous model editing techniques (METs) have emerged to …","url":["https://arxiv.org/pdf/2401.10521"]} {"year":"2024","title":"Cyber Risks of Machine Translation Critical Errors: Arabic Mental Health Tweets as a Case Study","authors":["H Saadany, A Tantawy, C Orasan - arXiv preprint arXiv:2405.11668, 2024"],"snippet":"With the advent of Neural Machine Translation (NMT) systems, the MT output has reached unprecedented accuracy levels which resulted in the ubiquity of MT tools on almost all online platforms with multilingual content. However, NMT systems, like …","url":["https://arxiv.org/pdf/2405.11668"]} {"year":"2024","title":"Data Augmentation and Large Language Model for Legal Case Retrieval and Entailment","authors":["MQ Bui, DT Do, NK Le, DH Nguyen, KVH Nguyen… - The Review of Socionetwork …, 2024"],"snippet":"The Competition on Legal Information Extraction and Entailment (COLIEE) is a well-known international competition organized each year with the goal of applying machine learning algorithms and techniques in the analysis and understanding of legal …","url":["https://link.springer.com/article/10.1007/s12626-024-00158-2"]} {"year":"2024","title":"Data augmentation for language generation inspired by machine translation","authors":["P Chen - 2024"],"snippet":"The field of natural language processing has witnessed a surge in the adoption of deep learning, which faces notable hurdles when the training data is scarce. This thesis aims to study automatic data augmentation for language generation tasks …","url":["https://era.ed.ac.uk/bitstream/handle/1842/41873/Chen2024.pdf?sequence=1&isAllowed=y"]} {"year":"2024","title":"Data Augmentation with Semantic Enrichment for Deep Learning Invoice Text Classification","authors":["WW Chi, TY Tang, NM Salleh, M Mukred, H AlSalman… - IEEE Access, 2024"],"snippet":"Natural language processing (NLP) is a research field that provides huge potential to automate accounting tasks dealing with text data. This research studies the application of NLP in automatically categorizing invoices based on the invoice text …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10496671.pdf"]} {"year":"2024","title":"Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?","authors":["S Longpre, R Mahari, N Obeng-Marnu, W Brannon… - arXiv preprint arXiv …, 2024"],"snippet":"… Common Crawl is accurate with wide coverage, but not detailed. Hugging Face can be inaccurate with varying levels of detail, but it has extensive coverage. The Data Provenance Initiative is highly accurate and detailed but is currently limited in scope. 
…","url":["https://arxiv.org/pdf/2404.12691"]} {"year":"2024","title":"Data Collection Pipeline for Low-Resource Languages: A Case Study on Constructing a Tetun Text Corpus","authors":["G de Jesus, SS Nunes - Proceedings of the 2024 Joint International Conference …, 2024"],"snippet":"This paper proposes Labadain Crawler, a data collection pipeline tailored to automate and optimize the process of constructing textual corpora from the web, with a specific target to low-resource languages. The system is built on top of Nutch, an …","url":["https://aclanthology.org/2024.lrec-main.390.pdf"]} {"year":"2024","title":"Data Engineering for Scaling Language Models to 128K Context","authors":["Y Fu, R Panda, X Niu, X Yue, H Hajishirzi, Y Kim… - arXiv preprint arXiv …, 2024"],"snippet":"We study the continual pretraining recipe for scaling language models' context lengths to 128K, with a focus on data engineering. We hypothesize that long context modeling, in particular \\textit{the ability to utilize information at arbitrary input …","url":["https://arxiv.org/pdf/2402.10171"]} {"year":"2024","title":"Data Mixing Made Efficient: A Bivariate Scaling Law for Language Model Pretraining","authors":["C Ge, Z Ma, D Chen, Y Li, B Ding - arXiv preprint arXiv:2405.14908, 2024"],"snippet":"Large language models exhibit exceptional generalization capabilities, primarily attributed to the utilization of diversely sourced data. However, conventional practices in integrating this diverse data heavily rely on heuristic schemes, lacking …","url":["https://arxiv.org/pdf/2405.14908"]} {"year":"2024","title":"Data-Centric Methods for Decentralizing Large Language Models","authors":["S Gururangan - 2024"],"snippet":"… from large web crawl services like Common Crawl. Web crawl is perceived to be wholly representative of activity on the Internet [Brügger… Indeed, web dumps like Common Crawl offer the promise of more diverse text than what is available in …","url":["https://digital.lib.washington.edu/researchworks/bitstream/handle/1773/51332/Gururangan_washington_0250E_26513.pdf?sequence=1&isAllowed=y"]} {"year":"2024","title":"DataComp Challenge","authors":["D Brunner - 2023"],"snippet":"… billion sample image-text dataset filtered from a comparable pool of samples to Common Crawl. For the creation of the LAION-2B dataset, first language filtering is applied to the original Common Crawl pool so that only samples with English texts …","url":["https://pub.tik.ee.ethz.ch/students/2023-HS/GA-2023-09.pdf"]} {"year":"2024","title":"Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum","authors":["H Pouransari, CL Li, JHR Chang, PKA Vasu, C Koc… - arXiv preprint arXiv …, 2024"],"snippet":"… Although the benefits of ICLM with large-scale Common Crawl data (used in our experiments) are marginal in regular evaluation, it significantly boosts long-context evaluation metrics from 27.5 to 28.7 by constructing long data sequences through …","url":["https://arxiv.org/pdf/2405.13226"]} {"year":"2024","title":"Dataset Growth","authors":["Z Qin, Z Xu, Y Zhou, Z Zheng, Z Cheng, H Tang… - arXiv preprint arXiv …, 2024"],"snippet":"Deep learning benefits from the growing abundance of available data. Meanwhile, efficiently dealing with the growing data scale has become a challenge. 
Data publicly available are from different sources with various qualities, and it is …","url":["https://arxiv.org/pdf/2405.18347"]} {"year":"2024","title":"Datasets for Large Language Models: A Comprehensive Survey","authors":["Y Liu, J Cao, C Liu, K Ding, L Jin - arXiv preprint arXiv:2402.18041, 2024"],"snippet":"… The first method involves building upon Common Crawl1. Common Crawl is a massive, unstructured, multilingual web corpus that provides public access to web archives by regularly crawling and storing webpage data from the Internet. However …","url":["https://arxiv.org/pdf/2402.18041"]} {"year":"2024","title":"Dataverse: Open-Source ETL (Extract, Transform, Load) Pipeline for Large Language Models","authors":["H Park, S Lee, G Gim, Y Kim, D Kim, C Park - arXiv preprint arXiv:2403.19340, 2024"],"snippet":"To address the challenges associated with data processing at scale, we propose Dataverse, a unified open-source Extract-Transform-Load (ETL) pipeline for large language models (LLMs) with a user-friendly design at its core. Easy addition of …","url":["https://arxiv.org/pdf/2403.19340"]} {"year":"2024","title":"Dated Data: Tracing Knowledge Cutoffs in Large Language Models","authors":["J Cheng, M Marone, O Weller, D Lawrie, D Khashabi… - arXiv preprint arXiv …, 2024"],"snippet":"… Our analysis reveals two reasons for these inconsistencies: (1) temporal biases of CommonCrawl data due to non-trivial amounts of old data in new dumps and (2) complications in LLM deduplication schemes involving semantic duplicates and …","url":["https://arxiv.org/pdf/2403.12958"]} {"year":"2024","title":"DE-COP: Detecting Copyrighted Content in Language Models Training Data","authors":["AV Duarte, X Zhao, AL Oliveira, L Li - arXiv preprint arXiv:2402.09910, 2024"],"snippet":"How can we detect if copyrighted content was used in the training process of a language model, considering that the training data is typically undisclosed? We are motivated by the premise that a language model is likely to identify verbatim …","url":["https://arxiv.org/pdf/2402.09910"]} {"year":"2024","title":"Dear GPT-3: Collaborative Writing with Neural Networks","authors":["J Becker - Artificial Intelligence-Intelligent Art?: Human-Machine …, 2024"],"snippet":"GPT-3 and I engaged in this dialogue about four months ago. I wanted to develop a collaborative writing project, not only to collectively realise a novel, but also to negotiate poetological questions which might arise during the process. These …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=gDoGEQAAQBAJ&oi=fnd&pg=PA189&dq=commoncrawl&ots=SRY3txNCwF&sig=VbXixnNRFp15ZSTcqSY4fdZnrCA"]} {"year":"2024","title":"DeBERTa-BiLSTM: A multi-label classification model of arabic medical questions using pre-trained models and deep learning","authors":["BS Al-Smadi - Computers in Biology and Medicine, 2024"],"snippet":"… Using data from CommonCrawl in 100 different languages, the authors in [38] introduce XLM-RoBERTa, an advanced masked multilingual language model. XLM-RoBERTa outperformed previous multilingual models such as XLM and mBERT. 
[39] …","url":["https://www.sciencedirect.com/science/article/pii/S0010482524000052"]} {"year":"2024","title":"Deconstructing In-Context Learning: Understanding Prompts via Corruption","authors":["N Shivagunde, V Lialin, S Muckatira, A Rumshisky - arXiv preprint arXiv:2404.02054, 2024"],"snippet":"The ability of large language models (LLMs) to \"learn in context\" based on the provided prompt has led to an explosive growth in their use, culminating in the proliferation of AI assistants such as ChatGPT, Claude, and Bard. These AI …","url":["https://arxiv.org/pdf/2404.02054"]} {"year":"2024","title":"Deep learning aided clinical decision support","authors":["R Schneider - 2023"],"snippet":"Medical professionals create vast amounts of clinical texts during patient care. Often, these documents describe medical cases from anamnesis to the final clinical outcome. Automated understanding and selection of relevant medical records pose …","url":["https://elib.uni-stuttgart.de/bitstream/11682/13875/1/20231204%20Thesis%20Deep%20Learning%20aided%20Clinical%20Decision%20Support.pdf"]} {"year":"2024","title":"Deep Learning Based Multi-document Summarization","authors":["C Ma - 2024"],"snippet":"In this era of rapidly advancing technology, the exponential increase of data availability makes analyzing and understanding text files a tedious, labor-intensive, and time-consuming task. Multi-document summarization (MDS) is an effective tool …","url":["https://digital.library.adelaide.edu.au/dspace/bitstream/2440/140499/1/Ma2024_PhD.pdf"]} {"year":"2024","title":"Deep Learning for Cross-Domain Data Fusion in Urban Computing: Taxonomy, Advances, and Outlook","authors":["X Zou, Y Yan, X Hao, Y Hu, H Wen, E Liu, J Zhang, Y Li… - arXiv preprint arXiv …, 2024"],"snippet":"As cities continue to burgeon, Urban Computing emerges as a pivotal discipline for sustainable development by harnessing the power of cross-domain data fusion from diverse sources (eg, geographical, traffic, social media, and environmental data) …","url":["https://arxiv.org/pdf/2402.19348"]} {"year":"2024","title":"Deep Learning Model for Tamil Part-of-Speech Tagging","authors":["H Visuwalingam, R Sakuntharaj, J Alawatugoda… - The Computer Journal, 2024"],"snippet":"… These pre-trained word embeddings are trained on Common Crawl and Wikipedia. This model was trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. …","url":["https://academic.oup.com/comjnl/advance-article/doi/10.1093/comjnl/bxae033/7641754"]} {"year":"2024","title":"DeepSeek LLM: Scaling Open-Source Language Models with Longtermism","authors":["X Bi, D Chen, G Chen, S Chen, D Dai, C Deng, H Ding… - arXiv preprint arXiv …, 2024"],"snippet":"… Our analysis revealed that deduplicating the entire Common Crawl corpus results in higher removal of duplicate instances compared to deduplicating within a single dump. Table 1 illustrates that deduplicating across 91 dumps eliminates four times …","url":["https://arxiv.org/pdf/2401.02954"]} {"year":"2024","title":"DeepSeek-VL: Towards Real-World Vision-Language Understanding","authors":["H Lu, W Liu, B Zhang, B Wang, K Dong, B Liu, J Sun… - arXiv preprint arXiv …, 2024"],"snippet":"We present DeepSeek-VL, an open-source Vision-Language (VL) Model designed for real-world vision and language understanding applications. 
Our approach is structured around three key dimensions: We strive to ensure our data is diverse …","url":["https://arxiv.org/pdf/2403.05525"]} {"year":"2024","title":"DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models","authors":["Z Shao, P Wang, Q Zhu, R Xu, J Song, M Zhang, YK Li… - arXiv preprint arXiv …, 2024"],"snippet":"… the DeepSeekMath Corpus from Common Crawl. As depicted in Figure 2, … Common Crawl, we employ URL-based deduplication and near-deduplication techniques, resulting in 40B HTML web pages. We then recall mathematical web …","url":["https://arxiv.org/pdf/2402.03300"]} {"year":"2024","title":"Deliverable 3.1: WP3 1st Interim technical report","authors":["A May-Wachowius–UC, A Lintulampi–UC…"],"snippet":"1.1. Background This document is the first Interim Technical Report (deliverable 3.1) prepared within the WP3 New usecases for the ESSnet Trusted Smart Statistics–Web Intelligence Network project (ESSnet TSS-WIN). The report covers the period from …","url":["https://cros.ec.europa.eu/system/files/2023-12/wp3_deliverable_3_1_wp3_1st_interim_technical_report_20220330.pdf"]} {"year":"2024","title":"Design of Artificial Intelligence Companion Chatbot","authors":["X Chen, J Kang, C Hu"],"snippet":"With the development of cities and the prevalence of networks, interpersonal relationships have become increasingly distant. When people crave communication, they hope to find someone to confide in. With the rapid advancement of deep …","url":["https://cdn.techscience.cn/files/jnm/2024/TSP_JNM-6/TSP_JNM_45833/TSP_JNM_45833.pdf"]} {"year":"2024","title":"Design, Implementation and Evaluation of a Chatbot for Accounting Firm: A Fine-Tuning Approach With Two Novel Dataset","authors":["M Basilico - 2024"],"snippet":"Artificial intelligence, particularly in the field of Chatbots, is fundamentally reshaping learning, communication, and work paradigms. This phenomenon has sparked growing interest among businesses, viewing Chatbots as a means to streamline …","url":["https://webthesis.biblio.polito.it/secure/31058/1/tesi.pdf"]} {"year":"2024","title":"Designing a course for pre-service science teachers using ChatGPT: what ChatGPT brings to the table","authors":["HZ Okulu, N Muslu - Interactive Learning Environments, 2024"],"snippet":"ChatGPT holds significant potential for enhancing learning through integration into education as an advanced chatbot. 
With the goal of harnessing this potential, our research focused on exploring the utilization of ChatGPT in designing a course plan …","url":["https://www.tandfonline.com/doi/abs/10.1080/10494820.2024.2322462"]} {"year":"2024","title":"Designing an Intelligent System to Map Global Connections","authors":["E Bellamy, K Farrell, A Hopping, J Pinter, M Saju… - 2024"],"snippet":"… Extended Abstract: This study develops a knowledge graph from the Common Crawl News Dataset to provide situational awareness and … We developed a data pipeline to extract semantic content from the Common Crawl News feed and filter it …","url":["https://www.ieworldconference.org/content/WP2024/Papers/GDRKMCC24_2.pdf"]} {"year":"2024","title":"Designing and developing a dedicated Natural Language Processing Framework for Healthcare Information Technology Management and Assessment","authors":["A Luschi - 2024"],"snippet":"The escalating complexity of the hospital environment, propelled by technological advancements, necessitates a comprehensive exploration of the integration and management of diverse tools and technologies in healthcare settings. In this context …","url":["https://flore.unifi.it/bitstream/2158/1353284/1/Luschi_PhD_Thesis.pdf"]} {"year":"2024","title":"Designing Heterogeneous LLM Agents for Financial Sentiment Analysis","authors":["F Xing - arXiv preprint arXiv:2401.05799, 2024"],"snippet":"… GPT-3.5 was trained mainly on the Common Crawl corpus [2], which archives the web. BLOOMZ was trained on an even larger Open-science Open-collaboration Text Sources corpus [19], which is mainly crowd-sourced scientific datasets. The five …","url":["https://arxiv.org/pdf/2401.05799"]} {"year":"2024","title":"Detecting Dementia from Transcribed Speech in Slovak using Pre-trained BERT Models","authors":["J Staš, D Hládek, A Kopnický - 2024 34th International Conference Radioelektronika …, 2024"],"snippet":"… TB of texts written in 100 languages from the Common Crawl dataset [14]; • RemBERT model6: a Rebalanced multilingual BERT model pre-trained on a large unlabeled text created by the combination of mC4 Common Crawl and Wikipedia …","url":["https://ieeexplore.ieee.org/abstract/document/10524067/"]} {"year":"2024","title":"Detecting Hidden Meaning in Stock Images","authors":["P Sülzle"],"snippet":"OBJECTIVE − Automated extraction of hidden meaning in stock images− Is it possible to distinguish what is shown from what is meant?− Examine the divergence of text and image− Analyze the usage of stock images and textual descriptions on the web …","url":["https://downloads.webis.de/theses/slides/suelzle_2023.pdf"]} {"year":"2024","title":"Detection of Hate Speech and Offensive Language CodeMix Text in Dravidian Languages using Cost-Sensitive Learning Approach","authors":["K Sreelakshmi, B Premjith, BR Chakravarthi… - IEEE Access, 2024"],"snippet":"… It was trained using more than two terabytes of filtered CommonCrawl data. It has significantly improved performance on various cross-lingual transfer tasks and outperformed the mBERT model. XLM-R has 24 layers and 16 attention heads …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10419328.pdf"]} {"year":"2024","title":"Detection of phishing addresses and pages with a data set balancing approach by generative adversarial network (GAN) and convolutional neural network (CNN) …","authors":["S Jafari, N Aghaee‐Maybodi - Concurrency and Computation: Practice and …"],"snippet":"Phishing attacks have a remarkable ability to steal user information by using simple techniques. 
Phishing attacks steal valuable information, such as user names and passwords. The loss caused by phishing attacks is significant, and every year …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.8033"]} {"year":"2024","title":"Detection of Sarcasm in Urdu Tweets using Deep Learning and Transformer based Hybrid Approaches","authors":["ME Hassan, M Hussain, I Maab, U Habib, MA Khan… - IEEE Access, 2024"],"snippet":"Sarcasm has a significant role in human communication especially on social media platforms where users express their sentiments through humor, satire, and criticism. The identification of sarcasm is crucial in comprehending the sentiment and the …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10508575.pdf"]} {"year":"2024","title":"Detection of Twitter Spam Using GLoVe Vocabulary Features, Bidirectional LSTM and Convolution Neural Network","authors":["P Manasa, A Malik, I Batra - SN Computer Science, 2024"],"snippet":"… The training of the GloVe model involves a substantial text corpus, like Wikipedia or Common Crawl, which facilitates the learning of connections between words and phrases. For every word in the corpus, the model creates a vector, wherein each …","url":["https://link.springer.com/article/10.1007/s42979-023-02518-1"]} {"year":"2024","title":"Developing ChatGPT for Biology and Medicine: A Complete Review of Biomedical Question Answering","authors":["Q Li, L Li, Y Li - arXiv preprint arXiv:2401.07510, 2024"],"snippet":"ChatGPT explores a strategic blueprint of question answering (QA) in delivering medical diagnosis, treatment recommendations, and other healthcare support. This is achieved through the increasing incorporation of medical domain data via natural …","url":["https://arxiv.org/pdf/2401.07510"]} {"year":"2024","title":"Development and Evaluation of a German Language Model for the Financial Domain","authors":["N Kozaeva, S Hamotskyi, C Hänig - Proceedings of the Joint Workshop of the 7th …, 2024"],"snippet":"… The German colossal, cleaned Common Crawl corpus15 was employed, comprising texts of varying lengths. Further the results of LM performance for mixed datasets (financial corpus mixed with common language sentences and financial …","url":["https://aclanthology.org/2024.finnlp-1.5.pdf"]} {"year":"2024","title":"Development and Evaluation of Pre-trained Language Models for Historical Danish and Norwegian Literary Texts","authors":["A Al-Laith, A Conroy, J Bjerring-Hansen… - Proceedings of the 2024 …, 2024"],"snippet":"We develop and evaluate the first pre-trained language models specifically tailored for historical Danish and Norwegian texts. Three models are trained on a corpus of 19th-century Danish and Norwegian literature: two directly on the corpus with no …","url":["https://aclanthology.org/2024.lrec-main.431.pdf"]} {"year":"2024","title":"Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond","authors":["E Moritz - 2024"],"snippet":"Background: Maybe very high energy cosmic radiation started a synaptic butterfly cascade 2... who knows? 
For one reason or another, I decided to embark on a journey of exploring and discussing the possible role of Artificial Intelligence (AI) on …","url":["https://www.researchgate.net/profile/Elan-Moritz/publication/379960940_Diagonalization_Forcing_FLEX_From_Cantor_to_Cohen_and_Beyond_Alt_Title_Learning_from_Leibniz_Cantor_Turing_Godel_and_Cohen_crawling_towards_AGI/links/662498f743f8df018d1e5573/Diagonalization-Forcing-FLEX-From-Cantor-to-Cohen-and-Beyond-Alt-Title-Learning-from-Leibniz-Cantor-Turing-Goedel-and-Cohen-crawling-towards-AGI.pdf"]} {"year":"2024","title":"Differentially Private Low-Rank Adaptation of Large Language Model Using Federated Learning","authors":["XY Liu, R Zhu, D Zha, J Gao, S Zhong, M Qiu - arXiv preprint arXiv:2312.17493, 2023"],"snippet":"The surge in interest and application of large language models (LLMs) has sparked a drive to fine-tune these models to suit specific applications, such as finance and medical science. However, concerns regarding data privacy have emerged …","url":["https://arxiv.org/pdf/2312.17493"]} {"year":"2024","title":"Digital Threats","authors":["S Samtani, E Raff, H Anderson, E Domschot… - 2024"],"snippet":"… First, we used the common crawl GloVe embedding, created by Stanford [28]. This embedding was trained over 42 billion tokens, with a 300-dimensional vector space, and was used as the embedding layer of the BiLSTM model. Results are …","url":["https://dl.acm.org/doi/pdf/10.1145/3613525"]} {"year":"2024","title":"DiPaCo: Distributed Path Composition","authors":["A Douillard, Q Feng, AA Rusu, A Kuncoro, Y Donchev… - arXiv preprint arXiv …, 2024"],"snippet":"Progress in machine learning (ML) has been fueled by scaling neural network models. This scaling has been enabled by ever more heroic feats of engineering, necessary for accommodating ML approaches that require high bandwidth …","url":["https://arxiv.org/html/2403.10616v1"]} {"year":"2024","title":"Disambiguating Homographs and Homophones Simultaneously: A Regrouping Method for Japanese","authors":["Y Sato - Proceedings of the 2024 Joint International Conference …, 2024"],"snippet":"We present a method that re-groups surface forms into clusters representing synonyms, and help disambiguate homographs as well as homophone. The method is applied post-hoc to trained contextual word embeddings. It is beneficial to …","url":["https://aclanthology.org/2024.lrec-main.442.pdf"]} {"year":"2024","title":"Disambiguating natural language via aligning meaningful descriptions","authors":["Y Xin - 2023"],"snippet":"Artificial Intelligence (AI) technologies are increasingly pervading aspects of our lives. Because people use natural language to communicate with each other, computers should also use natural language to communicate with us. One of the principal …","url":["https://open.bu.edu/bitstream/handle/2144/48024/Xin_bu_0017E_17540.pdf?sequence=6"]} {"year":"2024","title":"Display options","authors":["Z Zhai, C Druckenbrodt, C Thorne"],"snippet":"Chemical patents are a commonly used channel for disclosing novel compounds and reactions, and hence represent important resources for chemical and pharmaceutical research. 
Key chemical data in patents is often presented in tables …","url":["https://www.uiindex.org/search/articledetails/32298116"]} {"year":"2024","title":"Dissecting Whiteness: consistencies and differences in the stereotypes of lower-and upper-class White US Americans","authors":["T Morgenroth, CT Begeny, TA Kirby, B Paaßen, Y Zeng - Self and Identity, 2024"],"snippet":"Economic inequality is increasing in the United States, making categorization and stereotyping based on social class more likely. Yet, social class stereotypes have received relatively little attention. Focusing on spontaneously generated stereotypes …","url":["https://www.tandfonline.com/doi/abs/10.1080/15298868.2024.2322179"]} {"year":"2024","title":"Distance Comparison Operators for Approximate Nearest Neighbor Search: Exploration and Benchmark","authors":["Z Wang, H Xiong, Z He, P Wang - arXiv preprint arXiv:2403.13491, 2024"],"snippet":"Approximate nearest neighbor search (ANNS) on high-dimensional vectors has become a fundamental and essential component in various machine learning tasks. Prior research has shown that the distance comparison operation is the bottleneck of …","url":["https://arxiv.org/html/2403.13491v1"]} {"year":"2024","title":"Do Language Models Care About Text Quality? Evaluating Web-Crawled Corpora Across 11 Languages","authors":["R van Noord, T Kuzman, P Rupnik, N Ljubešić… - arXiv preprint arXiv …, 2024"],"snippet":"… rest of corpora compared in this paper, MaCoCu corpora are not obtained by processing Common Crawl data. Instead, a strategy consisting of crawling relevant internet top-level domains directly for the targeted languages is followed (eg, .al for …","url":["https://arxiv.org/pdf/2403.08693"]} {"year":"2024","title":"Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks","authors":["TY Chang, J Thomason, R Jia"],"snippet":"The concept of localization in LLMs is often mentioned in prior work; however, methods for localization have never been systematically and directly evaluated. We propose two complementary benchmarks that evaluate the ability of localization …","url":["https://terarachang.github.io/assests/loc.pdf"]} {"year":"2024","title":"Do not shut up and do dribble: social media and TV consumption","authors":["M Pazzona, N Spagnolo - Journal of Population Economics, 2024"],"snippet":"… Footnote 11 The NLP model utilized in this study, XLM-RoBERTa, undergoes pre-training on 2.5TB of filtered CommonCrawl data encompassing 100 languages. For tweet classification, we employed the general-purpose Python library TweetNLP (Camacho-Collados …","url":["https://link.springer.com/article/10.1007/s00148-024-01034-7"]} {"year":"2024","title":"Does Lack of Knowledge and Hardship of Information Access Signify Powerful AI? A Large Language Model Perspective","authors":["IA Zahid, SS Joudar - Applied Data Science and Analysis, 2023"],"snippet":"Large Language Models (LLMs) are evolving and expanding enormously. With the consistent improvement of LLMs, more complex and sophisticated tasks will be tackled. Handling various tasks and fulfilling different queries will be more precise …","url":["https://journals.mesopotamian.press/index.php/ADSA/article/download/235/211"]} {"year":"2024","title":"Does the Language Matter? 
Curriculum Learning over Neo-Latin Languages","authors":["G Pucci, L Ranaldi - Proceedings of the 2024 Joint International Conference …, 2024"],"snippet":"Curriculum Learning (CL) is emerging as a relevant technique to reduce the cost of pre-training Large Language Models. The idea, tested for the English language, is to train LLMs by organizing training examples from the simplest to the most complex …","url":["https://aclanthology.org/2024.lrec-main.464.pdf"]} {"year":"2024","title":"Does your data spark joy? Performance gains from domain upsampling at the end of training","authors":["C Blakeney, M Paul, BW Larsen, S Owen, J Frankle - arXiv e-prints, 2024"],"snippet":"Pretraining datasets for large language models (LLMs) have grown to trillions of tokens composed of large amounts of CommonCrawl (CC) web scrape along with smaller, domain-specific datasets. It is expensive to understand the impact of these …","url":["https://ui.adsabs.harvard.edu/abs/2024arXiv240603476B/abstract"]} {"year":"2024","title":"Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research","authors":["L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson… - arXiv preprint arXiv …, 2024"],"snippet":"… In total, we find 0.02% of documents in the 25 Common Crawl snapshots match this filter. … Head, Middle, and Tail parts of our Common Crawl data. The correlation is computed for 24M, … The correlation among the documents flagged for removal …","url":["https://arxiv.org/pdf/2402.00159"]} {"year":"2024","title":"DPHANet: Discriminative Parallel and Hierarchical Attention Network for Natural Language Video Localization","authors":["R Chen, J Tan, Z Yang, X Yang, Q Dai, Y Cheng, L Lin - IEEE Transactions on …, 2024"],"snippet":"Natural Language Video Localization (NLVL) has recently attracted much attention because of its practical significance. However, the existing methods still face the following challenges: 1) When the models learn intra-modal semantic association …","url":["https://ieeexplore.ieee.org/abstract/document/10517423/"]} {"year":"2024","title":"Dr. Nahid Ebrahimi Majd","authors":["R Ferdaws - 2024"],"snippet":"The issue of web phishing attacks has been more prevalent in recent years, and phishing is one of the riskiest online crimes that can have disastrous consequences. The aim of phishing is to collect confidential information by tricking a target and …","url":["https://scholarworks.calstate.edu/downloads/s1784t388"]} {"year":"2024","title":"DrawL: Understanding the Effects of Non-Mainstream Dialects in Prompted Image Generation","authors":["JN Williams, M FitzMorris, O Aka, S Laszlo - arXiv preprint arXiv:2405.05382, 2024"],"snippet":"… Stable Diffusion is trained in part using the LAION-5B dataset [50], which consists of text-image pairs from Common CrawlCommon Crawl includes archived data from a large variety of sources, including sites such as Reddit and a variety of blogs …","url":["https://arxiv.org/pdf/2405.05382"]} {"year":"2024","title":"DrBenchmark: A Large Language Understanding Evaluation Benchmark for French Biomedical Domain","authors":["Y Labrak, A Bazoge, OE Khettari, M Rouvier, N Grabar… - arXiv preprint arXiv …, 2024"],"snippet":"The biomedical domain has sparked a significant interest in the field of Natural Language Processing (NLP), which has seen substantial advancements with pre-trained language models (PLMs). 
However, comparing these models has proven …","url":["https://arxiv.org/pdf/2402.13432"]} {"year":"2024","title":"DsDm: Model-Aware Dataset Selection with Datamodels","authors":["L Engstrom, A Feldmann, A Madry - arXiv preprint arXiv:2401.12926, 2024"],"snippet":"When selecting data for training large-scale models, standard practice is to filter for examples that match human notions of data quality. Such filtering yields qualitatively clean datapoints that intuitively should improve model behavior. However, in …","url":["https://arxiv.org/pdf/2401.12926"]} {"year":"2024","title":"Dual Modalities of Text: Visual and Textual Generative Pre-training","authors":["Y Chai, Q Liu, J Xiao, S Wang, Y Sun, H Wu - arXiv preprint arXiv:2404.10710, 2024"],"snippet":"… Meanwhile, The C4 dataset represents a substantial refinement of the Common Crawl corpus. This dataset, derived from the extensive Common Crawl web scrape, undergoes rigorous cleaning and preprocessing to ensure the quality and relevance …","url":["https://arxiv.org/pdf/2404.10710"]} {"year":"2024","title":"Dynamic Few-shot Learning for Computational Social Science","authors":["R Malla, TG Coan, V Srinivasan, C Boussalis"],"snippet":"Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and have recently demonstrated remarkable potential in social science research through their capacity to efficiently perform a wide range of text-as-data tasks …","url":["https://files.osf.io/v1/resources/nfhe8/providers/osfstorage/65c1f35335be200988a501a7?action=download&direct&version=1"]} {"year":"2024","title":"Easy Problems That LLMs Get Wrong","authors":["S Williams, J Huckle - arXiv preprint arXiv:2405.19616, 2024"],"snippet":"We introduce a comprehensive Linguistic Benchmark designed to evaluate the limitations of Large Language Models (LLMs) in domains such as logical reasoning, spatial intelligence, and linguistic understanding, among others. Through a series of …","url":["https://arxiv.org/pdf/2405.19616"]} {"year":"2024","title":"Echoes of culture: Relationships of implicit and explicit attitudes with contemporary English, historical English, and 53 non-English languages","authors":["TES Charlesworth, K Morehouse, V Rouduri… - Social Psychological and …, 2024"],"snippet":"Attitudes are intertwined with culture and language. But to what extent? Emerging perspectives in attitude research suggest that cultural representations in language are more related to implicitly measured (vs. explicitly measured) attitudes, and that …","url":["https://journals.sagepub.com/doi/abs/10.1177/19485506241256400"]} {"year":"2024","title":"Efficiency Comparison of Dataset Generated by LLMs using Machine Learning Algorithms","authors":["P Pawade, M Kulkarni, S Naik, A Raut, KS Wagh - 2024 International Conference on …, 2024"],"snippet":"… • Bing Chat was trained on web text data, including Common Crawl, Wikipedia, news articles, social media posts, and other publicly available text, via a technique called denoising autoencoder (DAE). In DAE, the model is given a noisy version of a …","url":["https://ieeexplore.ieee.org/abstract/document/10497340/"]} {"year":"2024","title":"Efficient learning in spiking neural networks","authors":["A Rast, MA Aoun, EG Elia, N Crook - Neurocomputing, 2024"],"snippet":"… Training data required is similarly massive: the CommonCrawl dataset is 242 TB in size, and hyperparameter tuning remains an expensive, search-based process that can require ∼ 1000 − 10 , 000 + sweeps. 
After evaluating a number of possible …","url":["https://www.sciencedirect.com/science/article/pii/S0925231224007331"]} {"year":"2024","title":"Efficient Model-Relational Data Management: Challenges and Opportunities","authors":["V Sanca, A Ailamaki - IEEE Transactions on Knowledge and Data …, 2024"],"snippet":"As modern data pipelines continue to collect, produce, and store various data formats, extracting and combining value from traditional and context-rich sources becomes unsuitable for RDBMS. To tap into the dark data, domain experts analyze …","url":["https://ieeexplore.ieee.org/abstract/document/10488724/"]} {"year":"2024","title":"Efficient Training and Inference: Techniques for Large Language Models Using Llama","authors":["SR Cunningham, D Archambault, A Kung"],"snippet":"… The training dataset consisted of a mixture of the Common Crawl, Wikipedia, and BooksCorpus datasets, encompassing a diverse range of linguistic structures and vocabularies. For evaluation, the GLUE benchmark suite was utilized, providing a …","url":["https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.171651876.65094225"]} {"year":"2024","title":"Eliciting Big Five Personality Traits in Large Language Models: A Textual Analysis with Classifier-Driven Approach","authors":["A Hilliard, C Munoz, Z Wu, AS Koshiyama - arXiv preprint arXiv:2402.08341, 2024"],"snippet":"… GPT-3 followed, with 175 billion parameters and training on datasets like Common Crawl [15]. GPT-3.5 Turbo was introduced to enhance real-time performance, and the latest, GPT-4, boasts 1.76 trillion parameters and capabilities …","url":["https://arxiv.org/pdf/2402.08341"]} {"year":"2024","title":"EMBA: Entity Matching using Multi-Task Learning of BERT with Attention-over-Attention","authors":["J Zhang, H Sun, JC Ho - 2024"],"snippet":"… The WDC Product Data Corpus for Large-scale Product Matching [34], was built by extracting product offers from Common Crawl. The WDC datasets serve as a popular benchmark and have been used for evaluation in DITTO, JointBERT, and …","url":["https://openproceedings.org/2024/conf/edbt/paper-76.pdf"]} {"year":"2024","title":"Embedding-based Query Spelling Correction","authors":["I Zelch, G Lahmann, M Hagen - 2024"],"snippet":"For many retrieval systems, correcting spelling errors in the queries that searchers submit is an essential step of query understanding. Inspired by a blog post on spelling correction from 2018, we implement and analyze a simple embedding-based …","url":["https://downloads.webis.de/publications/papers/zelch_2024b.pdf"]} {"year":"2024","title":"Emergent Abilities in Reduced-Scale Generative Language Models","authors":["S Muckatira, V Deshpande, V Lialin, A Rumshisky - arXiv preprint arXiv:2404.02204, 2024"],"snippet":"Large language models can solve new tasks without task-specific fine-tuning. This ability, also known as in-context learning (ICL), is considered an emergent ability and is primarily seen in large language models with billions of parameters. 
This …","url":["https://arxiv.org/html/2404.02204v1"]} {"year":"2024","title":"Emerging AI Tools for Education and Research: Perspective and Policies for IISc","authors":["GJ OC, AK CSA, V Kumar, YN CSA…"],"snippet":"The Director, IISc, constituted a committee on August 21, 2023 with the following mandate:(a) Explore the challenges and benefits of emerging AI tools in the context of academic teaching and learning,(b) provide guidance and recommendations to …","url":["https://iisc.ac.in/wp-content/uploads/2024/03/Report-of-Committee-on-AI-Tools-for-Education-and-Research.pdf"]} {"year":"2024","title":"Emotional significance in Cross-Cultural Semantic Crossmodal Correspondences","authors":["J Alvarado - Proceedings of the Annual Meeting of the Cognitive …, 2024"],"snippet":"Crossmodal correspondences are associations between perceptual features from different senses that aid in crossmodal binding. The semantic coding of these correspondences is expected to capture and mediate the emergence of perceptual …","url":["https://escholarship.org/content/qt2sc9n8q9/qt2sc9n8q9.pdf"]} {"year":"2024","title":"Employing Siamese MaLSTM Model and ELMO Word Embedding for Quora Duplicate Questions Detection","authors":["A Altamimi, M Umer, D Hanif, S Alsubai, TH Kim… - IEEE Access, 2024"],"snippet":"… 3) FastText Subword FastText Subword comprises a collection of 2 million word vectors trained on the Common Crawl dataset, which consists of a massive 600 billion tokens. In contrast to traditional word embeddings, sub-word embeddings …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10440276.pdf"]} {"year":"2024","title":"Empowering hate speech detection: leveraging linguistic richness and deep learning","authors":["IGBJ Abasan, EB Setiawan - Bulletin of Electrical Engineering and Informatics, 2024"],"snippet":"… What they don’t realize is the use of pre-trained models, namely CommonCrawl and Wiki, in extraction using FastText or GloVe. The pre-trained model used is not specific for hate speech detection and, of course, consists of many languages, and …","url":["https://www.beei.org/index.php/EEI/article/download/6938/3660"]} {"year":"2024","title":"Enabling action crossmodality for a pretrained large language model","authors":["A Caesar, O Özdemir, C Weber, S Wermter - Natural Language Processing Journal, 2024"],"snippet":"Natural language processing and vision tasks have seen large improvements recently through the rise of Transformer architectures. The high performing large language models (LLMs) benefit from large textual datasets that are numerously …","url":["https://www.sciencedirect.com/science/article/pii/S2949719124000207"]} {"year":"2024","title":"Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment","authors":["A Agarwalla, A Gupta, A Marques, S Pandit, M Goin… - arXiv preprint arXiv …, 2024"],"snippet":"Large language models (LLMs) have revolutionized Natural Language Processing (NLP), but their size creates computational bottlenecks. 
We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs that achieve full …","url":["https://arxiv.org/pdf/2405.03594"]} {"year":"2024","title":"Enabling self-identification in intelligent agent: insights from computational psychoanalysis","authors":["L Li, C Li - arXiv preprint arXiv:2403.07664, 2024"],"snippet":"Building upon prior framework of computational Lacanian psychoanalysis with the theory of active inference, this paper aims to further explore the concept of self-identification and its potential applications. Beginning with two classic paradigms in psychology …","url":["https://arxiv.org/pdf/2403.07664"]} {"year":"2024","title":"End to End Urdu Abstractive Text Summarization with Dataset and Improvement in Evaluation Metric","authors":["H Raza, W Shahzad - IEEE Access, 2024"],"snippet":"Urdu, being a common language in South Asia, has not received significant attention in terms of language processing compared to more advanced languages. In the field of Natural Language Processing (NLP), the task of text summarization …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10472483.pdf"]} {"year":"2024","title":"EnhancedBERT: A Feature-rich Ensemble Model for Arabic Word Sense Disambiguation with Statistical Analysis and Optimized Data Collection","authors":["S Kaddoura, R Nassar - Journal of King Saud University-Computer and …, 2024"],"snippet":"Accurate assignment of meaning to a word based on its context, known as Word Sense Disambiguation (WSD), remains challenging across languages. Extensive research aims to develop automated methods for determining word senses in …","url":["https://www.sciencedirect.com/science/article/pii/S1319157823004652"]} {"year":"2024","title":"Enhancing 3D Asset Retrieval with Semantic Search","authors":["SA FatemiJahromi - 2024"],"snippet":"Semantic search goes beyond exact keyword matching and instead focuses on the comprehension of search queries' purpose and the surrounding context to deliver relevant search results. In this work, we focus on integrating hybrid search, which …","url":["https://aaltodoc.aalto.fi/bitstreams/76d339fd-de51-4bb0-8a8c-e43d799b7023/download"]} {"year":"2024","title":"ENHANCING AUTOMATIC INVOICE CODING PERFORMANCE WITH UNSTRUCTURED DATA","authors":["T Eerola - 2024"],"snippet":"… The published word vectors are 300 dimensional and trained on the internet data from Common Crawl3. The pretrained word vectors can be further fine-tuned using the provided code and datasets more similar to the final usage scenario to increase …","url":["https://trepo.tuni.fi/bitstream/handle/10024/155981/EerolaTeemu.pdf?sequence=2"]} {"year":"2024","title":"Enhancing Cybersecurity: A Review and Comparative Analysis of Convolutional Neural Network Approaches for Detecting URL-Based Phishing Attacks","authors":["M Nanda, M Saraswat, PK Sharma - e-Prime-Advances in Electrical Engineering …, 2024"],"snippet":"Phishing attempts to mimic the official websites of businesses, including banks, e-commerce, government offices, and financial institutions. Phishing websites aim to collect and retrieve sensitive data from users, including passwords, credit card numbers, email …","url":["https://www.sciencedirect.com/science/article/pii/S2772671124001153"]} {"year":"2024","title":"Enhancing Ecological Knowledge Discovery Using Large Language Models","authors":["V Domazetoski"],"snippet":"Earth is home to an astonishing diversity of life forms. 
Among these, vascular plants stand as one of the most vital and ubiquitous groups, accounting for approximately 80% of global biomass [3]. The census of vascular plants, which presently exceeds …","url":["https://gipplab.org/wp-content/papercite-data/pdf/domazetoski2024.pdf"]} {"year":"2024","title":"Enhancing English Translation Quality Assessment through Knowledge Transfer in Artificial Intelligence Context","authors":["X Zhao - 2024"],"snippet":"Abstract Machine translation technology, which employs computers to autonomously convert text between source and target languages, represents a pivotal realm within artificial intelligence and natural language processing research. This paper …","url":["https://www.researchsquare.com/article/rs-4483708/latest.pdf"]} {"year":"2024","title":"Enhancing knowledge graphs with microdata and LLMs: the case of Schema. org and Wikidata in touristic information","authors":["L Gonzalez-Garcia, G González-Carreño… - The Electronic Library, 2024"],"snippet":"… We have used the Common Crawl (CC) corpus of crawled Web pages to extract metadata published in the wild, and then used a mapping of Schema.org annotations to Wikidata to get estimates of the added value (in terms of additional …","url":["https://www.emerald.com/insight/content/doi/10.1108/EL-06-2023-0160/full/html"]} {"year":"2024","title":"Enhancing Parameter Efficiency in Model Inference Using an Ultralight Inter-Transformer Linear Structure","authors":["H Shi, T Sakai - IEEE Access, 2024"],"snippet":"… It was constructed from the Chuweb21 generated based on the April 2021 block of the Common Crawl dataset. The WWW-4 dataset comprises 50 queries. Robust04 dataset is also an English web corpus generated from TREC disks 4 and 5, and …","url":["https://ieeexplore.ieee.org/iel7/6287639/10380310/10474022.pdf"]} {"year":"2024","title":"Enhancing Phishing Detection, Leveraging Deep Learning Techniques","authors":["A Ullah, RA Shah, SA Nawaz, N Ahmad, MH Malik - Journal of Computing & …, 2024"],"snippet":"… To achieve this, we utilized the Common Crawl Corpus, which aggregates extensive data collected over seven years. This corpus comprises raw web page data, metadata, and textual content. Legitimate domains with substantial backlink …","url":["https://jcbi.org/index.php/Main/article/download/340/252"]} {"year":"2024","title":"Enhancing Product Design through AI-Driven Sentiment Analysis of Amazon Reviews Using BERT","authors":["MK Shaik Vadla, MA Suresh, VK Viswanathan - Algorithms, 2024"],"snippet":"Understanding customer emotions and preferences is paramount for success in the dynamic product design landscape. This paper presents a study to develop a prediction pipeline to detect the aspect and perform sentiment analysis on review …","url":["https://www.mdpi.com/1999-4893/17/2/59"]} {"year":"2024","title":"Enhancing Sentiment Analysis Accuracy in Borobudur Temple Visitor Reviews through Semi-Supervised Learning and SMOTE Upsampling","authors":["C Agustina, P Purwanto, F Farikhin - Journal of Advances in Information Technology, 2024"],"snippet":"The level of visitor satisfaction with tourist destinations can be known from reviews on social media. One method used is to carry out sentiment analysis on comments given by visitors on social media or related websites. 
This study was envisioned as a …","url":["https://www.jait.us/uploadfile/2024/JAIT-V15N4-492.pdf"]} {"year":"2024","title":"Enhancing Sentiment Analysis on Social Media Data with Advanced Deep Learning Techniques.","authors":["HH Nguyen - International Journal of Advanced Computer Science & …, 2024"],"snippet":"This paper introduces a comprehensive methodology for conducting sentiment analysis on social media using advanced deep learning techniques to address the unique challenges of this domain. As digital platforms play an increasingly pivotal …","url":["https://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=2158107X&AN=177684405&h=gXTVgMzshbdYbIB8QB5iY2khYefExufWIzwbWX54fXIt%2Bfzrlh8r7t0tSXHbiXxI0typ97LA8y%2FqCMzmDhr%2FjA%3D%3D&crl=c"]} {"year":"2024","title":"Enhancing Vision-Language Pre-training with Rich Supervisions","authors":["Y Gao, K Shi, P Zhu, E Belval, O Nuriel, S Appalaraju… - arXiv preprint arXiv …, 2024"],"snippet":"… This is expected as our data collected from Common Crawl has different distribution against the original Pix2struct data due to the data size (2M vs. 80M) and potentially different website filtering strategies. Specifically, we filter out website …","url":["https://arxiv.org/pdf/2403.03346"]} {"year":"2024","title":"Enriching Interactive Explanations with Fuzzy Temporal Constraint Networks","authors":["M Canabal-Juanatey, JM Alonso-Moral, A Catala… - International Journal of …, 2024"],"snippet":"… such as BERT [2] (Bidirectional Encoder Representations from Transformers), GPT [26] (Generative Pre-trained Transformer) and their successors (ALBERT [4], RoBERTa [3], BART [7], GPT-3 [5] or GPT-4 [27]), which were trained with huge …","url":["https://www.sciencedirect.com/science/article/pii/S0888613X2400015X"]} {"year":"2024","title":"EpilepsyLLM: Domain-Specific Large Language Model Fine-tuned with Epilepsy Medical Knowledge","authors":["X Zhao, Q Zhao, T Tanaka - arXiv preprint arXiv:2401.05908, 2024"],"snippet":"With large training datasets and massive amounts of computing sources, large language models (LLMs) achieve remarkable performance in comprehensive and generative ability. Based on those powerful LLMs, the model fine-tuned with domain-specific …","url":["https://arxiv.org/pdf/2401.05908"]} {"year":"2024","title":"Ethics of AI in the Teaching of English","authors":["A Piotrowski"],"snippet":"New technologies have always impacted literacy, and artificial intelligence (AI) is no exception (Tyner, 1998). English teachers find themselves having to consider how AI may impact how they teach. NCTE’s Beliefs for Integrating Technology Into the …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=HtIDEQAAQBAJ&oi=fnd&pg=PA221&dq=commoncrawl&ots=Wd6CuXLoH8&sig=-k9bfBVB4zHhmTOBXH6OEGq0Xws"]} {"year":"2024","title":"EthioLLM: Multilingual Large Language Models for Ethiopian Languages with Task Evaluation","authors":["AL Tonja, IA Azime, TD Belay, MG Yigezu… - arXiv preprint arXiv …, 2024"],"snippet":"Large language models (LLMs) have gained popularity recently due to their outstanding performance in various downstream Natural Language Processing (NLP) tasks. 
However, low-resource languages are still lagging behind current state-of-the-art …","url":["https://arxiv.org/pdf/2403.13737"]} {"year":"2024","title":"EvalQuiz: self-assessment generated through language transformer models","authors":["J Kieslinger - 2023"],"snippet":"This thesis explores the constraints of self-assessment creation in higher education, focusing on the lack of tools, standardization, and time. We conduct didactic field expert interviews to investigate how teaching evolves towards learning management …","url":["http://elib.uni-stuttgart.de/bitstream/11682/13953/1/Julian_Kieslinger_Bachelorarbeit_Abgabe_offiziell.pdf"]} {"year":"2024","title":"Evaluating and improving lexical language understanding in neural machine translation","authors":["D Emelin - 2024"],"snippet":"Lexical understanding is an inalienable component of the translation process. In order to correctly map the meaning of a linguistic unit to the appropriate target language expression, the meaning of its constituent words has first to be identified …","url":["https://era.ed.ac.uk/bitstream/handle/1842/41561/EmelinD_2024.pdf?sequence=1"]} {"year":"2024","title":"Evaluating and Mitigating Limitations of Large Language Models in Clinical Decision Making","authors":["P Hager, F Jungmann, K Bhagat, I Hubrecht, M Knauer… - medRxiv, 2024"],"snippet":"Clinical decision making is one of the most impactful parts of a physician's responsibilities and stands to benefit greatly from AI solutions and large language models (LLMs) in particular. However, while LLMs have achieved excellent …","url":["https://www.medrxiv.org/content/medrxiv/early/2024/01/26/2024.01.26.24301810.full.pdf"]} {"year":"2024","title":"Evaluating Differential Privacy Approaches for Query Obfuscation in Information Retrieval","authors":["G Faggioli, N Ferro - 2023"],"snippet":"Protecting the privacy of a user while they interact with an Information Retrieval (IR) system is crucial. This becomes more challenging when the IR system is not cooperative in satisfying the user’s privacy needs. Recent advancements in Natural …","url":["https://ceur-ws.org/Vol-3643/paper5.pdf"]} {"year":"2024","title":"Evaluating embedded semantics for accessibility description of web crawl data","authors":["R Navarrete, D Martinez-Mosquera, L Recalde… - AHFE (2023) International …, 2023"],"snippet":"… The dataset for analysis is provided by Web Data Commons (WDC), an organization that releases extracted data from Common Crawl (CC), the largest web corpus available to the public. 
The dataset was released in October 2021 …","url":["https://www.researchgate.net/profile/Diana-Martinez-Mosquera/publication/372284128_Evaluating_embedded_semantics_for_accessibility_description_of_web_crawl_data/links/658ee9706f6e450f19b4b6a5/Evaluating-embedded-semantics-for-accessibility-description-of-web-crawl-data.pdf"]} {"year":"2024","title":"Evaluating English-language morphological awareness assessments","authors":["CL Hudson Kam, E Sadlier-Brown, S Clark, C Jang… - First Language, 2024"],"snippet":"… These corpora were selected because they contain language that children are exposed to, and so, provide a more realistic picture of children’s potential knowledge than an adult-language based corpus would (eg the Enron email dataset …","url":["https://journals.sagepub.com/doi/pdf/10.1177/01427237241245500"]} {"year":"2024","title":"Evaluating Large Language Model Performance on Haskell","authors":["A Chen - 2024"],"snippet":"I introduce HaskellEval, a Haskell evaluation benchmark for Large Language Models. HaskellEval’s curation leverages a novel synthetic generation framework, streamlining the process of dataset curation by minimizing manual intervention. The …","url":["https://scholarworks.wm.edu/cgi/viewcontent.cgi?article=3209&context=honorstheses"]} {"year":"2024","title":"Evaluating Large Language Models for Generalization and Robustness via Data Compression","authors":["Y Li, Y Guo, F Guerin, C Lin - arXiv preprint arXiv:2402.00861, 2024"],"snippet":"Existing methods for evaluating large language models face challenges such as data contamination, sensitivity to prompts, and the high cost of benchmark creation. To address this, we propose a lossless data compression based evaluation …","url":["https://arxiv.org/pdf/2402.00861"]} {"year":"2024","title":"Evaluating Large Language Models on Academic Literature Understanding and Review: An Empirical Study among Early-stage Scholars","authors":["J Wang, H Hu, Z Wang, Y Song, Y Sheng, D He - 2024"],"snippet":"The rapid advancement of large language models (LLMs) such as ChatGPT makes LLM-based academic tools possible. However, little research has empirically evaluated how scholars perform different types of academic tasks with LLMs …","url":["https://personal.hkust-gz.edu.cn/hedengbo/assets/publicationPDFs/Wang_CHI_2024a.pdf"]} {"year":"2024","title":"Evaluating Multilingual Abstractive Dialogue Summarization in Indian Languages using mT5-small & IndicBART","authors":["M Sharma, G Goyal, A Gupta, R Rani, A Sharma, A Dev - 2024 IEEE 9th International …, 2024"],"snippet":"… The mC4 dataset consists of natural text in 101 languages including Indian languages like Hindi, Mararti, Punjabi and others, that was collected from the public Common Crawl web scrape. Introduced as a multilingual, sequence-to-sequence pre-trained …","url":["https://ieeexplore.ieee.org/abstract/document/10543588/"]} {"year":"2024","title":"Evaluating Shortest Edit Script Methods for Contextual Lemmatization","authors":["O Toporkov, R Agerri - arXiv preprint arXiv:2403.16968, 2024"],"snippet":"Modern contextual lemmatizers often rely on automatically induced Shortest Edit Scripts (SES), namely, the number of edit operations to transform a word form into its lemma. 
In fact, different methods of computing SES have been proposed as an …","url":["https://arxiv.org/pdf/2403.16968"]} {"year":"2024","title":"Evaluating the Experience of LGBTQ+ People Using Large Language Model Based Chatbots for Mental Health Support","authors":["Z Ma, Y Mei, Y Long, Z Su, KZ Gajos - arXiv preprint arXiv:2402.09260, 2024"],"snippet":"LGBTQ+ individuals are increasingly turning to chatbots powered by large language models (LLMs) to meet their mental health needs. However, little research has explored whether these chatbots can adequately and safely provide tailored support …","url":["https://arxiv.org/pdf/2402.09260"]} {"year":"2024","title":"Evaluating the Factuality of Zero-shot Summarizers Across Varied Domains","authors":["S Ramprasad, K Krishna, ZC Lipton, BC Wallace - arXiv preprint arXiv:2402.03509, 2024"],"snippet":"Recent work has shown that large language models (LLMs) are capable of generating summaries zero-shot (ie, without explicit supervision) that, under human assessment, are often comparable or even preferred to manually composed …","url":["https://arxiv.org/pdf/2402.03509"]} {"year":"2024","title":"Evaluating the impact of design decisions on passive DNS-based domain rankings","authors":["V Le Pochat, S Fernandez, T Van Goethem…"],"snippet":"… They recommend that researchers select a random sample of websites among Common Crawl hosts. Ruth et al. [6] compared top lists to Cloudflare traffic data, concluding that the Chrome User Experience Report [11] most accurately represents …","url":["https://lepoch.at/files/domain-ranking-design-decisions-tma24.pdf"]} {"year":"2024","title":"Evaluating the Quality of AI-Generated Items for a Certification Exam","authors":["AD Mead, C Zhou - Journal of Applied Testing Technology, 2024"],"snippet":"OpenAI’s GPT-3 model can write multiple-choice exam items. This paper reviewed the literature on automatic item generation and then described the recent history of OpenAI GPT models and their operation, and then described a methodology for …","url":["http://jattjournal.net/index.php/atp/article/download/173204/117130"]} {"year":"2024","title":"Evaluating Transformers and Linguistic Features integration for Author Profiling tasks in Spanish","authors":["JA García-Díaz, G Beydoun, R Valencia-García - Data & Knowledge Engineering, 2024"],"snippet":"… This model is a training from scratch of RoBERTa, trained on the Spanish texts from mC4, compiled from the public Common Crawl web … This is a multilingual version of RoBERTa, pre-trained with about 2.5 TB of data from the 100 different …","url":["https://www.sciencedirect.com/science/article/pii/S0169023X24000314"]} {"year":"2024","title":"Evaluation and Adaptation of Neural Language Models for Under-Resourced Languages","authors":["W de Vries - 2024"],"snippet":"Abstract Language models are now commonly used by researchers, industry, and anyone interested. 
However, language models of all sizes and types are primarily developed for the English language while efforts on other languages lag behind …","url":["https://research.rug.nl/files/993731020/Complete_thesis.pdf"]} {"year":"2024","title":"Evaluation of AI-generated Responses by Different Artificial Intelligence Chatbots to the Clinical Decision-Making Case-Based Questions in Oral and Maxillofacial …","authors":["A Azadi, F Gorjinejad, H Mohammad-Rahimi, R Tabrizi… - Oral Surgery, Oral Medicine …, 2024"],"snippet":"Objectives This study aims to evaluate the correctness of the generated answers by Google Bard, GPT-3.5, GPT-4, Claude-Instant, and Bing chatbots to decision-making clinical questions in the oral and maxillofacial surgery (OMFS) area. Study Design A …","url":["https://www.sciencedirect.com/science/article/pii/S2212440324000956"]} {"year":"2024","title":"Evaluation of Geographical Distortions in Language Models: A Crucial Step Towards Equitable Representations","authors":["R Decoupes, R Interdonato, M Roche, M Teisseire… - arXiv preprint arXiv …, 2024"],"snippet":"Language models now constitute essential tools for improving efficiency for many professional tasks such as writing, coding, or learning. For this reason, it is imperative to identify inherent biases. In the field of Natural Language Processing …","url":["https://arxiv.org/pdf/2404.17401"]} {"year":"2024","title":"Evaluation of word embedding models used for diachronic semantic change analysis","authors":["Y Maslennikova, V Bochkarev - Journal of Physics: Conference Series, 2024"],"snippet":"… As a reference word2vec model, we used pre-trained word vectors that were trained on English versions of Common Crawl dataset using fastText library [14]. fastText is a library for efficient learning of word representations and sentence …","url":["https://iopscience.iop.org/article/10.1088/1742-6596/2701/1/012082/pdf"]} {"year":"2024","title":"Evidence for a digital divide? Measuring DNS dependencies in the context of the indigenous population of Australia","authors":["R Holz, N Nazemi, O Tavallaie, AY Zomaya"],"snippet":"We recently presented a work-in-progress paper at the Workshop on Transparency, Accountability and User Control for a Responsible Internet (TAURIN 2023). Our paper investigates the relationship between the digital divide, Internet transparency …","url":["https://www.ietf.org/slides/slides-biasws-evidence-for-a-digital-divide-measuring-dns-dependencies-in-the-context-of-the-indigenous-population-of-australia-00.pdf"]} {"year":"2024","title":"EXAMINING ACCURACY HETEROGENEITIES IN CLASSIFICATION OF MULTILINGUAL AI-GENERATED TEXT","authors":["R Subramaniam"],"snippet":"Tools for detection of AI-generated texts are used globally, however, the nature of the apparent accuracy disparities between languages must be further observed. 
This paper aims to examine the nature of these differences through testing OpenAI’s …","url":["https://csitcp.org/paper/13/1312csit21.pdf"]} {"year":"2024","title":"Experimental Evaluation of Possible Feature Combinations for the Detection of Fraudulent Online Shops","authors":["A Janavičiūtė, A Liutkevičius, G Dabužinskas… - Applied Sciences, 2024"],"snippet":"Online shopping has become a common and popular form of shopping, so online attackers try to extract money from customers by creating online shops whose purpose is to compel the buyer to disclose credit card details or to pay money for …","url":["https://www.mdpi.com/2076-3417/14/2/919"]} {"year":"2024","title":"Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda","authors":["J Schneider - arXiv preprint arXiv:2404.09554, 2024"],"snippet":"Generative AI (GenAI) marked a shift from AI being able to recognize to AI being able to generate solutions for a wide variety of tasks. As the generated solutions and applications become increasingly more complex and multi-faceted, novel needs …","url":["https://arxiv.org/pdf/2404.09554"]} {"year":"2024","title":"Exploring LLMs' Capabilities for Error Detection in Dutch L1 and L2 Writing Products","authors":["J Kruijsbergen, S Van Geertruyen, V Hoste… - Computational Linguistics in …, 2024"],"snippet":"This research examines the capabilities of Large Language Models for writing error detection, which can be seen as a first step towards automated writing support. Our work focuses on Dutch writing error detection, targeting two envisaged end-users …","url":["https://www.clinjournal.org/clinj/article/download/179/195"]} {"year":"2024","title":"Exploring Machine Learning for Malware Detection With Feature Selection, Explainable AI, and Generative Adversarial Networks","authors":["DQ Smith Jr - 2023"],"snippet":"This research focuses on the detection of malware in sample datasets using machine learning algorithms. As malware becomes more advanced against detection in the cybersecurity sector, researchers have looked for other methods to …","url":["https://search.proquest.com/openview/3851d57453e23cecb2a424ea1df4e607/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2024","title":"Exploring the competence of ChatGPT for customer and patient service management","authors":["A Haleem, M Javaid, RP Singh - Intelligent Pharmacy, 2024"],"snippet":"The modern language generation model ChatGPT, created by Open Artificial Intelligence (AI), is recognised for its capacity to comprehend context and produce pertinent content. This model is built on the transformer architecture, which enables …","url":["https://www.sciencedirect.com/science/article/pii/S2949866X24000480"]} {"year":"2024","title":"Exploring the Reasoning Abilities of Multimodal Large Language Models (MLLMs): A Comprehensive Survey on Emerging Trends in Multimodal Reasoning","authors":["Y Wang, W Chen, X Han, X Lin, H Zhao, Y Liu, B Zhai… - arXiv preprint arXiv …, 2024"],"snippet":"Strong Artificial Intelligence (Strong AI) or Artificial General Intelligence (AGI) with abstract reasoning ability is the goal of next-generation AI. 
Recent advancements in Large Language Models (LLMs), along with the emerging field of Multimodal Large …","url":["https://arxiv.org/pdf/2401.06805"]} {"year":"2024","title":"Exploring the Robustness of Task-oriented Dialogue Systems for Colloquial German Varieties","authors":["E Artemova, V Blaschke, B Plank - arXiv preprint arXiv:2402.02078, 2024"],"snippet":"Mainstream cross-lingual task-oriented dialogue (ToD) systems leverage the transfer learning paradigm by training a joint model for intent recognition and slot-filling in English and applying it, zero-shot, to other languages. We address a gap in prior …","url":["https://arxiv.org/pdf/2402.02078"]} {"year":"2024","title":"Exploring the Role of ChatGPT in Cardiology: A Systematic Review of the Current Literature","authors":["A Sharma, T Medapalli, M Alexandrou, E Brilakis… - Cureus, 2024"],"snippet":"Chat Generative Pre-Trained Transformer (ChatGPT) is a chatbot based on a large language model that has gained public interest since its release in November 2022. This systematic review examines the current literature on the potential applications …","url":["https://www.cureus.com/articles/244400-exploring-the-role-of-chatgpt-in-cardiology-a-systematic-review-of-the-current-literature.pdf"]} {"year":"2024","title":"Exploring the role of ChatGPT in higher education: opportunities, challenges and ethical considerations","authors":["A Zeb, R Ullah, R Karim - The International Journal of Information and Learning …, 2024"],"snippet":"Purpose This paper aims to examine the opportunities and challenges of using ChatGPT in higher education. Furthermore, it is also discuss the potential risks and plunders of these tools. Design/methodology/approach The paper discuss the use of …","url":["https://www.emerald.com/insight/content/doi/10.1108/IJILT-04-2023-0046/full/html"]} {"year":"2024","title":"EXPRESS: Malay Lexicon Project 3: The Impact of Orthographic-Semantic Consistency on Lexical Decision Latencies","authors":["M Maziyah Mohamed, D Jared - Quarterly Journal of Experimental Psychology, 2024"],"snippet":"Theories of word processing propose that readers are sensitive to statistical co-occurrences between spelling and meaning. Orthographic-Semantic Consistency (OSC) measures provide a continuous estimate of the statistical regularities between …","url":["https://journals.sagepub.com/doi/abs/10.1177/17470218241234668"]} {"year":"2024","title":"Extending the Comparative Argumentative Machine: Multilingualism and Stance Detection","authors":["I Nikishina, A Bondarenko, S Zaczek, OL Haag…"],"snippet":"The comparative argumentative machine CAM can retrieve arguments that answer comparative questions—questions that ask which of several to-be-compared options should be favored in some scenario. In this paper, we describe how we equipped …","url":["https://downloads.webis.de/publications/papers/nikishina_2024.pdf"]} {"year":"2024","title":"Extending Translate-Train for ColBERT-X to African Language CLIR","authors":["E Yang, DJ Lawrie, P McNamee, J Mayfield - arXiv preprint arXiv:2404.08134, 2024"],"snippet":"… We used Common Crawl documents in Afriberta Corpus [6] for the four African languages and collected additional English Common Crawl documents to match the genre. 
We fine-tune the model for 200,000 update steps using a learning rate of 1×10⁻⁵ …","url":["https://arxiv.org/pdf/2404.08134"]} {"year":"2024","title":"Extracting intersectional stereotypes from embeddings: Developing and validating the Flexible Intersectional Stereotype Extraction procedure","authors":["TES Charlesworth, K Ghate, A Caliskan, MR Banaji - PNAS Nexus, 2024"],"snippet":"… text corpora, ranging from static GloVe embeddings trained on 840 billion words from the Common Crawl to contextualized BERT embeddings trained on a combination of Wikipedia and Common Crawl text. Additionally, in supplemental …","url":["https://academic.oup.com/pnasnexus/article/3/3/pgae089/7626925"]} {"year":"2024","title":"Factuality of Large Language Models in the Year 2024","authors":["Y Wang, M Wang, MA Manzoor, G Georgiev, RJ Das… - arXiv preprint arXiv …, 2024"],"snippet":"Large language models (LLMs), especially when instruction-tuned for chat, have become part of our daily lives, freeing people from the process of searching, extracting, and integrating information from multiple sources by offering a …","url":["https://arxiv.org/pdf/2402.02420"]} {"year":"2024","title":"FairDeDup: Detecting and Mitigating Vision-Language Fairness Disparities in Semantic Dataset Deduplication","authors":["E Slyman, S Lee, S Cohen, K Kafle - arXiv preprint arXiv:2404.16123, 2024"],"snippet":"Recent dataset deduplication techniques have demonstrated that content-aware dataset pruning can dramatically reduce the cost of training Vision-Language Pretrained (VLP) models without significant performance losses compared to …","url":["https://arxiv.org/pdf/2404.16123"]} {"year":"2024","title":"Fake news detection in low-resource languages: A novel hybrid summarization approach","authors":["J Alghamdi, Y Lin, S Luo - Knowledge-Based Systems, 2024"],"snippet":"… Pre-training encompasses a vast dataset exceeding 2TB of CommonCrawl data. The overarching concept involves mapping any given language to a language-agnostic vector space, ensuring that inputs in various languages converge to corresponding …","url":["https://www.sciencedirect.com/science/article/pii/S0950705124005185"]} {"year":"2024","title":"Feature-augmented model for multilingual discourse relation classification","authors":["E Metheniti, C Braud, P Muller - Proceedings of the 5th Workshop on Computational …, 2024"],"snippet":"Discourse relation classification within a multilingual, cross-framework setting is a challenging task, and the best-performing systems so far have relied on monolingual and mono-framework approaches. In this paper, we introduce transformer-based …","url":["https://aclanthology.org/2024.codi-1.9.pdf"]} {"year":"2024","title":"Federated learning for privacy-preserving depression detection with multilingual language models in social media posts","authors":["SS Khalil, NS Tawfik, M Spruit - Patterns, 2024"],"snippet":"The incidences of mental health illnesses, such as suicidal ideation and depression, are increasing, which highlights the urgent need for early detection methods. 
There is a growing interest in using natural language processing (NLP) models to analyze …","url":["https://www.cell.com/patterns/fulltext/S2666-3899(24)00105-3"]} {"year":"2024","title":"Few shot clinical entity recognition in three languages: Masked language models outperform LLM prompting","authors":["M Naguib, X Tannier, A Névéol - arXiv preprint arXiv:2402.12801, 2024"],"snippet":"Large Language Models are becoming the go-to solution for many natural language processing tasks, including in specialized domains where their few-shot capacities are expected to yield high performance in low-resource settings. Herein, we aim to …","url":["https://arxiv.org/pdf/2402.12801"]} {"year":"2024","title":"Final Thoughts: Digital Humanities Looking at Generative AI","authors":["M Aguiar, S Araújo - Digital Humanities Looking at the World: Exploring …, 2024"],"snippet":"In this chapter, we examine generative artificial intelligence in the context of Digital Humanities. We commence by providing a concise overview of this technology and its prevalent models. Following that, we provide a brief survey of the existing …","url":["https://link.springer.com/chapter/10.1007/978-3-031-48941-9_28"]} {"year":"2024","title":"Final Words","authors":["M McTear, M Ashurkina - Transforming Conversational AI: Exploring the Power …, 2024"],"snippet":"Conversational AI is a dynamic and fast moving field, and a lot has happened in the six months or so since we began writing this book. We have tried to ensure that what we have covered in the preceding chapters provides a sufficiently general and …","url":["https://link.springer.com/chapter/10.1007/979-8-8688-0110-5_10"]} {"year":"2024","title":"Fine grain emotion analysis in Spanish using linguistic features and transformers","authors":["A Salmerón-Ríos, JA García-Díaz, R Pan… - PeerJ Computer Science, 2024"],"snippet":"Mental health issues are a global concern, with a particular focus on the rise of depression. Depression affects millions of people worldwide and is a leading cause of suicide, particularly among young people. Recent surveys indicate an increase in …","url":["https://peerj.com/articles/cs-1992/"]} {"year":"2024","title":"Fine Tuning LLM for Enterprise: Practical Guidelines and Recommendations","authors":["K VM, H Warrier, Y Gupta - arXiv preprint arXiv:2404.10779, 2024"],"snippet":"… The datasets are massive text corpus like Common crawl, The Pile and code repositories from Github. Further the model can be fine tuned for specific tasks using specialised datasets. One such series of datasets are called Instruction datasets like …","url":["https://arxiv.org/pdf/2404.10779"]} {"year":"2024","title":"Fine-Tuned Large Language Models for Symptom Recognition from Spanish Clinical Text","authors":["MA Shaaban, A Akkasi, A Khan, M Komeili, M Yaqub - arXiv preprint arXiv …, 2024"],"snippet":"… XLM-RL (561M parameters) and XLM-RB (279M parameters) are multilingual masked language models, trained on a filtered CommonCrawl dataset of over two terabytes. We leverage BBS and BBES both pretrained on Spanish clinical data …","url":["https://arxiv.org/pdf/2401.15780"]} {"year":"2024","title":"Fine-tuning BERT-based Models for Negative Content Identification on Indonesian Tweets","authors":["AF Hidayatullah, K Kalinaki, MM Aslam, RY Zakari…"],"snippet":"Social media platforms like Twitter have become substantial sources of user-generated content, enabling people to easily express their emotions and opinions. 
However, this freedom has increased the spread of harmful content, such as abusive language …","url":["https://www.researchgate.net/profile/Wasswa-Shafik-2/publication/378272650_Fine-Tuning_BERT-Based_Models_for_Negative_Content_Identification_on_Indonesian_Tweets/links/65d17d8928b7720cecda923e/Fine-Tuning-BERT-Based-Models-for-Negative-Content-Identification-on-Indonesian-Tweets.pdf"]} {"year":"2024","title":"Fine-tuning Large Language Models for Multigenerator, Multidomain, and Multilingual Machine-Generated Text Detection","authors":["F Xiong, T Markchom, Z Zheng, S Jung, V Ojha… - arXiv preprint arXiv …, 2024"],"snippet":"SemEval-2024 Task 8 introduces the challenge of identifying machine-generated texts from diverse Large Language Models (LLMs) in various languages and domains. The task comprises three subtasks: binary classification in monolingual …","url":["https://arxiv.org/pdf/2401.12326"]} {"year":"2024","title":"FINETUNING COMMERCIAL LARGE LANGUAGE MODELS WITH LORA FOR ENHANCED ITALIAN LANGUAGE UNDERSTANDING","authors":["J Hartsuiker, P Torroni, AE Ziri, DF Alise, F Ruggeri"],"snippet":"The world of Artificial intelligence is booming, especially the fields of natural language processing (NLP) and Computer Vision (CV). Recently the chatGPT models from OpenAI have caused another shock-wave. This was a clear sign to the …","url":["https://amslaurea.unibo.it/30534/1/Hartsuiker,%20J.M.%20Master%20Thesis%20-%20Finetuning%20commercial%20Large%20Language%20Models%20with%20LoRA%20for%20enhanced%20Italian%20language%20understanding.pdf"]} {"year":"2024","title":"Finite State Automata on Multi-Word Units for Efficient Text-Mining","authors":["A Postiglione - Mathematics, 2024"],"snippet":"Text mining is crucial for analyzing unstructured and semi-structured textual documents. This paper introduces a fast and precise text mining method based on a finite automaton to extract knowledge domains. Unlike simple words, multi-word …","url":["https://www.mdpi.com/2227-7390/12/4/506/pdf"]} {"year":"2024","title":"FinTral: A Family of GPT-4 Level Multimodal Financial Large Language Models","authors":["G Bhatia, EMB Nagoudi, H Cavusoglu… - arXiv preprint arXiv …, 2024"],"snippet":"We introduce FinTral, a suite of state-of-the-art multimodal large language models (LLMs) built upon the Mistral-7b model and tailored for financial analysis. FinTral integrates textual, numerical, tabular, and image data. We enhance FinTral with domain-specific …","url":["https://arxiv.org/pdf/2402.10986"]} {"year":"2024","title":"Foregrounding Artist Opinions: A Survey Study on Transparency, Ownership, and Fairness in AI Generative Art","authors":["J Lovato, J Zimmerman, I Smith, P Dodds, J Karson - arXiv preprint arXiv:2401.15497, 2024"],"snippet":"Generative Artificial Intelligence (AI) tools are used to create art-like outputs and aid in the creative process. While these tools have potential benefits for artists, they also have the potential to harm the art workforce and infringe upon artistic and intellectual …","url":["https://arxiv.org/pdf/2401.15497"]} {"year":"2024","title":"Fostering the Ecosystem of Open Neural Encoders for Portuguese with Albertina PT* Family","authors":["R Santos, J Rodrigues, L Gomes, J Silva, A Branco… - arXiv preprint arXiv …, 2024"],"snippet":"… The OSCAR subset for Portuguese we use is based on November/December 2022 version of Common Crawl, which is an automatic crawl from the web. 
Despite being a crawl, the final dataset is of relatively good quality due to the filtering performed …","url":["https://arxiv.org/pdf/2403.01897"]} {"year":"2024","title":"Frequently asked questions on erectile dysfunction: evaluating artificial intelligence answers with expert mentorship","authors":["M Baturu, M Solakhan, TG Kazaz, O Bayrak - International Journal of Impotence …, 2024"],"snippet":"The present study assessed the accuracy of artificial intelligence-generated responses to frequently asked questions on erectile dysfunction. A cross-sectional analysis involved 56 erectile dysfunction-related questions searched on Google …","url":["https://www.nature.com/articles/s41443-024-00898-3"]} {"year":"2024","title":"From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models","authors":["W Messner, T Greene, J Matalone - arXiv preprint arXiv:2312.17256, 2023"],"snippet":"Large language models (LLMs) are able to engage in natural-sounding conversations with humans, showcasing unprecedented capabilities for information retrieval and automated decision support. They have disrupted human-technology …","url":["https://arxiv.org/pdf/2312.17256"]} {"year":"2024","title":"From Cosine Similarity to Likelihood Ratio: Coupling Representations From Machine Learning (and Other Sources) With Cognitive Models","authors":["GE Cox"],"snippet":"Modern machine learning models yield vector representations that capture similarity relations between complex items like text and images. These representations can help explain and predict how individuals respond to those items in particular tasks …","url":["https://osf.io/v7xuz/download"]} {"year":"2024","title":"From Data Creator to Data Reuser: Distance Matters","authors":["CL Borgman, PT Groth - arXiv preprint arXiv:2402.07926, 2024"],"snippet":"… Common Crawl, launched in 2007, now spends roughly $200,000 per year to maintain petabytes of openly available web crawl data (Common Crawl, 2023; Roberts, 2023). The use of these data to train large language models has …","url":["https://arxiv.org/pdf/2402.07926"]} {"year":"2024","title":"From Data Quality to Model Performance: Navigating the Landscape of Deep Learning Model Evaluation","authors":["M Akram, WH Moosa - Deep Learning for Multimedia Processing Applications, 2024"],"snippet":"Deep learning (DL) technologies have revolutionized the analysis of multimedia data, with natural language processing, visual data analytics, and audio recognition being just a few examples of multimedia applications that have effectively utilized DL. 
This …","url":["https://www.taylorfrancis.com/chapters/edit/10.1201/9781032646268-13/data-quality-model-performance-muhammad-akram-wajid-hassan-moosa-najiba"]} {"year":"2024","title":"From Development to Dissemination: Social and Ethical Issues with Text-to-Image AI-Generated Art","authors":["SCY Ho"],"snippet":"… Model developed by the Computer Vision & Learning Group (CompVis) lab at Ludwig Maximilian University of Munich was trained with a subset of the LAION-5B dataset [6], comprising of about 2.3 billion CLIP-filtered image-text pairs by parsing …","url":["https://assets.pubpub.org/sv8awfz7/21682640391665.pdf"]} {"year":"2024","title":"From Form(s) to Meaning: Probing the Semantic Depths of Language Models Using Multisense Consistency","authors":["X Ohmer, E Bruni, D Hupkes - arXiv preprint arXiv:2404.12145, 2024"],"snippet":"… We use the current common crawl statistics to compute an estimate of how low- or high-resource these languages are in web-based corpora. Of this corpus, English constitutes 46% of the data, German 5.8%, Italian 2.7%, Dutch 2.2%, and Swedish …","url":["https://arxiv.org/pdf/2404.12145"]} {"year":"2024","title":"From News to Summaries: Building a Hungarian Corpus for Extractive and Abstractive Summarization","authors":["B Barta, D Lakatos, A Nagy, MK Nyist, J Ács - arXiv preprint arXiv:2404.03555, 2024"],"snippet":"… We use the freely available Common Crawl dataset as a basis for constructing the corpus. It contains petabytes of crawled web pages from the past 25 years and it is available on Amazon S3 in WARC format. Retrieval and deduplication of the raw …","url":["https://arxiv.org/pdf/2404.03555"]} {"year":"2024","title":"From Text to Transformation: A Comprehensive Review of Large Language Models' Versatility","authors":["P Kaur, GS Kashyap, A Kumar, MT Nafis, S Kumar… - arXiv preprint arXiv …, 2024"],"snippet":"This groundbreaking study explores the expanse of Large Language Models (LLMs), such as Generative Pre-Trained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT) across varied domains ranging from …","url":["https://arxiv.org/pdf/2402.16142"]} {"year":"2024","title":"Frozen or Fine-tuned? Analyzing Deep Learning Models and Training Strategies for Optimizing Big Five Personality Traits Prediction from Text","authors":["M Soleimani, HB Kashani - 2024 10th International Conference on Artificial …, 2024"],"snippet":"In today’s digital age, the comprehension and prediction of human personality traits have assumed paramount significance. This study embarks on the task of forecasting the Big Five personality traits through textual data, harnessing the …","url":["https://ieeexplore.ieee.org/abstract/document/10496606/"]} {"year":"2024","title":"Fundus: A Simple-to-Use News Scraper Optimized for High Quality Extractions","authors":["M Dallabetta, C Dobberstein, A Breiding, A Akbik - arXiv preprint arXiv:2403.15279, 2024"],"snippet":"… Specifically, we support the CC-NEWS dataset provided by the COMMONCRAWL initiative. 
At the time of writing, this dataset comprises around 40 terabytes of WARC-formatted data, containing millions of news articles dating back …","url":["https://arxiv.org/pdf/2403.15279"]} {"year":"2024","title":"Fusing Domain-Specific Content from Large Language Models into Knowledge Graphs for Enhanced Zero Shot Object State Classification","authors":["F Gouidis, K Papantoniou, KPT Patkos, A Argyros… - arXiv preprint arXiv …, 2024"],"snippet":"Domain-specific knowledge can significantly contribute to addressing a wide variety of vision tasks. However, the generation of such knowledge entails considerable human labor and time costs. This study investigates the potential of Large Language …","url":["https://arxiv.org/pdf/2403.12151"]} {"year":"2024","title":"Fusion of Deep Learning with Advanced and Traditional Embeddings in Sentiment Analysis","authors":["K Kiran, PD Shenoy - 2024 IEEE 9th International Conference for …, 2024"],"snippet":"In todays changing landscape the abundance of unprocessed data plays a vital role. It is crucial to curate this dataset and conduct a thorough analysis of the sentiments it captures. The aim of this research study is to explore learning techniques such as Bi …","url":["https://ieeexplore.ieee.org/abstract/document/10543279/"]} {"year":"2024","title":"GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection","authors":["J Zhao, Z Zhang, B Chen, Z Wang, A Anandkumar… - arXiv preprint arXiv …, 2024"],"snippet":"Training Large Language Models (LLMs) presents significant memory challenges, predominantly due to the growing size of weights and optimizer states. Common memory-reduction approaches, such as low-rank adaptation (LoRA), add a trainable …","url":["https://arxiv.org/html/2403.03507v1"]} {"year":"2024","title":"Gauging Novelty in Crowdfunding Projects: A Theory-Driven Text Analysis Approach","authors":["Y Lin, W Boh - 2024"],"snippet":"The explosion of crowdfunding projects has called for new ways to screen for novel projects. Building on the multidimensional nature of novelty, we draw on the literature to propose two measures of project novelty that are theoretically sound and …","url":["https://aisel.aisnet.org/pacis2024/track03_ba/track03_ba/1/"]} {"year":"2024","title":"GenDec: A robust generative Question-decomposition method for Multi-hop reasoning","authors":["J Wu, L Yang, Y Ji, W Huang, BF Karlsson, M Okumura - arXiv preprint arXiv …, 2024"],"snippet":"Multi-hop QA (MHQA) involves step-by-step reasoning to answer complex questions and find multiple relevant supporting facts. However, Existing large language models'(LLMs) reasoning ability in multi-hop question answering remains exploration, which is …","url":["https://arxiv.org/pdf/2402.11166"]} {"year":"2024","title":"Gender Bias in Natural Gender Language and Grammatical Gender Language within Children's Literature","authors":["KMW Smolinski - 2024"],"snippet":"There has been much research on the connection between language and gender bias but there is little comparing natural gender language, grammatical gender language, and gender bias. This research is important because it can offer an …","url":["https://digitalcommons.liberty.edu/cgi/viewcontent.cgi?article=6353&context=doctoral"]} {"year":"2024","title":"Gender Identity and Representation in the Context of Economic Development in India","authors":["B Pandey - 2024"],"snippet":"… It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. 
It represents some of the most sophisticated pre-trained models prevalently being used in natural language processing for text classification …","url":["https://knowledge.uchicago.edu/record/11912/files/Pandey_Thesis_MACSSEcon.pdf"]} {"year":"2024","title":"Gendered Grammar or Ingrained Bias? Exploring Gender Bias in Icelandic Language Models","authors":["SR Friðriksdóttir, H Einarsson - Proceedings of the 2024 Joint International …, 2024"],"snippet":"Large language models, trained on vast datasets, exhibit increased output quality in proportion to the amount of data that is used to train them. This data-driven learning process has brought forth a pressing issue where these models may not only reflect …","url":["https://aclanthology.org/2024.lrec-main.671.pdf"]} {"year":"2024","title":"Generating Apparel Images by Using Stable Diffusion with Prompt Recommendation","authors":["XZ Liu, HM Hung, LH Chen - Proceedings of the 38th Annual Conference of the Japanese Society for Artificial Intelligence (2024), 2024"],"snippet":"Fashion is a profound expression of personal identity, style, and culture in daily life. Making oneself look attractive, which often equates to being fashionable or stylish, is always a priority for many people. In this paper, we proposed a clothing …","url":["https://www.jstage.jst.go.jp/article/pjsai/JSAI2024/0/JSAI2024_2Q4IS502/_pdf"]} {"year":"2024","title":"Generating Character Lines in Four-Panel Manga","authors":["M Inaba - Proceedings of the 37th Pacific Asia Conference on …, 2023"],"snippet":"Automatic content generation based on natural language processing is an active research area, especially for story generation. Research on story generation has focused on generating consistent text pertaining to characters’ actions and events; …","url":["https://aclanthology.org/2023.paclic-1.34.pdf"]} {"year":"2024","title":"Generating Probabilistic Scenario Programs from Natural Language","authors":["K Elmaaroufi, D Shankar, A Cismaru… - arXiv preprint arXiv …, 2024"],"snippet":"… Most of these models are fine-tuned variants of a foundational model that was trained on a wider body of text such as Common Crawl. For instance Codex is “a GPT language model fine-tuned on publicly available code from GitHub” and it’s …","url":["https://arxiv.org/pdf/2405.03709"]} {"year":"2024","title":"Generation of API Documentation using Large Language Models–Towards Self-explaining APIs","authors":["Y Jorelle - 2024"],"snippet":"Application Programming Interfaces (APIs) play a crucial role in modern software development. They are key not only in enabling software reuse and adaptation, which greatly increase productivity, but also in facilitating exploration of new …","url":["https://aaltodoc.aalto.fi/bitstreams/b56655a0-32c3-4c19-a816-5bddebf01a6e/download"]} {"year":"2024","title":"Generative adversarial network-based phishing URL detection with variational autoencoder and transformer","authors":["JK Sasi, A Balakrishnan - Int J Artif Intell, 2024"],"snippet":"Phishing attacks pose a constant threat to online security, necessitating the development of efficient tools for identifying malicious URLs. 
In this article, we propose a novel approach to detect phishing URLs employing a generative …","url":["https://www.researchgate.net/profile/Jishnu-K-S/publication/380184574_Generative_adversarial_network-based_phishing_URL_detection_with_variational_autoencoder_and_transformer/links/66308f2b06ea3d0b7419acb6/Generative-adversarial-network-based-phishing-URL-detection-with-variational-autoencoder-and-transformer.pdf"]} {"year":"2024","title":"Generative AI and Large Language Models for Cyber Security: All Insights You Need","authors":["MA Ferrag, F Alwahedi, A Battah, B Cherif, A Mechri… - arXiv preprint arXiv …, 2024"],"snippet":"This paper provides a comprehensive review of the future of cybersecurity through Generative AI and Large Language Models (LLMs). We explore LLM applications across various domains, including hardware design security, intrusion detection …","url":["https://arxiv.org/pdf/2405.12750"]} {"year":"2024","title":"Generative AI and the Continuing Importance of Information Literacy","authors":["L Svoboda, J Dean - 2024"],"snippet":"GAI models are embedded in a variety of applications. These models are trained using existing information, which is then formatted and delivered within the context of the GAI application. Bias, mis- and disinformation, and ethical use of information …","url":["https://deepblue.lib.umich.edu/bitstream/handle/2027.42/192763/DeanSvoboda-GenAI-InfoLit-Final.pdf?sequence=1"]} {"year":"2024","title":"Generative AI for Math: Part I--MathPile: A Billion-Token-Scale Pretraining Corpus for Math","authors":["Z Wang, R Xia, P Liu - arXiv preprint arXiv:2312.17120, 2023"],"snippet":"… expanse of Common Crawl snapshots, a venture we aim to pursue in our future work. … We provide a near-duplicate example found in Common Crawl, as shown in 2 (See Table 8 to … StackExchange that appeared in a document from Common …","url":["https://arxiv.org/pdf/2312.17120"]} {"year":"2024","title":"Generative AI for Unmanned Vehicle Swarms: Challenges, Applications and Opportunities","authors":["G Liu, N Van Huynh, H Du, DT Hoang, D Niyato, K Zhu… - arXiv preprint arXiv …, 2024"],"snippet":"With recent advances in artificial intelligence (AI) and robotics, unmanned vehicle swarms have received great attention from both academia and industry due to their potential to provide services that are difficult and dangerous to perform by humans …","url":["https://arxiv.org/pdf/2402.18062"]} {"year":"2024","title":"Generative AI from Theory to Practice: A Case Study of Financial Advice","authors":["AW Lo, J Ross - 2024"],"snippet":"We identify some of the most pressing issues facing the adoption of large language models (LLMs) in practical settings and propose a research agenda to reach the next technological inflection point in generative AI. We focus on three challenges …","url":["https://mit-genai.pubpub.org/pub/l89uu140"]} {"year":"2024","title":"Generative AI in Education From the Perspective of Students, Educators, and Administrators","authors":["A Ghimire - 2024"],"snippet":"This research explores how advanced artificial intelligence (AI), like the technology that powers tools such as ChatGPT, is changing the way we teach and learn in schools and universities. Imagine AI helping to summarize thick legal documents …","url":["https://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=1119&context=etd2023"]} {"year":"2024","title":"Generative AI","authors":["WR Business, T Taulli"],"snippet":"… • CommonCrawl News: This is a dataset of news stories from websites across the globe. 
…","url":["https://link.springer.com/content/pdf/10.1007/978-1-4842-9367-6.pdf"]} {"year":"2024","title":"Generative AI, Plagiarism, and Copyright Infringement in Legal Documents","authors":["AB Cyphert - Minnesota Journal of Law, Science & Technology, 2024"],"snippet":"… OpenAI acknowledged in the GPT-3 research paper that GPT-3 was trained on the Common Crawl dataset, “which includes everything from traditional news sites like the New York Times to sites like Reddit.” Cyphert, supra note 1, at 407, citing Liz …","url":["https://scholarship.law.umn.edu/cgi/viewcontent.cgi?article=1564&context=mjlst"]} {"year":"2024","title":"Generative AI-Based Text Generation Methods Using Pre-Trained GPT-2 Model","authors":["R Pandey, H Waghela, S Rakshit, A Rangari, A Singh… - arXiv preprint arXiv …, 2024"],"snippet":"This work delved into the realm of automatic text generation, exploring a variety of techniques ranging from traditional deterministic approaches to more modern stochastic methods. Through analysis of greedy search, beam search, top-k …","url":["https://arxiv.org/pdf/2404.01786"]} {"year":"2024","title":"Generative Artificial Intelligence: Models, Benefits, Dangers and Detection of AI-Generated Text on Specialized Domains","authors":["IN Mitrou - 2024"],"snippet":"… However, Radford et al. found that a large amount of document within the hashed pages of Common Crawl was unintelligible. A new approach could be to simply use a curated subset of Common Crawl, but they also created a web scrape from Reddit …","url":["https://pergamos.lib.uoa.gr/uoa/dl/object/3393769/file.pdf"]} {"year":"2024","title":"Generative Knowledge Management for Financial Inclusion Through Financial Literacy: A Systematic Review.","authors":["S Vadari, C Malladi - IUP Journal of Knowledge Management, 2024"],"snippet":"… OpenAI utilized an openly available corpus of webpages called Common Crawl, which in turn derives from a bot that crawls the entire Internet. This Common Crawl dataset contains billions of webpages and is one of the colossal textual datasets …","url":["https://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=25834592&AN=175575074&h=sCel50fK50xEllhIJpNI7DkOR%2BVRtNQcL6GAPBohcUUqh69TZ2AXa6cLuU8G8cSdXlqrcR2ViJzZZ2Q1%2FABnIQ%3D%3D&crl=c"]} {"year":"2024","title":"GeniL: A Multilingual Dataset on Generalizing Language","authors":["AM Davani, S Gubbi, S Dev, S Dave, V Prabhakaran - arXiv preprint arXiv …, 2024"],"snippet":"… For each language, we queried the Multilingual Common Crawl (mC4) language corpus to collect naturally occurring sentences that contain both the terms in the tuples present in SGM. To ensure a diverse representation of different associations …","url":["https://arxiv.org/pdf/2404.05866"]} {"year":"2024","title":"German Text Embedding Clustering Benchmark","authors":["S Wehrli, B Arnrich, C Irrgang - arXiv preprint arXiv:2401.02709, 2024"],"snippet":"This work introduces a benchmark assessing the performance of clustering German text embeddings in different domains. 
This benchmark is driven by the increasing use of clustering neural text embeddings in tasks that require the grouping of texts (such …","url":["https://arxiv.org/pdf/2401.02709"]} {"year":"2024","title":"GFWeb: Measuring the Great Firewall's Web Censorship at Scale","authors":["NP Hoang, J Dalek, M Crete-Nishihata, N Christin…"],"snippet":"… Aggregating the ranking information provided by the Tranco list [62] and the Common Crawl dataset [1], we analyze the popularity of the base censored domains discovered by GFWeb for HTTP(S) filters in comparison to the GFWatch (DNS filter) …","url":["https://www3.cs.stonybrook.edu/~mikepo/papers/gfweb.sec24.pdf"]} {"year":"2024","title":"Ghost Sentence: A Tool for Everyday Users to Copyright Data from Large Language Models","authors":["S Zhao, L Zhu, R Quan, Y Yang - arXiv preprint arXiv:2403.15740, 2024"],"snippet":"Web user data plays a central role in the ecosystem of pre-trained large language models (LLMs) and their fine-tuned variants. Billions of data are crawled from the web and fed to LLMs. How can \\textit{\\textbf{everyday web users}} confirm if LLMs …","url":["https://arxiv.org/html/2403.15740v1"]} {"year":"2024","title":"GPTs: Concerns, Limitations and (Some) Responses","authors":["M Frické"],"snippet":"… The ones of these that are large and containing substantial source material from the Internet (eg from Common Crawl) will have some harms of representation bias. Then, if the training data has bias then so too will LLMs. To address this, there seem …","url":["https://ischool.arizona.edu/sites/ischool.arizona.edu/files/2024-04/Fricke%CC%81%20Information%20on%20Tap.pdf"]} {"year":"2024","title":"Grammatical versus Spelling Error Correction: An Investigation into the Responsiveness of Transformer-Based Language Models Using BART and MarianMT","authors":["R Raju, PB Pati, SA Gandheesh, GS Sannala… - Journal of Information & …, 2024"],"snippet":"Text continues to remain a relevant form of representation for information. Text documents are created either in digital native platforms or through the conversion of other media files such as images and speech. While the digital native text is …","url":["https://www.worldscientific.com/doi/abs/10.1142/S0219649224500370"]} {"year":"2024","title":"Grammatical vs Spelling Error Correction: An Investigation into the Responsiveness of Transformer-based Language Models using BART and MarianMT","authors":["R Raju, PB Pati, SA Gandheesh, GS Sannala, S KS - arXiv preprint arXiv:2403.16655, 2024"],"snippet":"… C4 Dataset is an Open-Source dataset obtained with Common Crawl web scrape [30]. This dataset contains millions of collected sentences along with their target sentences. The collected statements, referred to as input sentences in this work …","url":["https://arxiv.org/pdf/2403.16655"]} {"year":"2024","title":"Granite Code Models: A Family of Open Foundation Models for Code Intelligence","authors":["M Mishra, M Stallone, G Zhang, Y Shen, A Prasad… - arXiv preprint arXiv …, 2024"],"snippet":"Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being integrated into software development environments to improve the productivity of human programmers, and …","url":["https://arxiv.org/pdf/2405.04324"]} {"year":"2024","title":"Hallucinating (or poorly fed) LLMs? 
The problem of data accuracy","authors":["E Stringhi - i-lex, 2023"],"snippet":"… This is the case of “Common Crawl”, which consists in a massive collection of web pages and … To give a picture, the Common Crawl dataset constitutes nearly a trillion words. … were publicly available online and memorised in the Common …","url":["https://i-lex.unibo.it/article/download/18877/17434"]} {"year":"2024","title":"Harder Tasks Need More Experts: Dynamic Routing in MoE Models","authors":["Q Huang, Z An, N Zhuang, M Tao, C Zhang, Y Jin, K Xu… - arXiv preprint arXiv …, 2024"],"snippet":"In this paper, we introduce a novel dynamic expert selection framework for Mixture of Experts (MoE) models, aiming to enhance computational efficiency and model performance by adjusting the number of activated experts based on input difficulty …","url":["https://arxiv.org/html/2403.07652v1"]} {"year":"2024","title":"Hardware Phi-1.5 B: A Large Language Model Encodes Hardware Domain Specific Knowledge","authors":["W Fu, S Li, Y Zhao, H Ma, R Dutta, X Zhang, K Yang…"],"snippet":"… The first segment includes data from specialized platforms such as Arxiv , Books, Wikipedia, and StackExchange; the second segment is derived from broader internet content via CommonCrawl and C4. Table II provides a comprehensive …","url":["https://ece.k-state.edu/research/hardware-security/papers/ASP_DAC_PreTrainLM_2024_CR.pdf"]} {"year":"2024","title":"Harmony in the Australian Domain Space","authors":["X Gong, PX McCarthy, MA Rizoiu, P Boldi - arXiv preprint arXiv:2404.10006, 2024"],"snippet":"… Though not focused on localized networks, the extensive and all-encompassing web graph from Common Crawl utilized in our study enables the extraction of implicit information on a broader scale. … We utilize the datasets mainly from …","url":["https://arxiv.org/pdf/2404.10006"]} {"year":"2024","title":"Harnessing Artificial Intelligence in Bariatric Surgery: Comparative Analysis of ChatGPT-4, Bing, and Bard in Generating Clinician-Level Bariatric Surgery …","authors":["Y Lee, T Shin, L Tessier, A Javidan, J Jung, D Hong… - Surgery for Obesity and Related …"],"snippet":"Background The formulation of clinical recommendations pertaining to bariatric surgery is essential in guiding healthcare professionals. However, the extensive and continuously evolving body of literature in bariatric surgery presents considerable …","url":["https://www.soard.org/article/S1550-7289(24)00118-7/fulltext"]} {"year":"2024","title":"Harnessing generative AI for overcoming labeled data challenges in social media NLP","authors":["CR Liyanage - 2023"],"snippet":"With the introduction of Transformers and Large Language Models, the field of NLP has significantly evolved. Generative AI, a prominent transformer-based technology for crafting human-like content, has proven powerful skills across numerous NLP …","url":["https://knowledgecommons.lakeheadu.ca/bitstream/handle/2453/5275/LiyanageC2023m-1a.pdf?sequence=1&isAllowed=y"]} {"year":"2024","title":"Harnessing the Power of Metadata for Enhanced Question Retrieval in Community Question Answering","authors":["S Ghasemi, A Shakery - IEEE Access, 2024"],"snippet":"… GPT-3 was trained on an open source dataset called ‘CommonCrawl’ and other texts from OpenAI such as Wikipedia entries. GPT-4 is the newest version of OpenAI’s language model systems. 
In this article, we fine-tune GPT-2 and GPT-3 pretrained …","url":["https://ieeexplore.ieee.org/iel7/6287639/10380310/10525684.pdf"]} {"year":"2024","title":"HCI International 2024 Posters: 26th International Conference on Human-Computer Interaction, HCII 2024, Washington, DC, USA, June 29–July 4, 2024, Proceedings …","authors":["C Stephanidis - 2024"],"snippet":"Preliminary scientific results, professional news, or work in progress, described in the form of short research papers (4–11 pages long), constitute a popular submission type among the International Conference on Human-Computer …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=aMULEQAAQBAJ&oi=fnd&pg=PR5&dq=commoncrawl&ots=hRNpPdcSEp&sig=R0smc7pGlEGxpECYA1HDV-wXEBA"]} {"year":"2024","title":"HGT: Leveraging Heterogeneous Graph-enhanced Large Language Models for Few-shot Complex Table Understanding","authors":["R Jin, Y Li, G Qi, N Hu, YF Li, J Chen, J Wang, Y Chen… - arXiv preprint arXiv …, 2024"],"snippet":"Table understanding (TU) has achieved promising advancements, but it faces the challenges of the scarcity of manually labeled tables and the presence of complex table structures.To address these challenges, we propose HGT, a framework with a …","url":["https://arxiv.org/pdf/2403.19723"]} {"year":"2024","title":"Hierarchical Multimodal Pre-training for Visually Rich Webpage Understanding","authors":["H Xu, L Chen, Z Zhao, D Ma, R Cao, Z Zhu, K Yu - arXiv preprint arXiv:2402.18262, 2024"],"snippet":"… Common Crawl is a publicly available web crawl dataset that collects webpages from the internet. Instead of using the source code, we collect webpage links from Common Crawl. We traverse all links in a dataset snapshot 4 and categorize and …","url":["https://arxiv.org/pdf/2402.18262"]} {"year":"2024","title":"Higher Education's Generative Artificial Intelligence Paradox: The Meaning of Chatbot Mania","authors":["J Rudolph, MF Ismail, S Popenici - Journal of University Teaching and Learning …, 2024"],"snippet":"Higher education is currently under a significant transformation due to the emergence of generative artificial intelligence (GenAI) technologies, the hype surrounding GenAI and the increasing influence of educational technology business …","url":["https://www.researchgate.net/profile/Stefan-Popenici/publication/379956424_Higher_Education's_Generative_Artificial_Intelligence_Paradox_The_Meaning_of_Chatbot_Mania/links/66237af9f7d3fc2874703875/Higher-Educations-Generative-Artificial-Intelligence-Paradox-The-Meaning-of-Chatbot-Mania.pdf"]} {"year":"2024","title":"Historical Dutch Spelling Normalization with Pretrained Language Models","authors":["A Wolters, A Van Cranenburgh - Computational Linguistics in the Netherlands …, 2024"],"snippet":"… These texts are from the common crawl corpus, which contains petabytes of data scraped from the internet. Further pretraining on in-domain data can make the model more suited for the downstream task it will be finetuned for, and since our task …","url":["https://www.clinjournal.org/clinj/article/download/178/194"]} {"year":"2024","title":"History, Development, and Principles of Large Language Models-An Introductory Survey","authors":["Z Chu, S Ni, Z Wang, X Feng, C Li, X Hu, R Xu, M Yang… - arXiv preprint arXiv …, 2024"],"snippet":"Language models serve as a cornerstone in natural language processing (NLP), utilizing mathematical methods to generalize language laws and knowledge for prediction and generation. 
Over extensive research spanning decades, language …","url":["https://arxiv.org/pdf/2402.06853"]} {"year":"2024","title":"Holographic Embeddings for Text and Graphs A Master's Thesis Presented to The Faculty of the Graduate School of Arts and Sciences","authors":["T Obiso - 2024"],"snippet":"… Static word embeddings such as word2vec, Global Vectors for Word Representation (GloVe), and fastText are trained on large amounts of natural language text sources like Common Crawl and Wikipedia dumps. These models …","url":["https://scholarworks.brandeis.edu/view/pdfCoverPage?instCode=01BRAND_INST&filePid=13511842530001921&download=true"]} {"year":"2024","title":"Homonym Sense Disambiguation in the Georgian Language","authors":["D Melikidze, A Gamkrelidze - arXiv preprint arXiv:2405.00710, 2024"],"snippet":"This research proposes a novel approach to the Word Sense Disambiguation (WSD) task in the Georgian language, based on supervised fine-tuning of a pre-trained Large Language Model (LLM) on a dataset formed by filtering the Georgian …","url":["https://arxiv.org/pdf/2405.00710"]} {"year":"2024","title":"How Do We Learn What We Cannot Say?","authors":["D Yakubov - 2024"],"snippet":"The contributions of this thesis are two-fold. First, this thesis presents UDTube, an easily usable software developed to perform morphological analysis in a multi-task fashion. This work shows the strong performance of UDTube versus the current state-of-the-art …","url":["https://academicworks.cuny.edu/cgi/viewcontent.cgi?article=6726&context=gc_etds"]} {"year":"2024","title":"How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites","authors":["Z Chen, W Wang, H Tian, S Ye, Z Gao, E Cui, W Tong… - arXiv preprint arXiv …, 2024"],"snippet":"In this report, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple …","url":["https://arxiv.org/pdf/2404.16821"]} {"year":"2024","title":"How Gender Interacts with Political Values: A Case Study on Czech BERT Models","authors":["AA Ali, J Libovický - arXiv preprint arXiv:2403.13514, 2024"],"snippet":"Neural language models, which reach state-of-the-art results on most natural language processing tasks, are trained on large text corpora that inevitably contain value-burdened content and often capture undesirable biases, which the models …","url":["https://arxiv.org/pdf/2403.13514"]} {"year":"2024","title":"How Large Corpora Sizes Influence the Distribution of Low Frequency Text n-grams","authors":["JF Silva, JC Cunha - Pacific-Asia Conference on Knowledge Discovery and …, 2024"],"snippet":"The prediction of the numbers of distinct word n-grams and their frequency distributions in text corpora is important in domains like information processing and language modelling. With big data corpora, there is an increased application …","url":["https://link.springer.com/chapter/10.1007/978-981-97-2259-4_16"]} {"year":"2024","title":"How Lexical is Bilingual Lexicon Induction?","authors":["H Kohli, H Feng, N Dronen, C McCarter, S Moeini… - arXiv preprint arXiv …, 2024"],"snippet":"… derived from Common Crawl and Wikipedia. 
The plot visualizes the Spearman’s Rank correlation of term frequencies between each of the source (by row) and target (by column) language pairs in the 5k vocabularies in the XLING corpus derived from …","url":["https://arxiv.org/pdf/2404.04221"]} {"year":"2024","title":"How many news websites block AI crawlers?","authors":["R Fletcher - 2024"],"snippet":"In this factsheet we describe the proportion of news websites in ten countries that block AI (artificial intelligence) crawlers. We find that, (i) by the end 2023, 48% of the most widely used news websites across ten countries were blocking OpenAI’s (ChatGPT) …","url":["https://ora.ox.ac.uk/objects/uuid:6b0653e7-4a3b-4448-b0bd-1bdbd55aa61e/files/s3j333388m"]} {"year":"2024","title":"How Much are LLMs Contaminated? A Comprehensive Survey and the LLMSanitize Library","authors":["M Ravaut, B Ding, F Jiao, H Chen, X Li, R Zhao, C Qin… - arXiv preprint arXiv …, 2024"],"snippet":"… This includes policies and protocols for data privacy, consent, and use that help prevent the incorporation of contaminated data from unethical sources and the contamination of widely used pre-training data sources (eg, CommonCrawl) …","url":["https://arxiv.org/pdf/2404.00699"]} {"year":"2024","title":"How to Train Data-Efficient LLMs","authors":["N Sachdeva, B Coleman, WC Kang, J Ni, L Hong… - arXiv preprint arXiv …, 2024"],"snippet":"The training of large language models (LLMs) is expensive. In this paper, we study data-efficient approaches for pre-training LLMs, ie, techniques that aim to optimize the Pareto frontier of model quality and training resource/data consumption. We …","url":["https://arxiv.org/pdf/2402.09668"]} {"year":"2024","title":"HPLT's First Release of Data and Models","authors":["N Arefyev, M Aulamo, P Chen, O de Gibert, B Haddow… - 2024"],"snippet":"… Datasets For the first release, we processed 1.85 petabytes of the Internet Archive and CommonCrawl to create monolingual and parallel corpora. We release them under the permissive CC0 licence1 through our project website2, OPUS3, and Hugging …","url":["https://pinzhenchen.github.io/paper/2024-hplt-project.pdf"]} {"year":"2024","title":"Human-Centered Interaction in Virtual Worlds: A New Era of Generative Artificial Intelligence and Metaverse","authors":["Y Wang, L Wang, KL Siau - International Journal of Human–Computer Interaction, 2024"],"snippet":"The metaverse has emerged as an exciting new paradigm for human-computer interaction (HCI) and virtual collaboration. This paper presents a comprehensive review of the metaverse to address the gap in the existing literature where there is a …","url":["https://www.tandfonline.com/doi/abs/10.1080/10447318.2024.2316376"]} {"year":"2024","title":"Human-Centered Natural Language Processing for Countering Misinformation","authors":["A Kazemi - 2024"],"snippet":"As curbing the spread of online misinformation has proven to be challenging, we look to artificial intelligence (AI) and natural language technology for helping individuals and society counter and limit it. Despite current advances, state-of-the-art …","url":["https://deepblue.lib.umich.edu/bitstream/handle/2027.42/193271/ashkank_1.pdf?sequence=1"]} {"year":"2024","title":"Hybrid semantic models for building smart and robust home robots","authors":["A Pal - 2023"],"snippet":"The creation of home robots that can aid human beings in daily mundane chores has been a long-standing goal of robotics research. 
Some common indoor tasks that service robots can help with include retrieving objects from different locations …","url":["https://escholarship.org/content/qt92w4x2f1/qt92w4x2f1.pdf"]} {"year":"2024","title":"HYPE: Hyperbolic Entailment Filtering for Underspecified Images and Texts","authors":["W Kim, S Chun, T Kim, D Han, S Yun - arXiv preprint arXiv:2404.17507, 2024"],"snippet":"In an era where the volume of data drives the effectiveness of self-supervised learning, the specificity and clarity of data semantics play a crucial role in model training. Addressing this, we introduce HYPerbolic Entailment filtering (HYPE), a …","url":["https://arxiv.org/pdf/2404.17507"]} {"year":"2024","title":"Hypertext Entity Extraction in Webpage","authors":["Y Yang, T Liu, B Shao, H Zhao, L Shou, M Gong… - arXiv preprint arXiv …, 2024"],"snippet":"… Common Crawl5 is a widely used large-scale dataset for IE. However, they all overlook rich hypertext features and contain too much noise. Moreover, their annotations are often determined by rules or automated tools, lacking accurate manual annotations. …","url":["https://arxiv.org/html/2403.01698v1"]} {"year":"2024","title":"I'm In! A Comparative Analysis of Business Pitch Competition","authors":["E Barron, I Bernatovic, LE Ruiz - DIANA INTERNATIONAL RESEARCH …, 2023"],"snippet":"Methods Based on a perceived theoretical gap in research, entrepreneurship scholars have called for increased research to understand how institutions (including governments and academia), influence the construction of intersectional …","url":["https://pure.udem.edu.mx/files/73675582/Conference_Proceedings_Diana_International_Research_Conference_2023_PDF_.pdf"]} {"year":"2024","title":"Identification of patients' smoking status using an explainable AI approach: a Danish electronic health records case study","authors":["A Ebrahimi, MBH Henriksen, CL Brasen, O Hilberg… - BMC Medical Research …, 2024"],"snippet":"Smoking is a critical risk factor responsible for over eight million annual deaths worldwide. It is essential to obtain information on smoking habits to advance research and implement preventive measures such as screening of high-risk …","url":["https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-024-02231-4"]} {"year":"2024","title":"iDRAMA-rumble-2024: A Dataset of Podcasts from Rumble Spanning 2020 to 2022","authors":["U Balci, J Patel, B Balci, J Blackburn - 2024"],"snippet":"Rumble has emerged as a prominent platform hosting controversial figures facing restrictions on YouTube. Despite this, the academic community’s engagement with Rumble has been minimal. To help researchers address this gap, we introduce a …","url":["https://workshop-proceedings.icwsm.org/pdf/2024_07.pdf"]} {"year":"2024","title":"IGOT: Information Gain Optimized Tokenizer on Domain Adaptive Pretraining","authors":["D Feng, Y Zhang, Z Xu - arXiv preprint arXiv:2405.09857, 2024"],"snippet":"Pretrained Large Language Models (LLM) such as ChatGPT, Claude, etc. have demonstrated strong capabilities in various fields of natural language generation. 
However, there are still many problems when using LLM in specialized domain-specific …","url":["https://arxiv.org/pdf/2405.09857"]} {"year":"2024","title":"Impact of COVID-19 Pandemic on Social Determinants of Health Issues of Marginalized Black and Asian Communities: A Social Media Analysis Empowered by …","authors":["C Whitfield, Y Liu, M Anwar - Journal of Racial and Ethnic Health Disparities, 2024"],"snippet":"Purpose This study aims to understand the impact of the COVID-19 pandemic on social determinants of health (SDOH) of marginalized racial/ethnic US population groups, specifically African Americans and Asians, by leveraging natural language …","url":["https://link.springer.com/article/10.1007/s40615-024-01996-0"]} {"year":"2024","title":"Improved methodology for longitudinal Web analytics using Common Crawl","authors":["HS Thompson - 16th ACM Web Science Conference 2024, 2024"],"snippet":"Common Crawl is a multi-petabyte longitudinal dataset containing over 100 billion web pages which is widely used as a source of language data for sequence model training and in web science research. Each of its constituent archives is on the order …","url":["https://www.research.ed.ac.uk/en/publications/improved-methodology-for-longitudinal-web-analytics-using-common-"]} {"year":"2024","title":"Improving Bengali and Hindi Large Language Models","authors":["A Shahriar, D Barbosa - Proceedings of the 2024 Joint International Conference …, 2024"],"snippet":"Despite being widely spoken worldwide, Bengali and Hindi are low-resource languages. The state-of-the-art in modeling such languages uses BERT and the Wordpiece tokenizer. We observed that the Wordpiece tokenizer often breaks words …","url":["https://aclanthology.org/2024.lrec-main.764.pdf"]} {"year":"2024","title":"Improving Cross-lingual Representation for Semantic Retrieval with Code-switching","authors":["M Maimaiti, Y Zheng, J Zhang, F Huang, Y Zhang… - arXiv preprint arXiv …, 2024"],"snippet":"Semantic Retrieval (SR) has become an indispensable part of the FAQ system in the task-oriented question-answering (QA) dialogue scenario. The demands for a cross-lingual smart-customer-service system for an e-commerce platform or some particular …","url":["https://arxiv.org/html/2403.01364v1"]} {"year":"2024","title":"INCLURE: a Dataset and Toolkit for Inclusive French Translation","authors":["P Lerner, C Grouin - The 17th Workshop on Building and Using …, 2024"],"snippet":"… Indeed, this corpus mostly contains transcripts of political speeches, whose oral style differs from the text typically found in OSCAR/CommonCrawl. Exceptions are six examples used to illustrate the use of the inclusive neutralization process …","url":["https://hal.science/hal-04531938/document"]} {"year":"2024","title":"Incorporating Geo-Diverse Knowledge into Prompting for Increased Geographical Robustness in Object Recognition","authors":["K Buettner, S Malakouti, XL Li, A Kovashka - arXiv preprint arXiv:2401.01482, 2024"],"snippet":"Existing object recognition models have been shown to lack robustness in diverse geographical scenarios due to significant domain shifts in design and context. 
Class representations need to be adapted to more accurately reflect an object concept …","url":["https://arxiv.org/pdf/2401.01482"]} {"year":"2024","title":"IndiBias: A Benchmark Dataset to Measure Social Biases in Language Models for Indian Context","authors":["NR Sahoo, PP Kulkarni, N Asad, A Ahmad, T Goyal… - arXiv preprint arXiv …, 2024"],"snippet":"The pervasive influence of social biases in language data has sparked the need for benchmark datasets that capture and evaluate these biases in Large Language Models (LLMs). Existing efforts predominantly focus on English language and the …","url":["https://arxiv.org/pdf/2403.20147"]} {"year":"2024","title":"IndicLLMSuite: A Blueprint for Creating Pre-training and Fine-Tuning Datasets for Indian Languages","authors":["MSUR Khan, P Mehta, A Sankar, U Kumaravelan… - arXiv preprint arXiv …, 2024"],"snippet":"Despite the considerable advancements in English LLMs, the progress in building comparable models for other languages has been hindered due to the scarcity of tailored resources. Our work aims to bridge this divide by introducing an expansive …","url":["https://arxiv.org/pdf/2403.06350"]} {"year":"2024","title":"Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors","authors":["Y Lu, MYR Yang, G Kamath, Y Yu - arXiv preprint arXiv:2402.12626, 2024"],"snippet":"… To collect such large-scale datasets, practitioners usually extract the desired data by crawling on the internet (eg, web pages using Common Crawl1). However, using outsourced data raises an imminent security risk (Kumar et al. 2020; Nelson et al …","url":["https://arxiv.org/html/2402.12626v1"]} {"year":"2024","title":"Individual-vs. Multiple-Objective Strategies for Targeted Sentiment Analysis in Finances Using the Spanish MTSA 2023 Corpus","authors":["R Pan, JA García-Díaz, R Valencia-García - Electronics, 2024"],"snippet":"… complexity; (vi) RoBERTuito [22], a pretrained language model for user-generated content in Spanish, trained following RoBERTa guidelines on 500 million tweets; and (vii) XLM-RoBERTa [17], a multilingual version of RoBERTa, trained with data …","url":["https://www.mdpi.com/2079-9292/13/4/717/pdf"]} {"year":"2024","title":"IndoBerea: Evolving Semantic Search in Theological Context","authors":["FVP Samosir, S Mendrofa - 2023 Eighth International Conference on Informatics …, 2023"],"snippet":"This paper presents IndoBerea, a semantic search model pre-trained on an Indonesian Bible dataset and based on SentenceTransformers and IndoBERT. It aims to enhance theological research by providing contextually relevant verses in …","url":["https://ieeexplore.ieee.org/abstract/document/10382053/"]} {"year":"2024","title":"INDUS: Effective and Efficient Language Models for Scientific Applications","authors":["B Bhattacharjee, A Trivedi, M Muraoka… - arXiv preprint arXiv …, 2024"],"snippet":"Large language models (LLMs) trained on general domain corpora showed remarkable results on natural language processing (NLP) tasks. However, previous research demonstrated LLMs trained using domain-focused corpora perform better …","url":["https://arxiv.org/pdf/2405.10725"]} {"year":"2024","title":"Inference-Based No-Learning Approach on Pre-Trained BERT Model Retrieval","authors":["HL Pham, R Mibayashi, T Yamamoto, MP Kato… - 2024 IEEE International …, 2024"],"snippet":"In recent years, the practice of leveraging pre-trained machine learning models for specific tasks has gained traction. 
Instead of training models from the ground up, it is now common to fine-tune existing pre-trained models. However, users have the …","url":["https://ieeexplore.ieee.org/abstract/document/10488314/"]} {"year":"2024","title":"Inferring the Phylogeny of Large Language Models and Predicting their Performances in Benchmarks","authors":["N Yax, PY Oudeyer, S Palminteri - arXiv preprint arXiv:2404.04671, 2024"],"snippet":"This paper introduces PhyloLM, a method applying phylogenetic algorithms to Large Language Models to explore their finetuning relationships, and predict their performance characteristics. By leveraging the phylogenetic distance metric, we …","url":["https://arxiv.org/html/2404.04671v1"]} {"year":"2024","title":"Information Retrieval Chatbot on Military Policies and Standards","authors":["C Gunasekara, A Sharafeldin, M Triff, Z Kabir…"],"snippet":"In the Canadian Armed Forces (CAF), navigating through extensive policies and standards can be a challenging task. To address the need for streamlined access to these vital documents, this paper explores the usage of artificial intelligence (AI) and …","url":["https://www.scitepress.org/Papers/2024/123512/123512.pdf"]} {"year":"2024","title":"Information Retrieval with Dense and Sparse Representations","authors":["YS Chuang - 2024"],"snippet":"… However, they use 1TB of the training data from Common Crawl dumps while our model only use 115MB of the Wikipedia data for pretraining. We put their scores in Table 2.2 for reference. In SimCSE, the authors propose to use MLM as an auxiliary …","url":["https://dspace.mit.edu/bitstream/handle/1721.1/153774/chuang-yungsung-sm-eecs-2024-thesis.pdf?sequence=1&isAllowed=y"]} {"year":"2024","title":"Innovative Use of Self-Attention-Based Ensemble Deep Learning for Suicide Risk Detection in Social Media Posts","authors":["HS Choi, J Yang - Applied Sciences, 2024"],"snippet":"… Specifically, we employed the extensive corpus variant, known as “Common Crawl (840 B tokens, 2.2 M vocab, cased, 300 d vectors)”, which encompasses an impressive 840 billion tokens, a vocabulary of 2.2 million cased terms, and provides …","url":["https://www.mdpi.com/2076-3417/14/2/893"]} {"year":"2024","title":"Instruction-tuned Language Models are Better Knowledge Learners","authors":["Z Jiang, Z Sun, W Shi, P Rodriguez, C Zhou, G Neubig… - arXiv preprint arXiv …, 2024"],"snippet":"… However, its scope is limited to Wikipedia, which restricts the trained models’ adaptability to other sources like web pages from Common Crawl or scientific documents from arXiv. We focus on eliciting factual knowledge with instruction-tuning …","url":["https://arxiv.org/pdf/2402.12847"]} {"year":"2024","title":"Integrating Generative AI Literacy into the Information Retrieval Course at a university in Canada: towards critical evaluation of online search results","authors":["L Kleinveldt"],"snippet":"… • Trained on information (billions of words) on the open web prior to 2021 • Among other open sources, dataset comes from Common Crawl (crawls the web) and Wikipedia • Length of answers limited - between 500 and 700 words, “leaving …","url":["https://bibliotecas.uchile.cl/congreso/programa/ponencias/dia_3/8_Presentation_Integrating_Generative_AI_Literacy.pdf"]} {"year":"2024","title":"Intelligent Pharmacy","authors":["A Haleem, M Javaid, RP Singh"],"snippet":"The modern language generation model ChatGPT, created by Open Artificial Intelligence (AI), is recognised for its capacity to comprehend context and produce pertinent content. 
This model is built on the transformer architecture, which enables …","url":["https://www.researchgate.net/profile/Mohd-Javaid/publication/379261748_Exploring_the_competence_of_ChatGPT_for_customer_and_patient_service_management/links/660e825af5a5de0a9ffb6418/Exploring-the-competence-of-ChatGPT-for-customer-and-patient-service-management.pdf"]} {"year":"2024","title":"Intelligent Phishing Website Detection Model Powered by Deep Learning Techniques","authors":["U Chetachi, O Henry, OA Agbugba - Asian Journal of Research in Computer …, 2024"],"snippet":"… Approximately one million authentic and fraudulent URLs were employed in the dataset gathered from PhishTank and Common Crawl. Over 10,000 images and one million URLs were used in training for the CNN classifier and LSTM to create the …","url":["http://science.sdpublishers.org/id/eprint/2498/1/Chetachi1762023AJRCOS111125.pdf"]} {"year":"2024","title":"Intercity relationships between 293 Chinese cities quantified based on toponym co-occurrence","authors":["W Tongjing, Z Yin, Z Bao, E Meijers - Cybergeo: European Journal of Geography, 2024"],"snippet":"… 14The primary dataset of this study was obtained from the Common Crawl, a web archive that has periodically crawled the Internet since … We used the entire Common Crawl text archive from April 2019 for processing and conducting …","url":["https://journals.openedition.org/cybergeo/40721"]} {"year":"2024","title":"International Journal of Cognitive Computing in Engineering","authors":["JF Ruma, TT Mayeesha, RM Rahman"],"snippet":"… MC4 dataset contains 101 language variants drawn from the Common Crawl web scrape. MT5 did not deviates much from the original T5 model and attempted to extend the capacities of the T5 to multilingual settings. T5 is a pre-trained language …","url":["https://www.researchgate.net/profile/Tahsin-Mayeesha-2/publication/373997715_Transformer_based_Answer-Aware_Bengali_Question_Generation/links/65b4bfe81bed776ae307bf9f/Transformer-based-Answer-Aware-Bengali-Question-Generation.pdf"]} {"year":"2024","title":"Internet-scale Topic Modeling using Large Language Models","authors":["R Kajoluoto - 2024"],"snippet":"… This proportion is estimated from the Common Crawl dataset’s statistics [28]. In short, the model should have an excellent understanding of English, an adequate understanding of languages such as German and Spanish, and be satisfactory at …","url":["https://aaltodoc.aalto.fi/bitstreams/b63ce9d3-a01a-48a1-8ff7-831c872fde9d/download"]} {"year":"2024","title":"InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning","authors":["H Ying, S Zhang, L Li, Z Zhou, Y Shao, Z Fei, Y Ma… - arXiv preprint arXiv …, 2024"],"snippet":"The math abilities of large language models can represent their abstract reasoning ability. In this paper, we introduce and open-source our math reasoning LLMs InternLM-Math which is continue pre-trained from InternLM2. We unify chain-of-thought …","url":["https://arxiv.org/pdf/2402.06332"]} {"year":"2024","title":"InternLM2 Technical Report","authors":["Z Cai, M Cao, H Chen, K Chen, K Chen, X Chen… - arXiv preprint arXiv …, 2024"],"snippet":"… Our web page data mainly comes from Common Crawl1. 
Firstly, we need to decompress the original Warc format files and use Trafilatura (… The entire filtering process eliminates a large proportion of Web page data (Common Crawl) and Patent …","url":["https://arxiv.org/pdf/2403.17297"]} {"year":"2024","title":"Interpretable Tensor Fusion","authors":["S Varshneya, A Ledent, P Liznerski, A Balinskyy… - arXiv preprint arXiv …, 2024"],"snippet":"Conventional machine learning methods are predominantly designed to predict outcomes based on a single data type. However, practical applications may encompass data of diverse types, such as text, images, and audio. We introduce …","url":["https://arxiv.org/pdf/2405.04671"]} {"year":"2024","title":"Interpreting Themes from Educational Stories","authors":["Y Zhang, FA González, T Solorio - arXiv preprint arXiv:2404.05250, 2024"],"snippet":"Reading comprehension continues to be a crucial research focus in the NLP community. Recent advances in Machine Reading Comprehension (MRC) have mostly centered on literal comprehension, referring to the surface-level …","url":["https://arxiv.org/pdf/2404.05250"]} {"year":"2024","title":"Intersectional Male-Centric and White-Centric Biases in Collective Concepts","authors":["AH Bailey, A Williams, A Poddar, A Cimpian - 2024"],"snippet":"… Here we used NLP tools (specifically, static word embeddings trained on the Common Crawl corpus) to investigate collective understanding of three fundamental concepts: PERSON, WOMAN, and MAN.These three concepts organize much of …","url":["https://osf.io/72pg5/download"]} {"year":"2024","title":"Introducing the Djinni Recruitment Dataset: A Corpus of Anonymized CVs and Job Postings","authors":["N Drushchak, M Romanyshyn - Proceedings of the Third Ukrainian Natural …, 2024"],"snippet":"This paper introduces the Djinni Recruitment Dataset, a large-scale open-source corpus of candidate profiles and job descriptions. With over 150,000 jobs and 230,000 candidates, the dataset includes samples in English and Ukrainian, thereby …","url":["https://aclanthology.org/2024.unlp-1.2.pdf"]} {"year":"2024","title":"Investigating Complex Answer Attribution Approaches with Large Language Models","authors":["L Mülln"],"snippet":"This master’s thesis explores the attribution of answers in complex question-answering scenarios utilizing large language models (LLMs). The research aims to assess and enhance the traceability of answers back to their source documents, a critical aspect …","url":["https://wwwmatthes.in.tum.de/file/833ntsa6pimy/Sebis-Public-Website/Student-Theses-Guided-Research/Current-Theses-Guided-Researches/Master-s-Thesis-Luca-Muelln/240315_LMuelln_Thesis_Final_Print_Version.pdf"]} {"year":"2024","title":"Investigating Gender Bias in Turkish Language Models","authors":["O Caglidil, M Ostendorff, G Rehm - arXiv preprint arXiv:2404.11726, 2024"],"snippet":"… an analysis of undesirable content in the common crawl corpus. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2 …","url":["https://arxiv.org/pdf/2404.11726"]} {"year":"2024","title":"Investigating the pre-training bias in low-resource abstractive summarization","authors":["D Chernyshev, B Dobrov - IEEE Access, 2024"],"snippet":"Recent advances in low-resource abstractive summarization were largely made through the adoption of specialized pre-training, pseudo-summarization, that integrates the content selection knowledge through various centrality-based …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10474365.pdf"]} {"year":"2024","title":"IPA Transcription of Bengali Texts","authors":["K Fatema, FD Haider, NF Turpa, T Azmal, S Ahmed… - arXiv preprint arXiv …, 2024"],"snippet":"The International Phonetic Alphabet (IPA) serves to systematize phonemes in language, enabling precise textual representation of pronunciation. In Bengali phonology and phonetics, ongoing scholarly deliberations persist concerning the …","url":["https://arxiv.org/pdf/2403.20084"]} {"year":"2024","title":"IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models","authors":["DI Adelani, J Ojo, IA Azime, JY Zhuang, JO Alabi, X He… - arXiv preprint arXiv …, 2024"],"snippet":"Despite the widespread adoption of Large language models (LLMs), their remarkable capabilities remain limited to a few high-resource languages. Additionally, many low-resource languages (eg African languages) are often …","url":["https://arxiv.org/pdf/2406.03368"]} {"year":"2024","title":"Is AI changing learning and assessment as we know it? Evidence from a ChatGPT experiment and a conceptual framework","authors":["O Kolade, A Owoseni, A Egbetokun - Heliyon, 2024"],"snippet":"ChatGPT, a state-of-the-art chatbot built upon Open AI's generative pre-trained transformer, has generated a major public interest and caused quite a stir in the higher education sector, where reactions have ranged from excitement to …","url":["https://www.sciencedirect.com/science/article/pii/S2405844024019844"]} {"year":"2024","title":"Is ChatGPT taking over the language classroom? How language ideologies of large language models impact teaching and learning","authors":["M Lau - Working papers in Applied Linguistics and Linguistics …, 2024"],"snippet":"… Thus, using datasets such as the Common Crawl can lead to the over-representation of English and the perspectives of English-speaking content … This means that LLMs that use Common Crawl for training, such as ChatGPT, may perform with …","url":["https://wally.journals.yorku.ca/index.php/default/article/download/36/34"]} {"year":"2024","title":"Is Google Getting Worse? A Longitudinal Investigation of SEO Spam in Search Engines","authors":["J Bevendorff, M Wiegmann, M Potthast, B Stein"],"snippet":"Many users of web search engines have been complaining in recent years about the supposedly decreasing quality of search results. This is often attributed to an increasing amount of search-engineoptimized but low-quality content. Evidence for …","url":["https://downloads.webis.de/publications/papers/bevendorff_2024a.pdf"]} {"year":"2024","title":"Is Translation All You Need? 
A Study on Solving Multilingual Tasks with Large Language Models","authors":["C Liu, W Zhang, Y Zhao, AT Luu, L Bing - arXiv preprint arXiv:2403.10258, 2024"],"snippet":"… We categorize languages larger than 1% frequency in Common Crawl2 as high-resource languages (ie, de, ru, fr, zh, es, ja, it and vi), and the rest as low-resource languages. We exclude English since we want to evaluate the efficient prompting strategy for …","url":["https://arxiv.org/pdf/2403.10258"]} {"year":"2024","title":"It's About Time: Incorporating Temporality in Retrieval Augmented Language Models","authors":["A Gade, J Jetcheva - arXiv preprint arXiv:2401.13222, 2024"],"snippet":"… Additionally both models are fine-tuned jointly as a Fusion-in-Decoder [3] on common-crawl data using Wikipedia as a document index. The authors showed that Atlas performs well out-of-box and can be adapted to different tasks such as question-answering …","url":["https://arxiv.org/pdf/2401.13222"]} {"year":"2024","title":"IUST at ClimateActivism 2024: Towards Optimal Stance Detection: A Systematic Study of Architectural Choices and Data Cleaning Techniques","authors":["G Mahmoudi, S Eetemadi - Proceedings of the 7th Workshop on Challenges and …, 2024"],"snippet":"This work presents a systematic search of various model architecture configurations and data cleaning methods. The study evaluates the impact of data cleaning methods on the obtained results. Additionally, we demonstrate that a combination of …","url":["https://aclanthology.org/2024.case-1.24.pdf"]} {"year":"2024","title":"Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent","authors":["Q Gallouédec, E Beeching, C Romac, E Dellandréa - arXiv preprint arXiv:2402.09844, 2024"],"snippet":"The search for a general model that can operate seamlessly across multiple domains remains a key goal in machine learning research. The prevailing methodology in Reinforcement Learning (RL) typically limits models to a single task …","url":["https://arxiv.org/pdf/2402.09844"]} {"year":"2024","title":"Japanese Pointer Network based Entity Linker for Wikidata","authors":["Y Sawamura, T Morita, S Egami, T Ugai, K Fukuda - 2023"],"snippet":"… Pre-trained fastText is a model trained on Common Crawl and Wikipedia and provides pre-computed vectors for a word. The length of the POS Tags was changed from 36 to 18. The other five embeddings, namely Entity Embedding, Text Match …","url":["https://ijckg2023.knowledge-graph.jp/pages/proc/paper_18.pdf"]} {"year":"2024","title":"JetMoE: Reaching Llama2 Performance with 0.1 M Dollars","authors":["Y Shen, Z Guo, T Cai, Z Qin - arXiv preprint arXiv:2404.07413, 2024"],"snippet":"Large Language Models (LLMs) have achieved remarkable results, but their increasing resource demand has become a major obstacle to the development of powerful and accessible super-human intelligence. This report introduces JetMoE-8B …","url":["https://arxiv.org/pdf/2404.07413"]} {"year":"2024","title":"Jina CLIP: Your CLIP Model Is Also Your Text Retriever","authors":["A Koukounas, G Mastrapas, M Günther, B Wang… - arXiv preprint arXiv …, 2024"],"snippet":"Contrastive Language-Image Pretraining (CLIP) is widely used to train models to align images and texts in a common embedding space by mapping them to fixed-sized vectors. These models are key to multimodal information retrieval and related tasks …","url":["https://arxiv.org/pdf/2405.20204"]} {"year":"2024","title":"JiuZhang3. 
0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models","authors":["K Zhou, B Zhang, J Wang, Z Chen, WX Zhao, J Sha… - arXiv preprint arXiv …, 2024"],"snippet":"Mathematical reasoning is an important capability of large language models~(LLMs) for real-world applications. To enhance this capability, existing work either collects large-scale math-related texts for pre-training, or relies on stronger LLMs (\\eg GPT-4) …","url":["https://arxiv.org/pdf/2405.14365"]} {"year":"2024","title":"JLBert: Japanese Light BERT for Cross-Domain Short Text Classification","authors":["C Kayal, S Chattopadhyay, A Gupta, S Abrol, A Gugol - Proceedings of the 2024 Joint …, 2024"],"snippet":"… Second, there are limited models that have been trained on diverse datasets beyond Wikipedia and Common Crawl, limiting experimentation with other forms of text data. Moreover, with the English language being the focus of research interest …","url":["https://aclanthology.org/2024.lrec-main.833.pdf"]} {"year":"2024","title":"Juru: Legal Brazilian Large Language Model from Reputable Sources","authors":["RM Junior, R Pires, R Romero, R Nogueira - arXiv preprint arXiv:2403.18140, 2024"],"snippet":"The high computational cost associated with pretraining large language models limits their research. Two strategies have emerged to address this issue: domain specialization and pretraining with high-quality data. To explore these strategies, we …","url":["https://arxiv.org/pdf/2403.18140"]} {"year":"2024","title":"Just Rewrite It Again: A Post-Processing Method for Enhanced Semantic Similarity and Privacy Preservation of Differentially Private Rewritten Text","authors":["S Meisenbacher, F Matthes - arXiv preprint arXiv:2405.19831, 2024"],"snippet":"The study of Differential Privacy (DP) in Natural Language Processing often views the task of text privatization as a $\\textit{rewriting}$ task, in which sensitive input texts are rewritten to hide explicit or implicit private information. In order to evaluate the …","url":["https://arxiv.org/pdf/2405.19831"]} {"year":"2024","title":"KazQAD: Kazakh Open-Domain Question Answering Dataset","authors":["R Yeshpanov, P Efimov, L Boytsov, A Shalkarbayuli… - arXiv preprint arXiv …, 2024"],"snippet":"… Kaz-RoBERTa6 is a monolingual model trained on a collection of Kazakh texts from Common Crawl, the Leipzig Corpora Collection, the … XLM-R was pre-trained on a larger multilingual corpus derived from Common Crawl, in which Kazakh is …","url":["https://arxiv.org/pdf/2404.04487"]} {"year":"2024","title":"KazSAnDRA: Kazakh Sentiment Analysis Dataset of Reviews and Attitudes","authors":["R Yeshpanov, HA Varol - arXiv preprint arXiv:2403.19335, 2024"],"snippet":"This paper presents KazSAnDRA, a dataset developed for Kazakh sentiment analysis that is the first and largest publicly available dataset of its kind. KazSAnDRA comprises an extensive collection of 180,064 reviews obtained from various sources …","url":["https://arxiv.org/pdf/2403.19335"]} {"year":"2024","title":"KEPA-CRF: Knowledge expansion prototypical amortized conditional random field for few-shot event detection","authors":["R Wu, L Yu, S Tian, J Long, T Zhou, B Wang - Journal of Intelligent & Fuzzy Systems"],"snippet":"Event Detection (ED) has long struggled with the ambiguous definition of event categories, making it challenging to accurately classify events. 
Previous endeavors aimed to tackle this problem by constructing prototypes for specific event categories …","url":["https://content.iospress.com/articles/journal-of-intelligent-and-fuzzy-systems/ifs234368"]} {"year":"2024","title":"Key ingredients for effective zero-shot cross-lingual knowledge transfer in generative tasks","authors":["N Chirkova, V Nikoulina - arXiv preprint arXiv:2402.12279, 2024"],"snippet":"… For Intermediate tuning we finetune models for 100k steps on the CommonCrawl data uniformly sampled across all target languages and English, with the batch size of 5k tokens and the LR chosen to optimize fluency of model generations, inspected …","url":["https://arxiv.org/pdf/2402.12279"]} {"year":"2024","title":"Knowledge Induced Transformer Network for Causality Prediction","authors":["T Dasgupta, M Sinha, A Naskar - Companion Proceedings of the ACM on Web …, 2024"],"snippet":"… The CausalBank dataset3 contains 314 million pairs of cause-effect statements scraped from the Common Crawl corpus using causal lexical patterns. Baselines: Most of the earlier work on causality detection is based on identifying cause, effect …","url":["https://dl.acm.org/doi/abs/10.1145/3589335.3651531"]} {"year":"2024","title":"Knowledge Sources","authors":["M Jiang, BY Lin, S Wang, Y Xu, W Yu, C Zhu - Knowledge-augmented Methods for …, 2024"],"snippet":"… To be more specific, the dataset includes Pile-CommonCrawl (227.12G, web crawling data), PubMed Central (90.27G, biomedical articles), Books (100.96G, a mix of fiction and nonfiction books), OpenWebText (262.77G, content from Reddit) …","url":["https://link.springer.com/chapter/10.1007/978-981-97-0747-8_2"]} {"year":"2024","title":"Knowledge Transfer from Vision Foundation Models for Efficient Training of Small Task-specific Models","authors":["R Vemulapalli, H Pouransari, F Faghri, S Mehta… - Forty-first International Conference …"],"snippet":"… 2023) which contains 1.28B images filtered from Common Crawl (web-crawled data). We take 10 randomly augmented crops from each image and create a gallery of 12.8B images. See Appendix B.2 for additional details about this gallery set and …","url":["https://openreview.net/pdf?id=OKYfaYQlML"]} {"year":"2024","title":"Knowledge-augmented Methods for Natural Language Processing","authors":["M Jiang"],"snippet":"Ever since language was invented and used, humans have leveraged this powerful tool to pass down experience over generations. These kinds of knowledge summarize precious findings and inspiring ideas, which constitute the human …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=tlsAEQAAQBAJ&oi=fnd&pg=PR5&dq=commoncrawl&ots=XzQIdUcgNg&sig=EzGLwwhZB2mo602CaK_dytKIWaY"]} {"year":"2024","title":"Knowledge-Enhanced Neural Networks for Machine Reading Comprehension","authors":["TB Mihaylov - 2024"],"snippet":"Machine Reading Comprehension is a language understanding task where a system is expected to read a given passage of text and typically answer questions about it. When humans assess the task of reading comprehension, in addition to the …","url":["https://archiv.ub.uni-heidelberg.de/volltextserver/34352/1/Thesis_Todor_Mihaylov_Camera_Ready.pdf"]} {"year":"2024","title":"KOMPYUTER LINGVISTIKASIDA TA'MINOT MASALASI VA SINONIM BIRLIKLARNING LINGVISTIK TA'MINOTI","authors":["N SOTVOLDIYEVA - News of UzMU journal, 2024"],"snippet":"… Lingvistik ma'lumotlar konsorsiumi (LDC) va Common Crawl[8] loyihasi kabi ochiq hamkorlik tashabbuslari lingvistik resurslarning mavjudligiga sezilarli hissa qo'shdi. 
Bu hamkorlik resurslar almashinuvini targ‘ib qiladi, ortiqchalikni kamaytiradi va …","url":["https://journalsnuu.uz/index.php/1/article/download/803/266"]} {"year":"2024","title":"Krishiq-BERT: A Few-Shot Setting BERT Model to Answer Agricultural-Related Questions in the Kannada Language","authors":["P Ajawan, V Desai, S Kale, S Patil - Journal of The Institution of Engineers (India) …, 2024"],"snippet":"… corpora for 11 languages from two language families obtained primarily by sourcing articles from news crawls [3], and OSCAR (Open Super-large Crawled ALMAnaCH coRpus), a huge multilingual corpus procured by language …","url":["https://link.springer.com/article/10.1007/s40031-023-00952-6"]} {"year":"2024","title":"KVP10k: A Comprehensive Dataset for Key-Value Pair Extraction in Business Documents","authors":["O Naparstek, R Pony, I Shapira, FA Dahood, O Azulai… - arXiv preprint arXiv …, 2024"],"snippet":"… : extensive web data from Common Crawl and a collection of images from publicfiles.fcc.gov. From Common Crawl, we employed a systematic … A schematic describing the data acquisition process for the common crawl data is shown in Fig.2 …","url":["https://arxiv.org/pdf/2405.00505"]} {"year":"2024","title":"Labadain-30k+: A Monolingual Tetun Document-Level Audited Dataset","authors":["G de Jesus, S Nunes - Proceedings of the 3rd Annual Meeting of the Special …, 2024"],"snippet":"This paper introduces Labadain-30k+, a monolingual dataset comprising 33.6 k documents in Tetun, a low-resource language spoken in Timor-Leste. The dataset was acquired through web crawling and augmented with Wikipedia documents …","url":["https://aclanthology.org/2024.sigul-1.22.pdf"]} {"year":"2024","title":"Language Model Frame Filling for Low Resource Languages","authors":["L Zong"],"snippet":"Acceptability judgment tasks are key tools used in the understanding of the grammar of languages by linguists. A typical way to construct these tasks is to use semantic frames. We provide evidence that the fill-mask objective for language models (LMs) …","url":["https://leonzong.com/papers/csc247_final.pdf"]} {"year":"2024","title":"Language Models Fine-Tuning for Automatic Format Reconstruction of SEC Financial Filings","authors":["G Lombardo, G Trimigno, M Pellegrino, S Cagnoni - IEEE Access, 2024"],"snippet":"The analysis of financial reports is a crucial task for investors and regulators, especially the mandatory annual reports (10-K) required by the SEC (Securities and Exchange Commission) that provide crucial information about a public company in …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10445214.pdf"]} {"year":"2024","title":"Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining","authors":["N Ljubešić, V Suchomel, P Rupnik, T Kuzman… - arXiv preprint arXiv …, 2024"],"snippet":"The world of language models is going through turbulent times, better and ever larger models are coming out at an unprecedented speed. 
However, we argue that, especially for the scientific community, encoder models of up to 1 billion parameters …","url":["https://arxiv.org/pdf/2404.05428"]} {"year":"2024","title":"Language models scale reliably with over-training and on downstream tasks","authors":["SY Gadre, G Smyrnis, V Shankar, S Gururangan… - arXiv preprint arXiv …, 2024"],"snippet":"Scaling laws are useful guides for developing language models, but there are still gaps between current scaling studies and how language models are ultimately trained and evaluated. For instance, scaling is usually studied in the compute-optimal …","url":["https://arxiv.org/pdf/2403.08540"]} {"year":"2024","title":"Large Language Models (LLMs) on Tabular Data: Predic-tion, Generation, and Understanding-A Survey","authors":["X Fang, W Xu, FA Tan, J Zhang, Z Hu, Y Qi… - arXiv preprint arXiv …, 2024"],"snippet":"Recent breakthroughs in large language modeling have facilitated rigorous exploration of their application in diverse tasks related to tabular data modeling, such as prediction, tabular data synthesis, question answering, and table …","url":["https://stewarthu.com/papers/LLM-on-tabular-data.pdf"]} {"year":"2024","title":"Large Language Models and the Disappearing Private Sphere","authors":["A Cappello, M Dada, V Grigoreva, R Khan, C Stinson… - 2024"],"snippet":"… The higher quality datasets are sampled more often during training, but the Reddit approved contents of CommonCrawl still … of CommonCrawl contains fan fiction, video game chats, conspiracy theories, pornography, junk advertising, and …","url":["https://etlab.cs.queensu.ca/files/2024/04/OPC_Final_Report_2023.pdf"]} {"year":"2024","title":"Large Language Models aren't all that you need","authors":["KV Holla, C Kumar, A Singh - arXiv preprint arXiv:2401.00698, 2024"],"snippet":"This paper describes the architecture and systems built towards solving the SemEval 2023 Task 2: MultiCoNER II (Multilingual Complex Named Entity Recognition) [1]. We evaluate two approaches (a) a traditional Conditional Random …","url":["https://arxiv.org/pdf/2401.00698"]} {"year":"2024","title":"LARGE LANGUAGE MODELS AS AN AID FOR SOFTWARE DEVELOPMENT USING GRAPHICAL PROGRAMMING LANGUAGE.","authors":["V Tiljander - 2023"],"snippet":"… In pre-training, the model is trained with the Common Crawl dataset, internet books, and Python code, summing up to 400B tokens. 
After the model is pre-trained, the phase preference model pre-training comes, and after that, the preference model …","url":["https://trepo.tuni.fi/bitstream/handle/10024/153734/TiljanderVille.pdf?sequence=2"]} {"year":"2024","title":"Large language models as oracles for instantiating ontologies with domain-specific knowledge","authors":["G Ciatto, A Agiollo, M Magnini, A Omicini - arXiv preprint arXiv:2404.04108, 2024"],"snippet":"… Models following the RoBERTa [30] approach were trained on (i) the same datasets as BERT, plus (ii) a sample of the CommonCrawl4 News dataset (a dump of news … 4https://commoncrawl.org 5https://mistral.ai/news/mixtral-of-experts …","url":["https://arxiv.org/html/2404.04108v1"]} {"year":"2024","title":"Large language models for biomedicine: foundations, opportunities, challenges, and best practices","authors":["SS Sahoo, JM Plasek, H Xu, Ö Uzuner, T Cohen… - Journal of the American …, 2024"],"snippet":"… In the pre-training phase, generic training data, such as the Common Crawl dataset or social media posts, are used for unsupervised learning.With further training on prompt-response pairs (such as the Alpaca dataset developed at Stanford23) …","url":["https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocae074/7657768"]} {"year":"2024","title":"Large Language Models for Forecasting and Anomaly Detection: A Systematic Literature Review","authors":["J Su, C Jiang, X Jin, Y Qiao, T Xiao, H Ma, R Wei… - arXiv preprint arXiv …, 2024"],"snippet":"… The training data for GPT-3 includes the Common Crawl dataset for lower quality, as well as the WebText2 dataset for higher quality, as well as the Books1, Books2, and Wikipedia datasets for higher quality [148]. GPT-3 assigns different weights to …","url":["https://arxiv.org/pdf/2402.10350"]} {"year":"2024","title":"Large Language Models for Human-Machine Collaborative Particle Accelerator Tuning through Natural Language","authors":["J Kaiser, A Eichler, A Lauscher - arXiv preprint arXiv:2405.08888, 2024"],"snippet":"Autonomous tuning of particle accelerators is an active and challenging field of research with the goal of enabling novel accelerator technologies cutting-edge high-impact applications, such as physics discovery, cancer research and material sciences. A …","url":["https://arxiv.org/pdf/2405.08888"]} {"year":"2024","title":"Large Language Models for identification of medical data in unstructured records","authors":["P Petkov, E Markov, L Tomov"],"snippet":"… Sources for the training data include web pages extracted by CommonCrawl, opensource GitHub repositories, Wikipedia in twenty languages, Project Gutenberg books in the public domain, LaTeX source code of scientific articles in ArXiv and …","url":["https://www.researchgate.net/profile/Latchezar-Tomov/publication/379754048_Large_Language_Models_for_identification_of_medical_data_in_unstructured_records/links/661927fff7d3fc28744fc7cf/Large-Language-Models-for-identification-of-medical-data-in-unstructured-records.pdf"]} {"year":"2024","title":"Large Language Models for Reliable Information Extraction","authors":["L Baliunas"],"snippet":"This project addresses the ongoing challenge of achieving reliable Information Extraction (IE), particularly in domains which require near-perfect precision. 
ELICIT (Butcher et al., 2023) introduced a novel approach that combines the processing speed of …","url":["https://www.mlmi.eng.cam.ac.uk/files/2022_-_2023_dissertations/large_language_models_for_reliable_information_extraction.pdf"]} {"year":"2024","title":"Large Language Models on Tabular Data--A Survey","authors":["X Fang, W Xu, FA Tan, J Zhang, Z Hu, Y Qi… - arXiv preprint arXiv …, 2024"],"snippet":"Recent breakthroughs in large language modeling have facilitated rigorous exploration of their application in diverse tasks related to tabular data modeling, such as prediction, tabular data synthesis, question answering, and table …","url":["https://arxiv.org/pdf/2402.17944"]} {"year":"2024","title":"Large Language Models: A Survey","authors":["S Minaee, T Mikolov, N Nikzad, M Chenaghlu… - arXiv preprint arXiv …, 2024"],"snippet":"… Common Crawlbased dataset consisting of texts in 101 languages. … CommonCrawl. They also released an extract of 600 billion tokens from our REFINEDWEB dataset, and 1.3/7.5B parameters language models trained on it. 27 …","url":["https://arxiv.org/pdf/2402.06196"]} {"year":"2024","title":"Large Malaysian Language Model Based on Mistral for Enhanced Local Language Understanding","authors":["H Zolkepli, A Razak, K Adha, A Nazhan - arXiv preprint arXiv:2401.13565, 2024"],"snippet":"… An analysis of the widely utilized Common Crawl dataset reveals a mere 0.0742% contribution from the Malay language based on CCMAIN-… We also replicated the same technique for Malaysia Hansard and MS CommonCrawl samples. All synthetic …","url":["https://arxiv.org/pdf/2401.13565"]} {"year":"2024","title":"Large-language models facilitate discovery of the molecular signatures regulating sleep and activity","authors":["D Peng, L Zheng, D Liu, C Han, X Wang, Y Yang… - Nature Communications, 2024"],"snippet":"Sleep, locomotor and social activities are essential animal behaviors, but their reciprocal relationships and underlying mechanisms remain poorly understood. Here, we elicit information from a cutting-edge large-language model (LLM), generative pre-trained …","url":["https://www.nature.com/articles/s41467-024-48005-w"]} {"year":"2024","title":"Large-Language-Models (LLM)-Based AI Chatbots: Architecture, In-Depth Analysis and Their Performance Evaluation","authors":["V Kumar, P Srivastava, A Dwivedi, I Budhiraja… - International Conference on …, 2023"],"snippet":"… LLaMA models span from 7 billion to 65 billion parameters and were trained on trillions of tokens derived from publicly accessible datasets, including Common Crawl and Wikipedia. The primary objective of LLaMA models is to serve as …","url":["https://link.springer.com/chapter/10.1007/978-3-031-53085-2_20"]} {"year":"2024","title":"Latxa: An Open Language Model and Evaluation Suite for Basque","authors":["J Etxaniz, O Sainz, N Perez, I Aldabe, G Rigau, E Agirre… - arXiv preprint arXiv …, 2024"],"snippet":"We introduce Latxa, a family of large language models for Basque ranging from 7 to 70 billion parameters. Latxa is based on Llama 2, which we continue pretraining on a new Basque corpus comprising 4.3M documents and 4.2B tokens. Addressing the …","url":["https://arxiv.org/pdf/2403.20266"]} {"year":"2024","title":"Law and the Political Economy of AI Production","authors":["P Terzis"],"snippet":"The governance of Artificial Intelligence (AI) is at a historical juncture. 
Legislative acts, global treaties, export controls, and technical standards are now dominating the discourse over what used to be a predominantly market-driven space. Amidst all this …","url":["https://osf.io/q593j/download"]} {"year":"2024","title":"Law, Technology and Humans","authors":["J De Cooman"],"snippet":"… Around 60% of GPT-3 training data comes from a filtered version of Common Crawl—that is, a non-profit organisation that archives and provides open-access petabytes of data collected since 2011. [30] The remaining 40% comes from diverse …","url":["https://www.austlii.edu.au/au/journals/LawTechHum/2023/3.html"]} {"year":"2024","title":"Learning with Noisy Foundation Models","authors":["H Chen, J Wang, Z Wang, R Tao, H Wei, X Xie… - arXiv preprint arXiv …, 2024"],"snippet":"… GPT2 is pre-trained on WebText [29], a scraped web dataset from Common Crawl that contains low-quality raw texts. We also leverage OpenAI’s API service “text-ada-002”. We cannot use larger and more recent language models such as LLaMA [30], since …","url":["https://arxiv.org/pdf/2403.06869"]} {"year":"2024","title":"Leveraging AI to Deliver Culturally Responsive Mental Health Care at Scale","authors":["A Cerezo, D Cooper, V Palat, A Jolley, S Peregrine"],"snippet":"… One of the most commonly used datasets to train LLMs is The Common Crawl dataset, a large collection of web pages scraped from the internet since 2014. These pages contain the gamut from high-quality news, science, and literature to the …","url":["https://s3.ca-central-1.amazonaws.com/assets.jmir.org/assets/preprints/preprint-56965-submitted.pdf"]} {"year":"2024","title":"Leveraging distant supervision and deep learning for twitter sentiment and emotion classification","authors":["M Kastrati, Z Kastrati, A Shariq Imran, M Biba - Journal of Intelligent Information …, 2024"],"snippet":"Nowadays, various applications across industries, healthcare, and security have begun adopting automatic sentiment analysis and emotion detection in short texts, such as posts from social media. Twitter stands out as one of the most popular …","url":["https://link.springer.com/article/10.1007/s10844-024-00845-0"]} {"year":"2024","title":"Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach","authors":["M Sebők, Á Máté, O Ring, V Kovács, R Lehoczki - Social Science Computer Review, 2024"],"snippet":"The article presents an open-source and freely available natural language processing system for comparative policy studies. The CAP Babel Machine allows for the automated classification of input files based on the 21 major policy topics of …","url":["https://journals.sagepub.com/doi/pdf/10.1177/08944393241259434"]} {"year":"2024","title":"Leveraging Parameter Efficient Training Methods for Low Resource Text Classification: A case study in Marathi","authors":["P Deshmukh, N Kulkarni, S Kulkarni, K Manghani… - 2024 IEEE 9th International …, 2024"],"snippet":"With the surge in digital content in low-resource languages, there is an escalating demand for advanced Natural Language Processing (NLP) techniques tailored to these languages. 
BERT (Bidirectional Encoder Representations from Transformers) …","url":["https://ieeexplore.ieee.org/abstract/document/10543946/"]} {"year":"2024","title":"Leveraging Zero-Shot Prompting for Efficient Language Model Distillation","authors":["L Vöge, V Gurgul, S Lessmann - arXiv preprint arXiv:2403.15886, 2024"],"snippet":"… Specifically, we use the T5 v1.1 models which were only pretrained on the C4 Common Crawl dataset for general language understanding without additional task-specific finetuning. The use of text-to-text models allows for straightforward processing, as …","url":["https://arxiv.org/pdf/2403.15886"]} {"year":"2024","title":"Lib2Life–Digital Library Services Empowered with Advanced Natural Language Processing Techniques","authors":["M Nitu, M Dascalu, MD Dascalu, LM Neagu…"],"snippet":"Educational institutions are struggling to keep up with the accelerated technological advancements; hence, sustainable and supportive tools have become essential to reshape traditional models into intelligent learning systems. This paper introduces …","url":["https://www.researchgate.net/profile/Nitu-Melania-2/publication/380769622_Lib2Life_-_Digital_Library_Services_Empowered_with_Advanced_Natural_Language_Processing_Techniques/links/664efa3022a7f16b4f43a5e1/Lib2Life-Digital-Library-Services-Empowered-with-Advanced-Natural-Language-Processing-Techniques.pdf"]} {"year":"2024","title":"LiBERTa: Advancing Ukrainian Language Modeling through Pre-training from Scratch","authors":["M Haltiuk, A Smywiński-Pohl - Proceedings of the Third Ukrainian Natural Language …, 2024"],"snippet":"… CC-100, a multilingual text corpus sourced from Wikipedia and CommonCrawl, was processed following the CCNet1 methodology. The Ukrainian segment of CC-100 encompasses 6.5 billion tokens, equivalent to 84 GiB of data2. This corpus primarily …","url":["https://aclanthology.org/2024.unlp-1.14.pdf"]} {"year":"2024","title":"Limitations of Language Models in The Oil & Gas Upstream Operations","authors":["A Alsultan, FA Razak - International Petroleum Technology Conference, 2024"],"snippet":"… Although the process is simple in principle, it is done on very large training corpra which can be as big as dataset provided by the Common Crawl project (428.81 TB as of the date of writing this paper). Thus, training these models using these large …","url":["https://onepetro.org/IPTCONF/proceedings-abstract/24IPTC/All-24IPTC/542606"]} {"year":"2024","title":"LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages","authors":["AM Bean, S Hellsten, H Mayne, J Magomere, EA Chi… - arXiv preprint arXiv …, 2024"],"snippet":"… ], while the second and fourth rows are sorted by Common Crawl Share [16]. The upper two … of pages tagged with a given language in the Common Crawl dataset [16]. Figure 12 show the … LLMs are trained on, using the Common Crawl share and …","url":["https://arxiv.org/pdf/2406.06196"]} {"year":"2024","title":"Linguistic Profiling of Deepfakes: An Open Database for Next-Generation Deepfake Detection","authors":["Y Wang, Z Huang, Z Ma, X Hong - arXiv preprint arXiv:2401.02335, 2024"],"snippet":"The emergence of text-to-image generative models has revolutionized the field of deepfakes, enabling the creation of realistic and convincing visual content directly from textual descriptions. 
However, this advancement presents considerably greater …","url":["https://arxiv.org/pdf/2401.02335"]} {"year":"2024","title":"Linguistic Structure from a Bottleneck on Sequential Information Processing","authors":["R Futrell, M Hahn - arXiv preprint arXiv:2405.12109, 2024"],"snippet":"… Table 3: English expressions for the given meanings, along with frequencies from the English Common Crawl web corpus [64]. Example … n-gram counts and language models from the common crawl. In Language Resources and Evaluation …","url":["https://arxiv.org/pdf/2405.12109"]} {"year":"2024","title":"Llama (LLM)","authors":["C Room - algorithms, 2024"],"snippet":"… Data was mostly from CommonCrawl and C4. Llama-2-70B saw 2T tokens but Llama-3-70B saw 15T tokens. Llama-3-8B and Llama-3-70B took 1.3M and 6.4M GPU hours for pretraining. …","url":["https://devopedia.org/llama-llm"]} {"year":"2024","title":"LLaMA Beyond English: An Empirical Study on Language Capability Transfer","authors":["J Zhao, Z Zhang, Q Zhang, T Gui, X Huang - arXiv preprint arXiv:2401.01055, 2024"],"snippet":"In recent times, substantial advancements have been witnessed in large language models (LLMs), exemplified by ChatGPT, showcasing remarkable proficiency across a range of complex tasks. However, many mainstream LLMs (eg LLaMA) are …","url":["https://arxiv.org/pdf/2401.01055"]} {"year":"2024","title":"LLM-Datasets: An Open Framework for Pretraining Datasets of Large Language Models","authors":["M Ostendorff, PO Suarez, LF Lage, G Rehm"],"snippet":"… All of the numbers are given by Common Crawl dump. … 2 show the number of documents by both date of the CommonCrawl dump and language, the percentage of documents removed at each filtering step, and the final document and token …","url":["https://ostendorff.org/assets/pdf/ostendorff2024-preprint.pdf"]} {"year":"2024","title":"LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement","authors":["N Lee, T Wattanawong, S Kim, K Mangalam, S Shen… - arXiv preprint arXiv …, 2024"],"snippet":"… We assume that we are given an LLM model M (eg, GPT-3.5 or LLaMA2-7B) that is pre-trained on some source dataset (eg, Common Crawl). 
The goal is to adapt M (hereon called the student model) to a new target domain by using a small seed dataset D …","url":["https://arxiv.org/pdf/2403.15042"]} {"year":"2024","title":"LLMs achieve adult human performance on higher-order theory of mind tasks","authors":["W Street, JO Siy, G Keeling, A Baranes, B Barnett… - arXiv preprint arXiv …, 2024"],"snippet":"This paper examines the extent to which large language models (LLMs) have developed higher-order theory of mind (ToM); the human ability to reason about multiple mental and emotional states in a recursive manner (eg I think that you …","url":["https://arxiv.org/pdf/2405.18870"]} {"year":"2024","title":"LLMs for Low Resource Languages in Multilingual, Multimodal and Dialectal Settings","authors":["F Alam, SA Chowdhury, S Boughorbel, M Hasanain - … of the 18th Conference of the …, 2024"],"snippet":"The recent breakthroughs in Artificial Intelligence (AI) can be attributed to the remarkable performance of Large Language Models (LLMs) across a spectrum of research areas (eg, machine translation, question-answering, automatic speech …","url":["https://aclanthology.org/2024.eacl-tutorials.5.pdf"]} {"year":"2024","title":"LLMs Still Can't Avoid Instanceof: An Investigation Into GPT-3.5, GPT-4 and Bard's Capacity to Handle Object-Oriented Programming Assignments","authors":["BP Cipriano, P Alves - arXiv preprint arXiv:2403.06254, 2024"],"snippet":"… are high-level descriptions of the training datasets (eg a filtered version of Common Crawl 14, English Wikipedia, among others) [3], it was not possible to perform this analysis for those models. 13https:github.com/camilaabrantes/CursoJava …","url":["https://arxiv.org/pdf/2403.06254"]} {"year":"2024","title":"LMaaS: Exploring Pricing Strategy of Large Model as a Service for Communication","authors":["P Wu, Q Liu, Y Dong, F Wang - arXiv preprint arXiv:2401.02675, 2024"],"snippet":"The next generation of communication is envisioned to be intelligent communication, that can replace traditional symbolic communication, where highly condensed semantic information considering both source and channel will be extracted and …","url":["https://arxiv.org/pdf/2401.02675"]} {"year":"2024","title":"Long-input summarization using Large Language Models","authors":["E Järvinen - 2024"],"snippet":"… the colossal and cleaned version of common crawl CNN … T5 was pre-trained on the C4 (The Colossal and Cleaned version of Common Crawl) dataset. C4 contains text from 350M web pages; the dataset is 750 GB. A maximum sequence length of …","url":["https://aaltodoc.aalto.fi/bitstreams/cd47964e-5b5e-4af0-9af4-731970184358/download"]} {"year":"2024","title":"Long-Term Ad Memorability: Understanding & Generating Memorable Ads","authors":["SI Harini, S Singh, Y Kumar, A Bhattacharyya, V Baths…"],"snippet":"Marketers spend billions of dollars on advertisements, but to what end? At purchase time, if customers cannot recognize the brand for which they saw an ad, the money spent on the ad is essentially wasted. 
Despite its importance in marketing, until now …","url":["https://www.researchgate.net/profile/Yaman-Singla/publication/373642219_Long-Term_Ad_Memorability_Understanding_Generating_Memorable_Ads/links/6645020b0b0d2845743647cc/Long-Term-Ad-Memorability-Understanding-Generating-Memorable-Ads.pdf"]} {"year":"2024","title":"LongWanjuan: Towards Systematic Measurement for Long Text Quality","authors":["K Lv, X Liu, Q Guo, H Yan, C He, X Qiu, D Lin - arXiv preprint arXiv:2402.13583, 2024"],"snippet":"… In the English data, the CommonCrawl domain predominates, accounting for over 50% of the data. Apart from a significant amount of aggregated texts in the CommonCrawl domain, the majority of data in other domains consists of holistic texts …","url":["https://arxiv.org/pdf/2402.13583"]} {"year":"2024","title":"Look Ahead or Look Around? A Theoretical Comparison Between Autoregressive and Masked Pretraining","authors":["Q Zhang, T Du, H Huang, Y Wang, Y Wang - Forty-first International Conference on Machine …"],"snippet":"In recent years, the rise of generative self-supervised learning (SSL) paradigms has exhibited impressive performance across visual, language, and multi-modal domains. While the varied designs of generative SSL objectives lead to distinct properties in …","url":["https://openreview.net/pdf?id=2rPoTgEmjV"]} {"year":"2024","title":"Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training","authors":["Z Zhong, M Xia, D Chen, M Lewis - arXiv preprint arXiv:2405.03133, 2024"],"snippet":"… We suspect that segment-level routing may be particularly effective at handling out-of-domain evaluation data (we assume that there is only a very small part of Python code in Commoncrawl). Our analysis in Section 5.4 shows that segment-level routing …","url":["https://arxiv.org/pdf/2405.03133"]} {"year":"2024","title":"M2SA: Multimodal and Multilingual Model for Sentiment Analysis of Tweets","authors":["G Thakkar, S Hakimov, M Tadić - arXiv preprint arXiv:2404.01753, 2024"],"snippet":"… 2020b) is a large multilingual language model trained on 2.5 TB of filtered Common Crawl data containing 100 languages. The model was trained with the Masked Language Modelling (MLM) objective, with 15% of the input words masked …","url":["https://arxiv.org/pdf/2404.01753"]} {"year":"2024","title":"Machine Learning and the Analysis of Culture","authors":["S Mützel, É Ollion - 2024"],"snippet":"The focus of this chapter is on how machine learning (ML) impacts the analysis of culture in sociology. It shows how ML has greatly advanced the analysis of culture, with new tools enabling a massive and fine-grained extraction of information from …","url":["https://osf.io/nvtp2/download"]} {"year":"2024","title":"Machine Translation that Peeks at the Reference","authors":["V Zouhar"],"snippet":"Abstract Machine translation with lexical constraints is a popular research topic, especially for terminology translation. Existing approaches for lexical control in MT are usually complex and not easily applicable to all existing MT toolkits. We propose …","url":["https://vilda.net/papers/mt_peek.pdf"]} {"year":"2024","title":"Machines Do See Color: A Guideline to Classify Different Forms of Racist Discourse in Large Corpora","authors":["DD Gordillo, J Timoneda, SV Vera - arXiv preprint arXiv:2401.09333, 2024"],"snippet":"Current methods to identify and classify racist language in text rely on small-n qualitative approaches or large-n approaches focusing exclusively on overt forms of racist discourse. 
This article provides a step-by-step generalizable guideline to …","url":["https://arxiv.org/pdf/2401.09333"]} {"year":"2024","title":"MagicLens: Self-Supervised Image Retrieval with Open-Ended Instructions","authors":["K Zhang, Y Luan, H Hu, K Lee, S Qiao, W Chen, Y Su… - arXiv preprint arXiv …, 2024"],"snippet":"… We collect all images with the same URL from Common Crawl2 as a group of images from the same web page for potential pairing. Due to the inevitable noisy images introduced by simple grouping, we remove duplicated, low-resolution, and …","url":["https://arxiv.org/pdf/2403.19651"]} {"year":"2024","title":"MAiDE-up: Multilingual Deception Detection of GPT-generated Hotel Reviews","authors":["O Ignat, X Xu, R Mihalcea - arXiv preprint arXiv:2404.12938, 2024"],"snippet":"Deceptive reviews are becoming increasingly common, especially given the increase in performance and the prevalence of LLMs. While work to date has addressed the development of models to differentiate between truthful and …","url":["https://arxiv.org/pdf/2404.12938"]} {"year":"2024","title":"MaiNLP at SemEval-2024 Task 1: Analyzing Source Language Selection in Cross-Lingual Textual Relatedness","authors":["S Zhou, H Shan, B Plank, R Litschko - arXiv preprint arXiv:2404.02570, 2024"],"snippet":"This paper presents our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness (STR), on Track C: Cross-lingual. The task aims to detect semantic relatedness of two sentences in a given target language without access to …","url":["https://arxiv.org/pdf/2404.02570"]} {"year":"2024","title":"MambaByte: Token-free Selective State Space Model","authors":["J Wang, T Gangavarapu, JN Yan, AM Rush - arXiv preprint arXiv:2401.13660, 2024"],"snippet":"Token-free language models learn directly from raw bytes and remove the bias of subword tokenization. Operating on bytes, however, results in significantly longer sequences, and standard autoregressive Transformers scale poorly in such settings …","url":["https://arxiv.org/pdf/2401.13660"]} {"year":"2024","title":"MAmmoTH2: Scaling Instructions from the Web","authors":["X Yue, T Zheng, G Zhang, W Chen - arXiv preprint arXiv:2405.03548, 2024"],"snippet":"… We argue that the pre-training corpus (eg, Common Crawl) already contains a vast amount of high-quality instruction data for LLM reasoning. For example, the web corpus contains a large amount of educational materials in the form of instruction-following …","url":["https://arxiv.org/pdf/2405.03548"]} {"year":"2024","title":"MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series","authors":["G Zhang, S Qu, J Liu, C Zhang, C Lin, CL Yu, D Pan… - arXiv preprint arXiv …, 2024"],"snippet":"… Inspired by [89], which adopts an iterative pipeline to facilitate the acquisition of large-scale, high-quality data from Common Crawl, we propose to select high-quality data for mathematics, scientific exam synthetic data, and wiki-based content in our Matrix. 
…","url":["https://arxiv.org/pdf/2405.19327"]} {"year":"2024","title":"Massively Multi-Cultural Knowledge Acquisition & LM Benchmarking","authors":["Y Fung, R Zhao, J Doo, C Sun, H Ji - arXiv preprint arXiv:2402.09369, 2024"],"snippet":"Pretrained large language models have revolutionized many applications but still face challenges related to cultural bias and a lack of cultural commonsense knowledge crucial for guiding cross-culture communication and interactions …","url":["https://arxiv.org/pdf/2402.09369"]} {"year":"2024","title":"MASTERARBEIT/MASTER'S THESIS","authors":["S Jang - 2023"],"snippet":"Today, a huge amount of unstructured data from various sources is available. Natural language processing (NLP) techniques using such data are applied to several tasks such as named entity recognition (NER). This study is designed to …","url":["https://phaidra.univie.ac.at/detail/o:1637602.pdf"]} {"year":"2024","title":"Mastering Transformers: The Journey from BERT to Large Language Models and Stable Diffusion","authors":["S Yıldırım, M Asgari-Chenaghlu - 2024"],"snippet":"Explore transformer-based language models from BERT to GPT, delving into NLP and computer vision tasks, while tackling challenges effectively Key Features Understand the complexity of deep learning architecture and transformers …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=M_wJEQAAQBAJ&oi=fnd&pg=PP1&dq=commoncrawl&ots=avP-WYfjWC&sig=KVeolFlauPbA8bVyuAQyJTiTH4Y"]} {"year":"2024","title":"Materials science in the era of large language models: a perspective","authors":["G Lei, R Docherty, SJ Cooper - arXiv preprint arXiv:2403.06949, 2024"],"snippet":"Large Language Models (LLMs) have garnered considerable interest due to their impressive natural language capabilities, which in conjunction with various emergent properties make them versatile tools in workflows ranging from complex …","url":["https://arxiv.org/pdf/2403.06949"]} {"year":"2024","title":"Matthew Coscia and","authors":["A Weber"],"snippet":"… URL: https://github.com/commoncrawl/ia-web-commons. [9] Image Details - Container Registry - Google Cloud Platform. 2022. …","url":["https://vtechworks.lib.vt.edu/server/api/core/bitstreams/42cd566c-369f-44c1-afff-4cc3e6a2fb00/content"]} {"year":"2024","title":"MDSAA","authors":["IRVG Roque"],"snippet":"In the field of digital marketing, efficient content gap analysis is crucial for developing effective SEO strategies. Traditional approaches to this task are time-consuming, representing a challenge for Organic Performance teams, who must process large …","url":["https://run.unl.pt/bitstream/10362/164588/1/TCDMAA1449.pdf"]} {"year":"2024","title":"Measuring and Improving the Energy Efficiency of Large Language Models Inference","authors":["MF Argerich, M Patiño-Martínez - IEEE Access, 2024"],"snippet":"… 4) RedPajama [42]: another family of models based on Pythia with 7B and 3B parameters, fine-tuned on a dataset with the same name as the model, with over 100B text documents coming from 84 CommonCrawl snapshots. Similar to Dolly …","url":["https://ieeexplore.ieee.org/iel8/6287639/6514899/10549890.pdf"]} {"year":"2024","title":"Medical mT5: An Open-Source Multilingual Text-to-Text LLM for The Medical Domain","authors":["I García-Ferrero, R Agerri, AA Salazar, E Cabrio… - arXiv preprint arXiv …, 2024"],"snippet":"… mT5 was trained using mC4, a 1 Trillion token Common Crawl-based dataset covering 101 languages. 
The pre-training is based on a masked language modeling “spancorruption” objective, where consecutive spans of input tokens are replaced …","url":["https://arxiv.org/pdf/2404.07613"]} {"year":"2024","title":"Meeting the challenge: A benchmark corpus for automated Urdu meeting summarization","authors":["B Sadia, F Adeeba, S Shams, K Javed - Information Processing & Management, 2024"],"snippet":"Meeting summarization has become crucial as the world is gradually shifting towards remote work. Nowadays, automation of meeting summary generation is really needed in order to minimize the time and effort. The surge in online meetings …","url":["https://www.sciencedirect.com/science/article/pii/S0306457324000943"]} {"year":"2024","title":"MELTing point: Mobile Evaluation of Language Transformers","authors":["S Laskaridis, K Kateveas, L Minto, H Haddadi - arXiv preprint arXiv:2403.12844, 2024"],"snippet":"Transformers have revolutionized the machine learning landscape, gradually making their way into everyday tasks and equipping our computers with ``sparks of intelligence''. However, their runtime requirements have prevented them from being …","url":["https://arxiv.org/pdf/2403.12844"]} {"year":"2024","title":"Memorization and Privacy Risks in Domain-Specific Large Language Models","authors":["X Yang, Z Wen, W Qu, Z Chen, Z Xiang, B Chen, H Yao - ICLR 2024 Workshop on Reliable …"],"snippet":"… Common Crawl3 is a nonprofit organization that provides a large and open web crawl data repository for public use. It collects web pages from the internet every month and stores them on Amazon Web Services. Common Crawl’s … C4 cleanses …","url":["https://openreview.net/pdf?id=KmW8WkCKRx"]} {"year":"2024","title":"Mentions of prejudice in news media–an international comparison","authors":["D Rozado - Journal of Computational Social Science, 2024"],"snippet":"Previous research has identified a post-2010 sharp increase of terms used to denounce prejudice (ie racism, sexism, homophobia, Islamophobia, anti-Semitism, etc.) in US and UK news media content. Here, we extend previous analysis to an …","url":["https://link.springer.com/article/10.1007/s42001-024-00295-2"]} {"year":"2024","title":"Methods Futures Report–Sea, Sky, and Land: Engaging in Solidarity in Endangered Ecologies–4S Conference Nov 2023","authors":["R Meckin - 2024"],"snippet":"This is a report of the 4S 2023 conference. The 4S (Society for the Social Studies of Science) conference 2023 was a large, disciplinarily diverse gathering with over 400 sessions spread over four days. Navigating a particular route through the conference …","url":["https://eprints.ncrm.ac.uk/id/eprint/4948/1/NCRM%20method%20futures%204S%20conference%20report.pdf"]} {"year":"2024","title":"Milestones in Bengali Sentiment Analysis leveraging Transformer-models: Fundamentals, Challenges and Future Directions","authors":["S Sengupta, S Ghosh, P Mitra, TI Tamiti - arXiv preprint arXiv:2401.07847, 2024"],"snippet":"Sentiment Analysis (SA) refers to the task of associating a view polarity (usually, positive, negative, or neutral; or even fine-grained such as slightly angry, sad, etc.) 
to a given text, essentially breaking it down to a supervised (since we have the view …","url":["https://arxiv.org/pdf/2401.07847"]} {"year":"2024","title":"MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies","authors":["S Hu, Y Tu, X Han, C He, G Cui, X Long, Z Zheng… - arXiv preprint arXiv …, 2024"],"snippet":"The burgeoning interest in developing Large Language Models (LLMs) with up to trillion parameters has been met with concerns regarding resource efficiency and practical expense, particularly given the immense cost of experimentation. This …","url":["https://arxiv.org/pdf/2404.06395"]} {"year":"2024","title":"Misinformation Resilient Search Rankings with Webgraph-based Interventions","authors":["P Carragher, EM Williams, KM Carley - arXiv preprint arXiv:2404.08869, 2024"],"snippet":"… To this end, we investigate the distribution of changes in CommonCrawl PageRank as a result of our interventions for all domains in the webgraph, not just news domains. We then take the domains whose PR score changes significantly ( 50%) …","url":["https://arxiv.org/pdf/2404.08869"]} {"year":"2024","title":"Mission Critical--Satellite Data is a Distinct Modality in Machine Learning","authors":["E Rolf, K Klemmer, C Robinson, H Kerner - arXiv preprint arXiv:2402.01444, 2024"],"snippet":"Satellite data has the potential to inspire a seismic shift for machine learning -- one in which we rethink existing practices designed for traditional data modalities. As machine learning for satellite data (SatML) gains traction for its real-world impact …","url":["https://arxiv.org/pdf/2402.01444"]} {"year":"2024","title":"MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures","authors":["J Ni, F Xue, X Yue, Y Deng, M Shah, K Jain, G Neubig… - arXiv preprint arXiv …, 2024"],"snippet":"… In the detection phase, we train open-source LLMs on self-collected data to detect queries in Common Crawl splits. During filtering, we utilize GPT-4 Turbo to exclude non-query sentences. In classification, we categorize the filtered queries by input …","url":["https://arxiv.org/pdf/2406.06565"]} {"year":"2024","title":"MobileVLM: A Fast, Reproducible and Strong Vision Language Assistant for Mobile Devices","authors":["X Chu, L Qiao, X Lin, S Xu, Y Yang, Y Hu, F Wei… - arXiv preprint arXiv …, 2023"],"snippet":"We present MobileVLM, a competent multimodal vision language model (MMVLM) targeted to run on mobile devices. It is an amalgamation of a myriad of architectural designs and techniques that are mobile-oriented, which comprises a set of language …","url":["https://arxiv.org/pdf/2312.16886"]} {"year":"2024","title":"Modeling Caption Diversity in Contrastive Vision-Language Pretraining","authors":["S Lavoie, P Kirichenko, M Ibrahim, M Assran… - arXiv preprint arXiv …, 2024"],"snippet":"There are a thousand ways to caption an image. Contrastive Language Pretraining (CLIP) on the other hand, works by mapping an image and its caption to a single vector -- limiting how well CLIP-like models can represent the diverse ways to describe an …","url":["https://arxiv.org/pdf/2405.00740"]} {"year":"2024","title":"Modeling the Sacred: Considerations when Using Religious Texts in Natural Language Processing","authors":["B Hutchinson - arXiv preprint arXiv:2404.14740, 2024"],"snippet":"This position paper concerns the use of religious texts in Natural Language Processing (NLP), which is of special interest to the Ethics of NLP. 
Religious texts are expressions of culturally important values, and machine learned models have a …","url":["https://arxiv.org/pdf/2404.14740"]} {"year":"2024","title":"mOSCAR: A Large-scale Multilingual and Multimodal Document-level Corpus","authors":["M Futeral, A Zebaze, PO Suarez, J Abadji, R Lacroix… - arXiv preprint arXiv …, 2024"],"snippet":"… This motivated us to collect and release the first large-scale multilingual and multimodal document dataset derived from Common Crawl. Our dataset, multimodal OSCAR (mOSCAR), follows the OSCAR initiative [Ortiz Suárez et al., 2019, Abadji et …","url":["https://arxiv.org/pdf/2406.08707"]} {"year":"2024","title":"MoVA: Adapting Mixture of Vision Experts to Multimodal Context","authors":["Z Zong, B Ma, D Shen, G Song, H Shao, D Jiang, H Li… - arXiv preprint arXiv …, 2024"],"snippet":"As the key component in multimodal large language models (MLLMs), the ability of the visual encoder greatly affects MLLM's understanding on diverse image content. Although some large-scale pretrained vision encoders such as vision encoders in …","url":["https://arxiv.org/pdf/2404.13046"]} {"year":"2024","title":"mPLM-Sim: Better Cross-Lingual Similarity and Transfer in Multilingual Pretrained Language Models","authors":["P Lin, C Hu, Z Zhang, AFT Martins, H Schütze - Findings of the Association for …, 2024"],"snippet":"Recent multilingual pretrained language models (mPLMs) have been shown to encode strong language-specific signals, which are not explicitly provided during pretraining. It remains an open question whether it is feasible to employ mPLMs to …","url":["https://aclanthology.org/2024.findings-eacl.20.pdf"]} {"year":"2024","title":"mPLUG-DocOwl 1.5: Unified Structure Learning for OCR-free Document Understanding","authors":["A Hu, H Xu, J Ye, M Yan, L Zhang, B Zhang, C Li… - arXiv preprint arXiv …, 2024"],"snippet":"Structure information is critical for understanding the semantics of text-rich images, such as documents, tables, and charts. Existing Multimodal Large Language Models (MLLMs) for Visual Document Understanding are equipped with text recognition …","url":["https://arxiv.org/pdf/2403.12895"]} {"year":"2024","title":"MS MARCO Web Search: a Large-scale Information-rich Web Dataset with Millions of Real Click Labels","authors":["Q Chen, X Geng, C Rosset, C Buractaon, J Lu, T Shen… - Companion Proceedings of …, 2024"],"snippet":"Recent breakthroughs in large models have highlighted the critical significance of data scale, labels and modals. In this paper, we introduce MS MARCO Web Search, the first large-scale information-rich web dataset, featuring millions of real clicked …","url":["https://dl.acm.org/doi/pdf/10.1145/3589335.3648327"]} {"year":"2024","title":"Multi Class Depression Detection Through Tweets using Artificial Intelligence","authors":["MO Nusrat, W Shahzad, SA Jamal - arXiv preprint arXiv:2404.13104, 2024"],"snippet":"… Typically, GloVe embeddings were pretrained on extensive corpora like Wikipedia or Common Crawl and fine-tuned for specific tasks. 300 LSTM units were used with a dropout rate of 0.4. 
For depression classification, the dense layer was …","url":["https://arxiv.org/pdf/2404.13104"]} {"year":"2024","title":"Multi-class hate speech detection in the Norwegian language using FAST-RNN and multilingual fine-tuned transformers","authors":["E Hashmi, SY Yayilgan - Complex & Intelligent Systems, 2024"],"snippet":"… FastText, a word representation tool developed by Facebook’s research division, offers both unsupervised and supervised modes and features a comprehensive database of 2 million words from Common Crawl, each represented by 300 …","url":["https://link.springer.com/article/10.1007/s40747-024-01392-5"]} {"year":"2024","title":"Multi-Head Mixture-of-Experts","authors":["X Wu, S Huang, W Wang, F Wei - arXiv preprint arXiv:2404.15045, 2024"],"snippet":"Sparse Mixtures of Experts (SMoE) scales model capacity without significant increases in training and inference costs, but exhibits the following two issues: (1) Low expert activation, where only a small subset of experts are activated for …","url":["https://arxiv.org/pdf/2404.15045"]} {"year":"2024","title":"Multi-News+: Cost-efficient Dataset Cleansing via LLM-based Data Annotation","authors":["J Choi, J Yun, K Jin, YB Kim - arXiv preprint arXiv:2404.09682, 2024"],"snippet":"… an analysis of undesirable content in the common crawl corpus. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2 …","url":["https://arxiv.org/pdf/2404.09682"]} {"year":"2024","title":"Multi-Reference Benchmarks for Russian Grammatical Error Correction","authors":["FP Gomez, A Rozovskaya - Proceedings of the 18th Conference of the European …, 2024"],"snippet":"This paper presents multi-reference benchmarks for the Grammatical Error Correction (GEC) of Russian, based on two existing single-reference datasets, for a total of 7,444 learner sentences from a variety of first language backgrounds. Each …","url":["https://aclanthology.org/2024.eacl-long.76.pdf"]} {"year":"2024","title":"Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation","authors":["M Guan, Y Wang, G Ma, J Liu, M Sun - arXiv preprint arXiv:2405.05672, 2024"],"snippet":"Sign language serves as a non-vocal means of communication, transmitting information and significance through gestures, facial expressions, and bodily movements. The majority of current approaches for sign language recognition (SLR) …","url":["https://arxiv.org/pdf/2405.05672"]} {"year":"2024","title":"Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings","authors":["I Mohr, M Krimmel, S Sturua, MK Akram, A Koukounas… - arXiv preprint arXiv …, 2024"],"snippet":"… Seeking further diversity and volume, we utilize Common Crawl data to create two types of pairs: one from web page titles and their contents, and another by mining questionanswer pairs from FAQ and support-related pages. Additionally, we pair …","url":["https://arxiv.org/pdf/2402.17016"]} {"year":"2024","title":"Multilingual Diversity Improves Vision-Language Representations","authors":["T Nguyen, M Wallingford, S Santy, WC Ma, S Oh… - arXiv preprint arXiv …, 2024"],"snippet":"… Data We experiment with the medium pool of the DataComp benchmark [14], which consists of 128M image-text pairs randomly sampled from Common Crawl dumps between 2014 and 2022, and deduplicated. 
Unlike other heavily filtered …","url":["https://arxiv.org/pdf/2405.16915"]} {"year":"2024","title":"Multilingual E5 Text Embeddings: A Technical Report","authors":["L Wang, N Yang, X Huang, L Yang, R Majumder, F Wei - arXiv preprint arXiv …, 2024"],"snippet":"This technical report presents the training methodology and evaluation results of the open-source multilingual E5 text embedding models, released in mid-2023. Three embedding models of different sizes (small / base / large) are provided, offering a …","url":["https://arxiv.org/pdf/2402.05672"]} {"year":"2024","title":"Multilingual Hate Speech Detection: Comparison of Transfer Learning Methods to Classify German, Italian, and Spanish Posts","authors":["J Fillies, MP Hoffmann, A Paschke - 2023 IEEE International Conference on Big Data …, 2023"],"snippet":"… This dataset includes Wikipedia dumps for all supported languages and a substantial 2.5 terabytes of Common Crawl Data collected from the Internet. The “RoBERTa” component in the model’s name indicates the adherence to the training procedure …","url":["https://www.computer.org/csdl/proceedings-article/bigdata/2023/10386244/1TUOShUglSU"]} {"year":"2024","title":"Multilingual Model Fine-tuning for Sentiment Analysis","authors":["ML Elrefai, MI Khalil, HM Abbas - … on Intelligent Computing and Information Systems …, 2023"],"snippet":"… A sizable multilingual language model was created using 2.5 TB of filtered CommonCrawl data. One of the outstanding XLMRoBERTa in low-resource languages, Low-resource languages are languages with limited available data for …","url":["https://ieeexplore.ieee.org/abstract/document/10391141/"]} {"year":"2024","title":"Multilingual Models in Neural Machine Translation","authors":["G Yang"],"snippet":"… XGLM [19] uses the decoder-only model architecture similar to that of GPT-3 [6], and was pre-trained on CC100-XL that covers 68 Common Crawl (CC) snapshots and 134 languages. Experiment results show that XGLM (7.5B) outperforms GPT-3 (6.7B) …","url":["http://mi.eng.cam.ac.uk/~wjb31/MPhil_Thesis_Guangyu_Yang.pdf"]} {"year":"2024","title":"MultiLS: A Multi-task Lexical Simplification Framework","authors":["K North, T Ranasinghe, M Shardlow, M Zampieri - arXiv preprint arXiv:2402.14972, 2024"],"snippet":"Lexical Simplification (LS) automatically replaces difficult to read words for easier alternatives while preserving a sentence's original meaning. LS is a precursor to Text Simplification with the aim of improving text accessibility to various target …","url":["https://arxiv.org/pdf/2402.14972"]} {"year":"2024","title":"Multimodal Learning for Visual Question Answering using World Knowledge","authors":["MBA Alhaj"],"snippet":"Navigating the frontier of the Visual Turing Test, this research delves into multimodal learning to bridge the gap between visual perception and linguistic interpretation, a foundational challenge in artificial intelligence. It scrutinizes the integration of visual …","url":["https://huggingface.co/spaces/m7mdal7aj/KB-VQA/resolve/main/Files/Dissertation%20Report.pdf"]} {"year":"2024","title":"Multiscale cascaded domain-based approach for Arabic fake reviews detection in e-commerce platforms","authors":["N Qandos, G Hamad, M Alharbi, S Alturki, W Alharbi… - Journal of King Saud …, 2024"],"snippet":"Fake reviews in e-commerce can lead to customer deception and financial losses. Despite the importance of fake reviews detection, studies for Arabic language are scarce due to the lack of comprehensive datasets. 
This study addresses this gap by …","url":["https://www.sciencedirect.com/science/article/pii/S1319157824000156"]} {"year":"2024","title":"Multivariate Log-based Anomaly Detection for Distributed Database","authors":["L Zhang, T Jia, M Jia, Y Li, Y Yang, Z Wu - arXiv preprint arXiv:2406.07976, 2024"],"snippet":"… Following this, MultiLog utilizes pre-trained word vectors based on the Common Crawl Corpus with the FastText algorithm[45], which is adept at capturing the inherent relationships among words in natural language. This means each word in a …","url":["https://arxiv.org/pdf/2406.07976"]} {"year":"2024","title":"Musical Word Embedding for Music Tagging and Retrieval","authors":["SH Doh, J Lee, D Jeong, J Nam - arXiv preprint arXiv:2404.13569, 2024"],"snippet":"… We used two publicly available embeddings trained on Common Crawl, a large-scale general word corpus, with the GloVe and skipgram … It demonstrates that word embeddings trained with a general word corpus, such as Common Crawl and Wiki …","url":["https://arxiv.org/pdf/2404.13569"]} {"year":"2024","title":"Myanmar XNLI: Building a Dataset and Exploring Low-resource Approaches to Natural Language Inference with Myanmar","authors":["AK Htet, M Dras - 2024"],"snippet":"Despite dramatic recent progress in NLP, it is still a major challenge to apply Large Language Models (LLM) to low-resource languages. This is made visible in benchmarks such as Cross-Lingual Natural Language Inference (XNLI), a key task …","url":["https://www.researchsquare.com/article/rs-4329843/latest.pdf"]} {"year":"2024","title":"Named Entity Recognition in Italian Lung Cancer Clinical Reports using Transformers","authors":["D Paolo, A Bria, C Greco, M Russano, S Ramella… - 2023 IEEE International …, 2023"],"snippet":"The widespread adoption of electronic health records (EHRs) offers a valuable opportunity to support clinical research by containing crucial patient information, including diagnoses, symptoms, medications, lab tests, and more. Despite the …","url":["https://ieeexplore.ieee.org/abstract/document/10385778/"]} {"year":"2024","title":"NATALITY AT RISK? RAISING DOUBTS ON THE EDUCATIONAL IMPORTANCE OF ChatGPT","authors":["P Rojahn - Colloquium, 2024"],"snippet":"Digital tools are part of our daily life. Thus, they also enter educational contexts. With reference to Hannah Arendt’s writings this paper explores if, and to what extent, digital tools can be considered to be helpful and desirable in education. For this …","url":["https://colloquium.elsite.eu/index.php/colloquium/article/download/908/527"]} {"year":"2024","title":"Natural Language Processing Advancements: Breaking Barriers in Human-Computer Interaction","authors":["JGC Ramírez - Journal of Artificial Intelligence General science (JAIGS …, 2024"],"snippet":"Natural Language Processing (NLP) advancements have revolutionized human-computer interaction, breaking barriers and opening new frontiers in technology. NLP techniques enable machines to understand, interpret, and generate human …","url":["https://ojs.boulibrary.com/index.php/JAIGS/article/download/63/35"]} {"year":"2024","title":"Natural Language Processing for Dialects of a Language: A Survey","authors":["A Joshi, R Dabre, D Kanojia, Z Li, H Zhan, G Haffari… - arXiv preprint arXiv …, 2024"],"snippet":"State-of-the-art natural language processing (NLP) models are trained on massive training corpora, and report a superlative performance on evaluation datasets. 
This survey delves into an important attribute of these datasets: the dialect of a language …","url":["https://arxiv.org/pdf/2401.05632"]} {"year":"2024","title":"Natural Language Processing Journal","authors":["A Caesar, O Özdemir, C Weber, S Wermter"],"snippet":"Natural language processing and vision tasks have recently seen large improvements through the rise of Transformer architectures. The high-performing large language models (LLMs) benefit from large textual datasets that are …","url":["https://www2.informatik.uni-hamburg.de/wtm/publications/2024/COWW24a/1-s2.0-S2949719124000207-main.pdf"]} {"year":"2024","title":"Natural Language Processing Methods for Symbolic Music Generation and Information Retrieval: a Survey","authors":["DVT Le, L Bigo, M Keller, D Herremans - arXiv preprint arXiv:2402.17467, 2024"],"snippet":"Several adaptations of Transformers models have been developed in various domains since its breakthrough in Natural Language Processing (NLP). This trend has spread into the field of Music Information Retrieval (MIR), including studies …","url":["https://arxiv.org/pdf/2402.17467"]} {"year":"2024","title":"Natural Language Processing Tools for Romanian–Going Beyond a Low-Resource Language","authors":["M Nitu, M Dascalu"],"snippet":"… Oscar (Open Super-large Crawled Aggregated coRpus) [64] is a multilingual dataset obtained via filtering the common crawl corpus. It encompasses 151 languages. The Romanian sub-corpus contains over 4,5 million documents …","url":["https://www.researchgate.net/profile/Nitu-Melania-2/publication/380779085_Natural_Language_Processing_Tools_for_Romanian_-_Going_Beyond_a_Low-Resource_Language/links/664efa81479366623a08548f/Natural-Language-Processing-Tools-for-Romanian-Going-Beyond-a-Low-Resource-Language.pdf"]} {"year":"2024","title":"Natural Language Processing","authors":["CH Focke, RM Sylvester - Statistical Methods in Epilepsy, 2024"],"snippet":"In this chapter, we introduce modern natural language processing (NLP) libraries, techniques and their applications. This chapter focuses on deep learning methods and less on computational linguistics that require nuanced knowledge of linguistics …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=x1QIEQAAQBAJ&oi=fnd&pg=PA302&dq=commoncrawl&ots=nunktBqRF2&sig=V-n3c25TaVqCLY-Hs7tRdfKtPBI"]} {"year":"2024","title":"Navigating Challenges and Technical Debt in Large Language Models Deployment","authors":["A Menshawy, Z Nawaz, M Fahmy - Proceedings of the 4th Workshop on Machine …, 2024"],"snippet":"… These models, trained on an internet scale data including common crawl, books, and articles, have acquired an extensive understanding of language patterns, nuances, and contexts. 
This profound comprehension allows LLMs to perform a …","url":["https://dl.acm.org/doi/abs/10.1145/3642970.3655840"]} {"year":"2024","title":"Navigating the ethical landscape behind ChatGPT","authors":["L Peng, B Zhao - Big Data & Society, 2024"],"snippet":"… While ChatGPT and similar text-generating AIs claim to be trained on mostly public data such as the Common Crawl archives, rendering … content in public datasets used for AI training, such as the Common Crawl, which is beyond the …","url":["https://journals.sagepub.com/doi/pdf/10.1177/20539517241237488"]} {"year":"2024","title":"Navigating the Legal Landscape: Technical Implementation of Copyright Reservations for Text and Data Mining in the Era of AI Language Models","authors":["L Löbling, C Handschigl, K Hofmann, J Schwedhelm - JIPITEC–Journal of Intellectual …, 2023"],"snippet":"… However, limitations arose from the potential omission of specific pages of a website in individual monthly crawls by Common Crawl. Through manual investigation, a section containing the terms of use could be identified for 40 …","url":["https://www.jipitec.eu/jipitec/article/download/16/11"]} {"year":"2024","title":"Near to Mid-term Risks and Opportunities of Open Source Generative AI","authors":["F Eiras, A Petrov, B Vidgen, CS de Witt, F Pizzati… - arXiv preprint arXiv …, 2024"],"snippet":"In the next few years, applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education. The potential for these seismic changes has triggered a lively debate about potential …","url":["https://arxiv.org/pdf/2404.17047"]} {"year":"2024","title":"Need a Programming Exercise Generated in Your Native Language? ChatGPT's Got Your Back: Automatic Generation of Non-English Programming Exercises Using …","authors":["M Jordan, K Ly, AG Soosai Raj - Proceedings of the 55th ACM Technical Symposium …, 2024"],"snippet":"Large language models (LLMs) like ChatGPT are changing computing education and may create additional barriers to those already faced by non-native English speakers (NNES) learning computing. We investigate an opportunity for a positive …","url":["https://dl.acm.org/doi/pdf/10.1145/3626252.3630897"]} {"year":"2024","title":"Nemotron-4 15B Technical Report","authors":["J Parmar, S Prabhumoye, J Jennings, M Patwary… - arXiv preprint arXiv …, 2024"],"snippet":"We introduce Nemotron-4 15B, a 15-billion-parameter large multilingual language model trained on 8 trillion text tokens. Nemotron-4 15B demonstrates strong performance when assessed on English, multilingual, and coding tasks: it …","url":["https://arxiv.org/pdf/2402.16819"]} {"year":"2024","title":"Neural Approaches to Decode Semantic Similarities in Spanish Song Lyrics for Enhanced Recommendation Systems","authors":["A Ghajari Espinosa - 2024"],"snippet":"… In this model, it was found that traditional methods of pre-training, which rely on large amounts of data from sources such as Common Crawl… The mC4 dataset is a multilingual variant of C4, containing natural text in 101 languages from the public …","url":["http://e-spacio.uned.es/fez/eserv/bibliuned:master-ETSInformatica-TL-Aghajari/Ghajari_Espinosa_Adrian_TFM.pdf"]} {"year":"2024","title":"NEURAL MODELS FOR ONTOLOGY ANNOTATIONS-NEMO","authors":["P Devkota, SD Mohanty"],"snippet":"Words cannot express my gratitude to my advisors Dr. Somya D. Mohanty and Dr. Prashanti Manda for their invaluable supervision and guidance. 
I am extremely grateful to them for their continuous support, constructive suggestions, and …","url":["https://libres.uncg.edu/ir/uncg/f/Devkota_uncg_0154M_13725.pdf"]} {"year":"2024","title":"Neural Network-based Language Modelling for Code-Switching in South African Languages","authors":["JJ van Vüren"],"snippet":"Language models, specifically those based on neural network architectures, have become an attractive alternative to classical statistical language models for natural language processing and speech recognition. Naturally, therefore, the field of neural …","url":["https://scholar.sun.ac.za/bitstreams/481dc105-b6b9-402b-9e68-f600dbc56745/download"]} {"year":"2024","title":"Neural Networks with Model Compression","authors":["B Zhang, T Wang, S Xu, D Doermann - 2024"],"snippet":"… Firstly, the availability of large-scale datasets, such as ImageNet for computer vision or the Common Crawl dataset for natural language processing, has enabled the training of deep neural networks with millions or even billions of parameters …","url":["https://link.springer.com/content/pdf/10.1007/978-981-99-5068-3.pdf"]} {"year":"2024","title":"No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models","authors":["A Pouget, L Beyer, E Bugliarello, X Wang, AP Steiner… - arXiv preprint arXiv …, 2024"],"snippet":"We study cultural and socioeconomic diversity in contrastive vision-language models (VLMs). Using a broad range of benchmark datasets and evaluation metrics, we bring to attention several important findings. First, the common filtering of training …","url":["https://arxiv.org/pdf/2405.13777"]} {"year":"2024","title":"Non-governmental Governance of Trust on the Internet: WebPKI as Public Good","authors":["K Grindal, M Mueller, V Srivastava"],"snippet":"… The population of sites we sampled from was indexed by the non-profit foundation Common Crawl. We then employed a script to pull certificate information from our list of URLs. Slightly more than half the sample did not return certificate …","url":["https://weis.utdallas.edu/files/2024/03/Grindal-et-al-WEIS-2024-0836cad1909f1424.pdf"]} {"year":"2024","title":"Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models","authors":["ECF German"],"snippet":"This paper identifies a cultural dominance issue within large language models (LLMs) due to the predominant use of English data in model training (eg, ChatGPT). LLMs often provide inappropriate English-culture-related answers that are not relevant to …","url":["https://openreview.net/pdf?id=Og0zmSOFEI0"]} {"year":"2024","title":"Notes towards infrastructure governance for large language models","authors":["L Dal Molin - First Monday, 2024"],"snippet":"This paper draws on information infrastructures (IIs) in science and technology studies (STS), as well as on feminist STS scholarship and contemporary critical accounts of digital technologies, to build an initial mapping of the infrastructural …","url":["https://firstmonday.org/ojs/index.php/fm/article/download/13567/11409"]} {"year":"2024","title":"NSINA: A News Corpus for Sinhala","authors":["H Hettiarachchi, D Premasiri, L Uyangodage… - arXiv preprint arXiv …, 2024"],"snippet":"The introduction of large language models (LLMs) has advanced natural language processing (NLP), but their effectiveness is largely dependent on pre-training resources. 
This is especially evident in low-resource languages, such as Sinhala …","url":["https://arxiv.org/pdf/2403.16571"]} {"year":"2024","title":"Numerical Claim Detection in Finance: A New Financial Dataset, Weak-Supervision Model, and Market Analysis","authors":["A Shah, A Hiray, P Shah, A Banerjee, A Singh… - arXiv preprint arXiv …, 2024"],"snippet":"In this paper, we investigate the influence of claims in analyst reports and earnings calls on financial market returns, considering them as significant quarterly events for publicly traded companies. To facilitate a comprehensive analysis, we construct a …","url":["https://arxiv.org/pdf/2402.11728"]} {"year":"2024","title":"NusaBERT: Teaching IndoBERT to be Multilingual and Multicultural","authors":["W Wongso, DS Setiawan, S Limcorn, A Joyoadikusumo - arXiv preprint arXiv …, 2024"],"snippet":"… We also note that NusaBERT was pre-trained on Wikipedia and common crawl corpora, which explains its effectiveness on and closeness to NusaX and NusaTranslation source domains, but not so for NusaParagraph. Thus, due to their high linguistic and …","url":["https://arxiv.org/html/2403.01817v1"]} {"year":"2024","title":"Nyonic Technical Report","authors":["J Tian, R Wang, C Li, Y Zhou, J Liu, J Wang - arXiv preprint arXiv:2404.15702, 2024"],"snippet":"… Our curated dataset is specifically designed to meet these criteria, incorporating sources such as the Common Crawl, books, Wikipedia, code, and … Our data sources encompass Common Crawl, Wiki, and various coding repositories. The final …","url":["https://arxiv.org/pdf/2404.15702"]} {"year":"2024","title":"Ocean Decade Vision 2030 White Papers–Challenge 8: Create a digital representation of the ocean.","authors":["JB Calewaert, PC Sierra-Correa, O McMeel… - 2024"],"snippet":"… The Common crawl and WebImageText are examples, with communities such as Hugging Face and KAGGLE creating new communities of practice16F … 17 https://commoncrawl.org, https://github.com/google-research-datasets/wit, https://huggingface.co, https://www.kaggle.com …","url":["https://aquadocs.org/bitstream/handle/1834/43137/WP8_v.2.0.docx.pdf?sequence=1&isAllowed=y"]} {"year":"2024","title":"OLMo: Accelerating the Science of Language Models","authors":["D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney… - arXiv preprint arXiv …, 2024"],"snippet":"… well on evaluations predominated by Common Crawl, such as C4, though different ways of postprocessing Common Crawl are best fit by … The RedPajama evaluation shows a similar pattern, perhaps as only 2 of its 7 domains are from …","url":["https://arxiv.org/pdf/2402.00838"]} {"year":"2024","title":"Omniview-Tuning: Boosting Viewpoint Invariance of Vision-Language Pre-training Models","authors":["S Ruan, Y Dong, H Liu, Y Huang, H Su, X Wei - arXiv preprint arXiv:2404.12139, 2024"],"snippet":"Vision-Language Pre-training (VLP) models like CLIP have achieved remarkable success in computer vision and particularly demonstrated superior robustness to distribution shifts of 2D images. However, their robustness under 3D viewpoint …","url":["https://arxiv.org/pdf/2404.12139"]} {"year":"2024","title":"On Labs and Fabs: Mapping How Alliances, Acquisitions, and Antitrust are Shaping the Frontier AI Industry","authors":["T Aguirre - arXiv preprint arXiv:2406.01722, 2024"],"snippet":"As frontier AI models progress, policy proposals for safe AI development are gaining increasing attention from researchers and policymakers. 
This paper evaluates the present landscape of integration within the AI supply chain, emphasizing vertical …","url":["https://arxiv.org/pdf/2406.01722"]} {"year":"2024","title":"On Protecting the Data Privacy of Large Language Models (LLMs): A Survey","authors":["B Yan, K Li, M Xu, Y Dong, Y Zhang, Z Ren, X Cheng - arXiv preprint arXiv …, 2024"],"snippet":"… For example, GPT-3, developed by OpenAI, was pre-trained using CommonCrawl, constituting 45TB of compressed plaintext before filtering [37]. Regarding multimodal LLMs, CLIP’s training dataset encompasses 400 million pairs of images and text …","url":["https://arxiv.org/pdf/2403.05156"]} {"year":"2024","title":"On the data scarcity problem of neural-based named entity recognition","authors":["R Zhou - 2023"],"snippet":"… The initial release trained on Wikipedia includes word representations for nine languages, namely Arabic, Czech, German, English, Spanish, French, Italian, Romanian and Russian, and has quickly expanded to 157 languages by training on larger …","url":["https://dr.ntu.edu.sg/bitstream/10356/173481/2/Thesis_ZhouRan_final.pdf"]} {"year":"2024","title":"On the evaluation and application of neural language models for grammatical error detection","authors":["C Davis - 2024"],"snippet":"Neural language models (NLM) have become a core component in many downstream applications within the field of natural language processing, including the task of data-driven automatic grammatical error detection (GED). This thesis …","url":["https://www.repository.cam.ac.uk/bitstreams/f9ca765c-f10e-4b64-8c10-3d3b7d38810d/download"]} {"year":"2024","title":"On the importance of Data Scale in Pretraining Arabic Language Models","authors":["A Ghaddar, P Langlais, M Rezagholizadeh, B Chen - arXiv preprint arXiv:2401.07760, 2024"],"snippet":"Pretraining monolingual language models have been proven to be vital for performance in Arabic Natural Language Processing (NLP) tasks. In this paper, we conduct a comprehensive study on the role of data in Arabic Pretrained Language …","url":["https://arxiv.org/pdf/2401.07760"]} {"year":"2024","title":"On the Question of Authorship in Large Language Models","authors":["C Soos, L Haroutunian - KO KNOWLEDGE ORGANIZATION, 2024"],"snippet":"… CommonCrawl exemplifies the problem of documentation debt. The dataset is an effort by The Common Crawl Foundation to “[democratize] access to web information by producing and maintaining an open repository of web crawl data” (Common …","url":["https://www.nomos-elibrary.de/10.5771/0943-7444-2024-2-83.pdf"]} {"year":"2024","title":"On the Tip of the Tongue: Analyzing Conceptual Representation in Large Language Models with Reverse-Dictionary Probe","authors":["N Xu, Q Zhang, M Zhang, P Qian, X Huang - arXiv preprint arXiv:2402.14404, 2024"],"snippet":"Probing and enhancing large language models' reasoning capacity remains a crucial open question. Here we re-purpose the reverse dictionary task as a case study to probe LLMs' capacity for conceptual inference. We use in-context learning …","url":["https://arxiv.org/pdf/2402.14404"]} {"year":"2024","title":"One Law, Many Languages: Benchmarking Multilingual Legal Reasoning for Judicial Support","authors":["V Rasiah, R Stern, V Matoshi, M Stürmer, I Chalkidis…"],"snippet":"Recent strides in Large Language Models (LLMs) have saturated many NLP benchmarks (even professional domain-specific ones), emphasizing the need for more challenging ones to properly assess LLM capabilities. 
Domain-specific and …","url":["https://openreview.net/pdf?id=7vkz7cKd1X"]} {"year":"2024","title":"Online health search via multi-dimensional information quality assessment based on deep language models","authors":["B Zhang, N Naderi, R Mishra, D Teodoro - 2024"],"snippet":"Background Widespread misinformation in Web resources can lead to serious implications for individuals seeking health advice. Despite that, information retrieval models are often focused only on the query-document relevance dimension to rank …","url":["https://hal.science/hal-04419996/"]} {"year":"2024","title":"Online Health Search Via Multidimensional Information Quality Assessment Based on Deep Language Models: Algorithm Development and Validation","authors":["B Zhang, N Naderi, R Mishra, D Teodoro - JMIR AI, 2024"],"snippet":"… In this study, we simulated online health information search scenarios with a topic set of 32 different health-related inquiries and a corpus containing 1 billion web documents from the April 2019 snapshot of Common Crawl. Using state-of-the-art …","url":["https://ai.jmir.org/2024/1/e42630"]} {"year":"2024","title":"Ontology Completion with Natural Language Inference and Concept Embeddings: An Analysis","authors":["N Li, T Bailleux, Z Bouraoui, S Schockaert - arXiv preprint arXiv:2403.17216, 2024"],"snippet":"We consider the problem of finding plausible knowledge that is missing from a given ontology, as a generalisation of the well-studied taxonomy expansion task. One line of work treats this task as a Natural Language Inference (NLI) problem, thus relying …","url":["https://arxiv.org/html/2403.17216v1"]} {"year":"2024","title":"Open Assistant Toolkit--version 2","authors":["S Fischer, F Rossetto, C Gemmell, A Ramsay, I Mackie… - arXiv preprint arXiv …, 2024"],"snippet":"… To source knowledge and tasks, we release a new offline pipeline to parse and augment task data from CommonCrawl. The offline pipeline supports using LLMs and additional open-source multimodal data sources to enhance tasks and make …","url":["https://arxiv.org/pdf/2403.00586"]} {"year":"2024","title":"Open Science at the Generative AI Turn: An Exploratory Analysis of Challenges and Opportunities","authors":["M Hosseini, SPJM Horbach, K Holmes…"],"snippet":"Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools/platforms for collaborative research and sharing results. Due to this direct relationship …","url":["https://osf.io/zns7g/download"]} {"year":"2024","title":"OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models","authors":["F Xue, Z Zheng, Y Fu, J Ni, Z Zheng, W Zhou, Y You - arXiv preprint arXiv:2402.01739, 2024"],"snippet":"To help the open-source community have a better understanding of Mixture-of-Experts (MoE) based large language models (LLMs), we train and release OpenMoE, a series of fully open-sourced and reproducible decoder-only MoE LLMs, ranging from …","url":["https://arxiv.org/pdf/2402.01739"]} {"year":"2024","title":"OptiComm-GPT: a GPT-based versatile research assistant for optical fiber communication systems","authors":["X Jiang, M Zhang, Y Song, Y Zhang, Y Wang, C Ju… - Optics Express, 2024"],"snippet":"… Generally, LLMs are pre-trained on publicly available datasets such as BooksCorpus, Common Crawl, and Wikipedia, thereby ensuring comprehensive coverage of knowledge. 
Meanwhile, with the support of powerful analysis, reasoning …","url":["https://opg.optica.org/oe/fulltext.cfm?uri=oe-32-12-20776"]} {"year":"2024","title":"Optimization and Evaluation in Machine Learning Challenges","authors":["J Pokrywka"],"snippet":"To develop new machine learning methods, it is necessary to evaluate them reliably. This doctoral thesis discusses some aspects of preparing machine learning challenges and techniques for developing their solutions. The work consists of …","url":["https://repozytorium.amu.edu.pl/bitstreams/7a38c2a1-c992-4fa5-9ece-1c73f1024154/download"]} {"year":"2024","title":"Optimization of news dissemination push mode by intelligent edge computing technology for deep learning","authors":["JL DeGe, S Sang - Scientific Reports, 2024"],"snippet":"… Realnew (https://paperswithcode.com/dataset/realnews) is a large corpus of news articles from CommonCrawl. The data is crawled from CommonCrawl, which is limited to 5000 news fields of Google News Index. The author uses the …","url":["https://www.nature.com/articles/s41598-024-53859-7"]} {"year":"2024","title":"Optimization of Unsupervised Neural Machine Translation Based on Syntactic Knowledge Improvement","authors":["A Zhou - Optimization, 2023"],"snippet":"… Common Crawl is a multilingual aligned dataset based on web crawling. Para Crawl is a multilingual parallel corpus for large-scale web crawling. As shown in Table III, the detection accuracies of LSTM in Newstest2020, Newstest2021 …","url":["https://search.proquest.com/openview/b003a8244f2be3b8735b21ff0421e7d0/1?pq-origsite=gscholar&cbl=5444811"]} {"year":"2024","title":"Optimizing Data I/O for LLM Datasets on Remote Storage","authors":["T Zhong, J Zhao, X Guo, Q Su, G Fox"],"snippet":"Training large language models (LLMs) demands increasingly larger datasets for optimal performance [13]. In practice, these datasets may include hundreds of terabytes (TB) or even petabytes of data. For example, Common Crawl [3] needs …","url":["https://luosuu.github.io/assets/files/AIOps_24.pdf"]} {"year":"2024","title":"Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean","authors":["CS Choi, Y Jeong, S Park, IH Won, HS Lim, SM Kim… - arXiv preprint arXiv …, 2024"],"snippet":"… Llama2 is a multilingual language model trained using largescale publicly available data (including CommonCrawl, Github, Wikipedia, and ArXiv) for more than 28 languages. Thus, it possesses cross-language understanding capabilities …","url":["https://arxiv.org/pdf/2403.10882"]} {"year":"2024","title":"ORCID","authors":["P Gries"],"snippet":"How are Asian and Black men and women stereotyped? Research from the gendered race and stereotype content perspectives has produced mixed empirical findings. Using BERT models pre-trained on English language books, news articles …","url":["https://psychbruce.github.io/paper/Bao_Accepted_BJSP_FMAT_Stereotype_Manuscript.pdf"]} {"year":"2024","title":"Organic Data-Driven Approach for Turkish Grammatical Error Correction and LLMs","authors":["A Ersoy, OT Yıldız - arXiv preprint arXiv:2405.15320, 2024"],"snippet":"… 2020) which contains random paragraphs sampled from the CommonCrawl dataset3. 
Some studies put some effort into filling the gap and … 2020), a multilingual pre-trained encoder-decoder text-to-text transformer trained on a Common Crawl-based …","url":["https://arxiv.org/pdf/2405.15320"]} {"year":"2024","title":"Outlier Detection in Serbian CommonCrawl Data","authors":["V Kalušev, D Ćulibrk - 2024 23rd International Symposium INFOTEH …, 2024"],"snippet":"The surge in large language models (LLMs) has greatly advanced natural language processing. However, their development is often hampered by the limited availability and quality of training datasets, particularly for underrepresented languages. Our …","url":["https://ieeexplore.ieee.org/abstract/document/10495987/"]} {"year":"2024","title":"Outsmarting Artificial Intelligence in the Classroom—Incorporating Large Language Model-Based Chatbots into Teaching","authors":["J Wutzler - Issues in Accounting Education, 2024"],"snippet":"Since the release of ChatGPT in November 2022, large language model-based chatbots have attracted much attention. Although businesses value their potential for efficiency gains, academics are concerned about their effects on learning and …","url":["https://publications.aaahq.org/iae/article/doi/10.2308/ISSUES-2023-064/12560"]} {"year":"2024","title":"Overview of Existing LLM Families","authors":["A Kucharavy - Large, 2024"],"snippet":"… data that are considered as redundant—namely the Common Crawl and Colossal Cleaned Common Crawl (C4). Researchers who created the … to Common Crawl when it came to training LLMs [30] and should be used instead of the whole …","url":["https://link.springer.com/content/pdf/10.1007/978-3-031-54827-7.pdf#page=47"]} {"year":"2024","title":"Overview of the TREC 2023 NeuCLIR Track (Notebook Paper)","authors":["D Lawrie, S MacAvaney, J Mayfield, P McNamee…"],"snippet":"… For more information about extracting the text for the documents from CommonCrawl News and how the collection can be obtained, see … of the track including HC4 — a CLIR collection built over three years of Common Crawl data in …","url":["https://terpconnect.umd.edu/~oard/pdf/trec23neuclir-notebook.pdf"]} {"year":"2024","title":"Pack of LLMs: Model Fusion at Test-Time via Perplexity Optimization","authors":["C Mavromatis, P Karypis, G Karypis - arXiv preprint arXiv:2404.11531, 2024"],"snippet":"… 2019) is sa publicly available distribution of a Common Crawl snapshot. Following cBTM, we use 168B tokens of the no blocklist version (en.noblocklist) that is out of distribution to the OPT model’s pretraining corpus. S2ORC (Lo et al.…","url":["https://arxiv.org/pdf/2404.11531"]} {"year":"2024","title":"PAL: Proxy-Guided Black-Box Attack on Large Language Models","authors":["C Sitawarin, N Mu, D Wagner, A Araujo - arXiv preprint arXiv:2402.09674, 2024"],"snippet":"Large Language Models (LLMs) have surged in popularity in recent months, but they have demonstrated concerning capabilities to generate harmful content when manipulated. While techniques like safety fine-tuning aim to minimize harmful use …","url":["https://arxiv.org/pdf/2402.09674"]} {"year":"2024","title":"Parameter-Efficient Tuning of Large Convolutional Models","authors":["W Chen, Z Miao, Q Qiu - arXiv preprint arXiv:2403.00269, 2024"],"snippet":"To address the high computational and parameter complexity associated with fine-tuning large pre-trained models, researchers have developed parameter-efficient methods, where only partial parameters are updated for downstream tasks. 
However, these …","url":["https://arxiv.org/pdf/2403.00269"]} {"year":"2024","title":"Paraphrase Generation and Supervised Learning for Improved Automatic Short Answer Grading","authors":["L Ouahrani, D Bennouar - International Journal of Artificial Intelligence in …, 2024"],"snippet":"We consider the reference-based approach for Automatic Short Answer Grading (ASAG) that involves scoring a textual constructed student answer comparing to a teacher-provided reference answer. The reference answer does not cover the variety of student …","url":["https://link.springer.com/article/10.1007/s40593-023-00391-w"]} {"year":"2024","title":"Parsability revisited and reassessed","authors":["S Monakhov - Journal of Linguistics, 2023"],"snippet":"This paper provides evidence that the inveterate way of assessing linguistic items’ degrees of analysability by calculating their derivation to base frequency ratios may obfuscate the difference between two meaning processing models: one based on …","url":["https://www.cambridge.org/core/services/aop-cambridge-core/content/view/896CFA4F3C89952AF52611DCBCD262B3/S0022226723000385a.pdf/parsability-revisited-and-reassessed.pdf"]} {"year":"2024","title":"Part-of-Speech Tagger for Bodo Language using Deep Learning approach","authors":["D Pathak, S Narzary, S Nandi, B Som - arXiv preprint arXiv:2401.03175, 2024"],"snippet":"Language Processing systems such as Part-of-speech tagging, Named entity recognition, Machine translation, Speech recognition, and Language modeling (LM) are well-studied in high-resource languages. Nevertheless, research on these …","url":["https://arxiv.org/pdf/2401.03175"]} {"year":"2024","title":"Pathological Liars: Algorithmic Knowing in the Rhetorical Ecosystem of Wallstreetbets","authors":["MH Yang, ZP Majdik - Rhetoric Society Quarterly, 2024"],"snippet":"This essay demonstrates the value of using artificial intelligence (AI) technologies to address specific kinds of research questions in rhetoric. The essay builds on a study of a novel rhetorical object first observed by Yang on the Reddit subreddit r/wallstreetbets …","url":["https://www.tandfonline.com/doi/abs/10.1080/02773945.2024.2343616"]} {"year":"2024","title":"Pathological-Llama: an Explainable Medical Visual Question An","authors":["S Nguyen - 2024"],"snippet":"This thesis introduces Pathological-Llama, an explainable medical visual question answering system that integrates computer vision and natural language processing to accurately interpret medical images through a generative task approach. By …","url":["https://www.zhaw.ch/storage/engineering/institute-zentren/cai/studentische_arbeiten/Herbst_2023/MSE__Master_Thesis_23_bogo_PathologicalLlama.pdf"]} {"year":"2024","title":"Pedro L. Rodriguez Arthur Spirling Brandon M. Stewart Elisa M. Wirsching","authors":["BM Stewart"],"snippet":"… from Common Crawl and Wikipedia. A strength of the fastText model is that it uses subword … On inspection, we saw that Common Crawl includes many typos and rare terms (plus many … Beyond this potential for noise, Common Crawl is not …","url":["https://alcembeddings.org/assets/img/RSSW_paper_january_2024.pdf"]} {"year":"2024","title":"Performance Comparison of Large Language Models, GPT and Gemini on Turkish News Classification Task","authors":["ZA Guven - 2024"],"snippet":"… This was achieved by training a Transformer-based masked language model on 100 languages using more than two terabytes of filtered CommonCrawl data. 
The balance between high and low-resource languages and the impact of language …","url":["https://www.researchsquare.com/article/rs-4351735/latest.pdf"]} {"year":"2024","title":"Performance Evaluation of Word Embedding Algorithms","authors":["S Kulshretha, L Lodha - Performance Evaluation, 2023"],"snippet":"This study intends to explore the field of word embedding and thoroughly examine and contrast various word embedding algorithms. Words retain their semantic relationships and meaning when they are transformed into vectors using word …","url":["https://ijisrt.com/assets/upload/files/IJISRT23DEC1110.pdf"]} {"year":"2024","title":"Performance of ChatGPT on American Board of Surgery In-Training Examination Preparation Questions","authors":["CG Tran, J Chang, SK Sherman, JP De Andrade - Journal of Surgical Research, 2024"],"snippet":"… The training dataset of ChatGPT 3.5, where the artificial intelligence (AI) “knowledge” is derived, is comprised of Common Crawl (a monthly updated web archive of data), books, journals, and Wikipedia and totals over 500 billion encoded “tokens.” Many …","url":["https://www.sciencedirect.com/science/article/pii/S0022480424002294"]} {"year":"2024","title":"Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models","authors":["Z Ankner, C Blakeney, K Sreenivasan, M Marion… - arXiv preprint arXiv …, 2024"],"snippet":"… Secondly, while previous work only investigates pruning on datasets composed of just one domain (CommonCrawl1), we consider two … dataset, with 81.31% of its data being derived from the CommonCrawl. We find that successful pruning …","url":["https://arxiv.org/pdf/2405.20541"]} {"year":"2024","title":"Personal Science Guide to AI","authors":["R Sprague - 2024"],"snippet":"Imagine you have access to a zillion documents, preferably curated in some way reassures you about their quality and consistency. Wikipedia, for example, or maybe Reddit and other posts that have been sufficiently up-voted. Maybe you also have a …","url":["https://ai.personalscience.com/PS-AI/Book/psai.pdf"]} {"year":"2024","title":"Personality trait detection via transfer learning","authors":["B Alshouha, J Serrano-Guerrero, F Chiclana… - Comput Mater Continua …, 2024"],"snippet":"Personality recognition plays a pivotal role when developing user-centric solutions such as recommender systems or decision support systems across various domains, including education, e-commerce, or human resources. Traditional machine …","url":["https://cdn.techscience.cn/files/cmc/2024/TSP_CMC-78-2/TSP_CMC_46711/TSP_CMC_46711.pdf"]} {"year":"2024","title":"Perspectives of Global and Hong Kong's Media on China's Belt and Road Initiative","authors":["LC Khoo, A Datta - arXiv preprint arXiv:2312.17013, 2023"],"snippet":"This study delves into the media analysis of China's ambitious Belt and Road Initiative (BRI), which, in a polarized world, and furthermore, owing to the very polarizing nature of the initiative itself, has received both strong criticisms and …","url":["https://arxiv.org/pdf/2312.17013"]} {"year":"2024","title":"PHISHING DETECTION SYSTEM THROUGH MACHINE LEARNING BASED ON URL","authors":["PU Chandra, TV Teja, VN Reddy, S Agarwalla"],"snippet":"… Legitimate site data is often gathered from Alexa's top-ranking website database or through common-crawl. 
Additionally, publicly available datasets like those from the UCI machine learning repository, containing over 11,000 records with 31 …","url":["https://www.irjmets.com/uploadedfiles/paper/issue_3_march_2024/49997/final/fin_irjmets1709462277.pdf"]} {"year":"2024","title":"Phishing Detection Using Hybrid Algorithm Based on Clustering and Machine Learning","authors":["L Al-Shalabi, Y Hasan Jazyah - International Journal of Computing and Digital …, 2024"],"snippet":"Phishing is a prevalent and evolving cyber threat that continues to exploit human vulnerability to deceive individuals and organizations into revealing sensitive information. Phishing attacks encompass a range of tactics, from deceptive emails …","url":["https://journal.uob.edu.bh/bitstream/handle/123456789/5367/1570989057.pdf?sequence=1"]} {"year":"2024","title":"Physics-informed Machine Learning with Uncertainty Quantification","authors":["A Daw - 2024"],"snippet":"Physics Informed Machine Learning (PIML) has emerged as the forefront of research in scientific machine learning with the key motivation of systematically coupling machine learning (ML) methods with prior domain knowledge often available in the …","url":["https://vtechworks.lib.vt.edu/bitstreams/ee8eca5c-29ad-4698-a9ac-f68116d90fe2/download"]} {"year":"2024","title":"PIRB: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods","authors":["S Dadas, M Perełkiewicz, R Poświata - arXiv preprint arXiv:2402.13350, 2024"],"snippet":"We present Polish Information Retrieval Benchmark (PIRB), a comprehensive evaluation framework encompassing 41 text information retrieval tasks for Polish. The benchmark incorporates existing datasets as well as 10 new, previously …","url":["https://arxiv.org/pdf/2402.13350"]} {"year":"2024","title":"Pitfalls of Conversational LLMs on News Debiasing","authors":["IB Schlicht, D Altiok, M Taouk, L Flek - arXiv preprint arXiv:2404.06488, 2024"],"snippet":"This paper addresses debiasing in news editing and evaluates the effectiveness of conversational Large Language Models in this task. We designed an evaluation checklist tailored to news editors' perspectives, obtained generated texts from three …","url":["https://arxiv.org/pdf/2404.06488"]} {"year":"2024","title":"PLAID SHIRTTT for Large-Scale Streaming Dense Retrieval","authors":["D Lawrie, E Kayi, E Yang, J Mayfield, DW Oard - arXiv preprint arXiv:2405.00975, 2024"],"snippet":"… With ten million documents from the news subset of CommonCrawl, NeuCLIR is presently the largest available test collection for MLIR; It is tiny though relative to ClueWeb, and thus has relatively small shards. The NeuCLIR MLIR task is to …","url":["https://arxiv.org/pdf/2405.00975"]} {"year":"2024","title":"Please be polite to your peers: a multi-task model for assessing the tone and objectivity of critiques of peer review comments","authors":["PK Bharti, M Agarwal, A Ekbal - Scientometrics, 2024"],"snippet":"The peer-review process plays a pivotal role in maintaining the quality and credibility of scientific publications. However, in recent times, there has been an increase in unhelpful and overly critical reviews, which can be detrimental to the …","url":["https://link.springer.com/article/10.1007/s11192-024-04938-z"]} {"year":"2024","title":"Politehnica University of Bucharest, Splaiul Independentei Nr. 313, Cector 6, Bucharest, Romania radu. stefan@ live. de, radu. stefan3@ student. upt. 
ro","authors":["R Stefan, G Carutasu, M Mocan - The 17th International Conference Interdisciplinarity in …"],"snippet":"This paper aims to investigate the ethical considerations surrounding the implementation and usage of large language models (LLMs). LLMs have shown tremendous advance in natural language processing and have the potential to …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=u9b9EAAAQBAJ&oi=fnd&pg=PA131&dq=commoncrawl&ots=-M9_Zu0oSH&sig=tr2it1m_D-HFpU5IrX5YZn7SwEY"]} {"year":"2024","title":"Position: Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?","authors":["S Longpre, R Mahari, N Obeng-Marnu, W Brannon…"],"snippet":"… Common Crawl. For pretraining text data, models routinely rely on the Common Crawl.For instance, sixty percent of GPT-3 training data is … Common Crawl is accurate with wide coverage, but not detailed. Hugging Face can be inaccurate with …","url":["https://openreview.net/pdf?id=3hSTecKy1b"]} {"year":"2024","title":"Position: Mission Critical–Satellite Data is a Distinct Modality in Machine Learning","authors":["E Rolf, K Klemmer, C Robinson, H Kerner - Forty-first International Conference on Machine …"],"snippet":"Satellite data has the potential to inspire a seismic shift for machine learning---one in which we rethink existing practices designed for traditional data modalities. As machine learning for satellite data (SatML) gains traction for its real-world impact …","url":["https://openreview.net/pdf?id=PQ0ERKKYJu"]} {"year":"2024","title":"Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI","authors":["F Eiras, A Petrov, B Vidgen, CS de Witt, F Pizzati… - Forty-first International Conference …"],"snippet":"In the next few years, applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education. The potential for these seismic changes has triggered a lively debate about potential …","url":["https://openreview.net/pdf?id=8q4EPdjTLE"]} {"year":"2024","title":"Position: Will we run out of data? Limits of LLM scaling based on human-generated data","authors":["P Villalobos, A Ho, J Sevilla, T Besiroglu, L Heim… - Forty-first International Conference …"],"snippet":"… Assuming that Common Crawl is a representative sample of the indexed web,11 we can use it to estimate the average amount of plain text bytes per web page. This number has increased over time, from around 6100 bytes in 2013 to about 8200 …","url":["https://openreview.net/pdf?id=ViZcgDQjyG"]} {"year":"2024","title":"Post-processing Techniques for Word Embedding","authors":["ZM Albujasim - 2024"],"snippet":"Word embedding has been a significant breakthrough in natural language processing (NLP). 
Although word representation has improved remarkably and resulted in better performance in downstream NLP applications, interpretability of …","url":["https://repository.library.carleton.ca/downloads/g445cf462"]} {"year":"2024","title":"Posthumanist AI","authors":["G SARAH"],"snippet":"… In their now canonical paper on “Stochastic Parrots”, Bender, Gebru, McMillan-Major, and Mitchell note that \"it is easy to imagine that very large datasets, such as Common Crawl “…petabytes of data collected over 8 years of web crawling” a …","url":["https://generativeaiandhci.github.io/papers/2024/genaichi2024_32.pdf"]} {"year":"2024","title":"Pre-Trained Acoustic-and-Textual Modeling for End-To-End Speech-To-Text Translation","authors":["W Zhang, H Zhang, C Liu, Z Ye, X Zhou, C Lin, L Dai - ICASSP 2024-2024 IEEE …, 2024"],"snippet":"… The monolingual data are mainly from News Commentary, News crawl and Common Crawl. We apply language identification [23] to retain sentences predicted as desired language, remove sentences longer than 250 tokens and with a source/target …","url":["https://ieeexplore.ieee.org/abstract/document/10446635/"]} {"year":"2024","title":"Pre-trained language models in medicine: A survey","authors":["X Luo, Z Deng, B Yang, MY Luo - Artificial Intelligence in Medicine, 2024"],"snippet":"With the rapid progress in Natural Language Processing (NLP), Pre-trained Language Models (PLM) such as BERT, BioBERT, and ChatGPT have shown great potential in various medical NLP tasks. This paper surveys the cutting-edge …","url":["https://www.sciencedirect.com/science/article/pii/S0933365724001465"]} {"year":"2024","title":"Pre-trained Large Language Models for Financial Sentiment Analysis","authors":["W Luo, D Gong - arXiv preprint arXiv:2401.05215, 2024"],"snippet":"Financial sentiment analysis refers to classifying financial text contents into sentiment categories (eg positive, negative, and neutral). In this paper, we focus on the classification of financial news title, which is a challenging task due to a lack of …","url":["https://arxiv.org/pdf/2401.05215"]} {"year":"2024","title":"Pre-training Small Base LMs with Fewer Tokens","authors":["S Sanyal, S Sanghavi, AG Dimakis - arXiv preprint arXiv:2404.08634, 2024"],"snippet":"We study the effectiveness of a simple approach to develop a small base language model (LM) starting from an existing large base LM: first inherit a few transformer blocks from the larger LM, and then train this smaller model on a very small subset (0.1\\%) …","url":["https://arxiv.org/pdf/2404.08634"]} {"year":"2024","title":"Predicting fraud in MD&A sections using deep learning","authors":["S Velloor Sivasubramanian, D Skillicorn - Journal of Business Analytics, 2024"],"snippet":"Conventional data analytic techniques have been successfully applied to detecting fraud in the Management’s Discussion and Analysis sections of company filings mandated by the SEC. Here, we investigate whether fraud detection can be …","url":["https://www.tandfonline.com/doi/abs/10.1080/2573234X.2024.2342773"]} {"year":"2024","title":"Predicting semantic category of answers for question answering systems using transformers: a transfer learning approach","authors":["S CM, J Prakash, VS Alaparthi - Multimedia Tools and Applications, 2024"],"snippet":"… The model uses a data set of web extracted text that has been created with the help of the Common Crawl and preprocessed to remove short, duplicate, and non-English content, resulting in a corpus called \"Colossal Clean Crawled Corpus\" (C4). 
An …","url":["https://link.springer.com/article/10.1007/s11042-024-18609-x"]} {"year":"2024","title":"Prediction of Author's Profile basing on Fine-Tuning BERT model","authors":["B Bsir, N Khoufi, M Zrigui - Informatica, 2024"],"snippet":"… Indeed, XLMRoBERTa is the multilingual variant of RoBERTa trained with a multilingual MLM on one hundred languages, with more than two terabytes of filtered Common Crawl data.XLM-RoBERTa showed its superiority over BERT by its …","url":["https://informatica.si/index.php/informatica/article/download/4839/2718"]} {"year":"2024","title":"Preemptive Answer\" Attacks\" on Chain-of-Thought Reasoning","authors":["R Xu, Z Qi, W Xu - arXiv preprint arXiv:2405.20902, 2024"],"snippet":"Large language models (LLMs) showcase impressive reasoning capabilities when coupled with Chain-of-Thought (CoT) prompting. However, the robustness of this approach warrants further investigation. In this paper, we introduce a novel scenario …","url":["https://arxiv.org/pdf/2405.20902"]} {"year":"2024","title":"Pretraining and Updating Language-and Domain-specific Large Language Model: A Case Study in Japanese Business Domain","authors":["K Takahashi, T Omi, K Arima, T Ishigaki - arXiv preprint arXiv:2404.08262, 2024"],"snippet":"… , mC4, and Common Crawl. These datasets contribute to general knowledge, which is essential for diverse natural language processing tasks. … the proportion of clean domain-specific data, we aim to mitigate the impact of noisy data sources such …","url":["https://arxiv.org/pdf/2404.08262"]} {"year":"2024","title":"Probability-Aware Word-Confusion-Network-To-Text Alignment Approach for Intent Classification","authors":["E Villatoro-Tello, S Madikeri, B Sharma, D Khalil… - ICASSP 2024-2024 IEEE …, 2024"],"snippet":"… The Language Model (LM) was trained with 34M utterances from publicly available English datasets including People’s Speech, Fisher, Switchboard, AMI, Wikitext103, and subsets of Common Crawl and Reddit datasets. The model was …","url":["https://ieeexplore.ieee.org/abstract/document/10445934/"]} {"year":"2024","title":"Probing and enhancing the reliance of Transformer models on poetic information","authors":["A Abdibayev - 2023"],"snippet":"Transformer models have achieved remarkable success in the widest variety of domains, spanning not just a multitude of tasks within natural language processing, but also those in computer vision, speech, and reinforcement learning. The key to …","url":["https://digitalcommons.dartmouth.edu/cgi/viewcontent.cgi?article=1228&context=dissertations"]} {"year":"2024","title":"Probing Critical Learning Dynamics of PLMs for Hate Speech Detection","authors":["S Masud, MA Khan, V Goyal, MS Akhtar, T Chakraborty - arXiv preprint arXiv …, 2024"],"snippet":"… Each variant is trained on a training corpus from Wikipedia, and Common-Crawl is curated and updated before the date associated with … that are pretrained on regular and latest Internet snapshots obtained via Common Crawl and Wikipedia …","url":["https://arxiv.org/html/2402.02144v1"]} {"year":"2024","title":"Probing Large Language Models for Scalar Adjective Lexical Semantics and Scalar Diversity Pragmatics","authors":["F Lin, D Altshuler, JB Pierrehumbert - arXiv preprint arXiv:2404.03301, 2024"],"snippet":"… 2018) trained on 600B Common Crawl data as a baseline. For the baseline, we add the representations of strongest and weakest adjectives on s as ⃗s, then also rank cos(⃗a, ⃗s) for each a to calculate MRR. 
Here, we only report the results for the …","url":["https://arxiv.org/html/2404.03301v1"]} {"year":"2024","title":"PRODIS-a speech database and a phoneme-based language model for the study of predictability effects in Polish","authors":["Z Malisz, J Foremski, M Kul - arXiv preprint arXiv:2404.10112, 2024"],"snippet":"We present a speech database and a phoneme-level language model of Polish. The database and model are designed for the analysis of prosodic and discourse factors and their impact on acoustic parameters in interaction with predictability effects. The …","url":["https://arxiv.org/pdf/2404.10112"]} {"year":"2024","title":"Prometheus' Digital Fire: The Civic Responsibilities of Artificial Intelligence","authors":["JM Garon - 2024"],"snippet":"[Prometheus], the noble son of Iapetus outwitted [Zeus] and stole the far-seen gleam of unwearying fire in a hollow fennel stalk. And Zeus who thunders on high was stung in spirit, and his dear heart was angered when he saw amongst men the far-seen …","url":["https://kb.osu.edu/bitstreams/b856efe9-db61-4bd8-876c-e1402662d292/download"]} {"year":"2024","title":"Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars","authors":["Z Wu, X Lin, Z Dai, W Hu, Y Shu, SK Ng, P Jaillet… - arXiv preprint arXiv …, 2024"],"snippet":"… [2], it is highly likely that GPT-3 has been trained on common benchmark datasets potentially contained in Common Crawl due to data contamination. Hence, further providing high-quality in-context exemplars could hardly improve the performance of …","url":["https://arxiv.org/pdf/2405.16122"]} {"year":"2024","title":"Prompting Change: ChatGPT's Impact on Digital Humanities Pedagogy–A Case Study in Art History","authors":["Q Guo - International Journal of Humanities and Arts …, 2024"],"snippet":"This article explores the transformative impact of ChatGPT, a Generative Pre-trained Transformers model, on humanities methodologies and its integration into a digital humanities (DH) course. The advent of ChatGPT enhances the capacity of people …","url":["https://www.euppublishing.com/doi/abs/10.3366/ijhac.2024.0321"]} {"year":"2024","title":"PROXYQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models","authors":["H Tan, Z Guo, Z Shi, L Xu, Z Liu, X Li, Y Wang, L Shang… - arXiv preprint arXiv …, 2024"],"snippet":"… Nevertheless, scaling the context window during the pretraining phase remains a daunting task as the computational demands surge quadratically with the length of the attention span, and a majority of texts within standard corpora, like Common …","url":["https://arxiv.org/pdf/2401.15042"]} {"year":"2024","title":"PSILENCE: A Pseudonymization Tool for International Law","authors":["LA Cabrera-Diego, A Gheewala - Proceedings of the Workshop on Computational …, 2024"],"snippet":"Since the announcement of the GDPR, the pseudonymization of legal documents has become a high-priority task in many legal organizations. This means that for making public a document, it is necessary to redact the identity of certain entities …","url":["https://aclanthology.org/2024.caldpseudo-1.4.pdf"]} {"year":"2024","title":"PSQE: Personalized Semantic Query Expansion for user-centric query disambiguation","authors":["O Baumann, M Schoenfeld - 2024"],"snippet":"… Furthermore, they enable explainable intelligent systems, in that the results of querying the embedding for similar terms relate directly to the training set, rather than webscale collections such as Common Crawl. 
To our knowledge, our work is …","url":["https://www.researchsquare.com/article/rs-4178030/latest.pdf"]} {"year":"2024","title":"PSQS: Parallel Semantic Querying Service for Self-describing File Formats","authors":["C Niu, W Zhang, S Byna, Y Chen - 2023 IEEE International Conference on Big Data …, 2023"],"snippet":"… Typically, these word representations are learned from a training dataset such as the Common Crawl web corpus (840 billion tokens, 2.2 million words) or Wikipedia 2014 (6 billion tokens, 400,000 words). However, this assumption does not always …","url":["https://www.computer.org/csdl/proceedings-article/bigdata/2023/10386205/1TUOFedcdIQ"]} {"year":"2024","title":"Public perception of the Chinese president's visit to Saudi Arabia and the China–Arab Summit: sentiment analysis of Arabic tweets","authors":["AAM Hassan - Social Network Analysis and Mining, 2024"],"snippet":"In recent years, China and Saudi Arabia have had frequent exchanges in the political and economic fields, and public opinion evaluation is an essential aspect of evaluating the two countries' current relations. This paper analyzes tweets in Arabic …","url":["https://link.springer.com/article/10.1007/s13278-023-01174-w"]} {"year":"2024","title":"PUBLIC-PRIVATE PARTNERSHIPS IN HEALTH SECTOR INNOVATION: LESSONS FROM AROUND THE WORLD","authors":["CC Ebulue, OR Ebulue, CS Ekesiobi - International Medical Science Research …, 2024"],"snippet":"Public-Private Partnerships (PPPs) have emerged as a crucial mechanism for fostering innovation in the health sector globally. This review encapsulates the lessons learned from diverse PPP models worldwide, highlighting their significance …","url":["https://www.fepbl.com/index.php/imsrj/article/view/1051/1274"]} {"year":"2024","title":"PyLoomer-The Automated Weaving of Python Text","authors":["A Sahani, RB Trivedi, T Singh, SR Goyal - 2024 2nd International Conference on …, 2024"],"snippet":"The introduction of attention mechanisms has evolved NLP methodologies, giving rise to Large Language Models (LLMs) that excel in various sequence-to-sequence tasks like translation, question-answering, summarization, intent, and entity …","url":["https://ieeexplore.ieee.org/abstract/document/10532945/"]} {"year":"2024","title":"Quality Does Matter: A Detailed Look at the Quality and Utility of Web-Mined Parallel Corpora","authors":["S Ranathunga, N de Silva, M Velayuthan, A Fernando… - arXiv preprint arXiv …, 2024"],"snippet":"… 2020) is a dataset created using 68 snapshots of CommonCrawl10. Document … of CommonCrawl. It contains around 4.5 billion parallel sentences across 576 language pairs. In building CCMatrix, it was assumed the aligned sentence could …","url":["https://arxiv.org/pdf/2402.07446"]} {"year":"2024","title":"Quantifying Gender Bias in Arabic Pre-trained Language Models","authors":["W Alrajhi, H Al-Khalifa, AM Al-Salman - IEEE Access, 2024"],"snippet":"The current renaissance in the development of Arabic Pre-trained Language models (APLMs) has yielded significant advancement across many fields. Nevertheless, no study has explored the dimensions of gender bias in these models. It is argued that …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10537164.pdf"]} {"year":"2024","title":"Quantifying Geospatial in the Common Crawl Corpus","authors":["I Ilyankou, M Wang, J Haworth, S Cavazzi - arXiv e-prints, 2024"],"snippet":"… Common Crawl corpus. However, the geospatial content within CC remains largely unexplored, impacting our understanding of LLMs' spatial reasoning.
This paper investigates the prevalence of geospatial data in recent Common Crawl …","url":["https://ui.adsabs.harvard.edu/abs/2024arXiv240604952I/abstract"]} {"year":"2024","title":"Quantifying task-related gaze","authors":["K Walter, M Freeman, P Bex - Attention, Perception, & Psychophysics, 2024"],"snippet":"… For this study, we used the pretrained word vector Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors), which can be obtained at https://nlp.stanford.edu/projects/glove/. To utilize GloVe with the LabelMe images, we used it in conjunction with LASS …","url":["https://link.springer.com/article/10.3758/s13414-024-02883-w"]} {"year":"2024","title":"Quantifying the Importance of Data Alignment in Downstream Model Performance","authors":["K Chawla, A Sahai, M DePavia, S Sundar, B Miranda - 2024"],"snippet":"Contrary to the conventional emphasis on dataset size, we explore the role of data alignment---an often overlooked aspect of data quality---in training capable Large Language Models (LLMs). To do so, we use the Task2Vec-based alignment …","url":["https://openreview.net/pdf?id=pXTDfeBIPj"]} {"year":"2024","title":"Query Obfuscation for Information Retrieval through Differential Privacy","authors":["G Faggioli, N Ferro"],"snippet":"Protecting the privacy of a user querying an Information Retrieval (IR) system is of utmost importance. The problem is exacerbated when the IR system is not cooperative in satisfying the user's privacy requirements. To address this …","url":["https://www.dei.unipd.it/~ferro/papers/2024/ECIR2024-FF.pdf"]} {"year":"2024","title":"Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora","authors":["Z Fei, Y Shao, L Li, Z Zeng, H Yan, X Qiu, D Lin - arXiv preprint arXiv:2401.14624, 2024"],"snippet":"… become increasingly rich, including the Pile, RedPajama, and Common Crawl. To ensure the diversity of data sources for retrieval, we constructed our retrieval corpus based on Common Crawl. Common Crawl is an open-source web crawler project …","url":["https://arxiv.org/pdf/2401.14624"]} {"year":"2024","title":"Query Performance Prediction using Relevance Judgments Generated by Large Language Models","authors":["C Meng, N Arabzadeh, A Askari, M Aliannejadi… - arXiv preprint arXiv …, 2024"],"snippet":"Query performance prediction (QPP) aims to estimate the retrieval quality of a search system for a query without human relevance judgments. Previous QPP methods typically return a single scalar value and do not require the predicted …","url":["https://arxiv.org/html/2404.01012v1"]} {"year":"2024","title":"QuRating: Selecting High-Quality Data for Training Language Models","authors":["A Wettig, A Gupta, S Malik, D Chen - arXiv preprint arXiv:2402.09739, 2024"],"snippet":"… We visualize the distributions for the cluster in CommonCrawl and C4 in Figure 7 in the appendix.
They are similarly encouraging, eg, the clusters associated with cells, protein, gene and energy, climate, species are rated highly on required …","url":["https://arxiv.org/pdf/2402.09739"]} {"year":"2024","title":"RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors","authors":["L Dugan, A Hwang, F Trhlik, JM Ludan, A Zhu, H Xu… - arXiv preprint arXiv …, 2024"],"snippet":"… The model was allegedly trained with a variety of data including the Common Crawl (fil- … Released on July 18th 2023, LLaMA 2 is the successor to the original LLaMA model which was trained on webpages from CommonCrawl, multilingual …","url":["https://arxiv.org/pdf/2405.07940"]} {"year":"2024","title":"Rapidly Developing High-quality Instruction Data and Evaluation Benchmark for Large Language Models with Minimal Human Effort: A Case Study on Japanese","authors":["Y Sun, Z Wan, N Ueda, S Yahata, F Cheng, C Chu… - arXiv preprint arXiv …, 2024"],"snippet":"The creation of instruction data and evaluation benchmarks for serving Large language models often involves enormous human annotation. This issue becomes particularly pronounced when rapidly developing such resources for a non-English …","url":["https://arxiv.org/pdf/2403.03690"]} {"year":"2024","title":"Real-time phishing detection using deep learning methods by extensions","authors":["DM Linh, HD Hung, HM Chau, QS Vu, TN Tran - International Journal of Electrical …, 2024"],"snippet":"… [2] used two algorithms, random forest (RF) and long short-term memory (LSTM), to evaluate the PhishTank and Common Crawl datasets, achieving accuracies of 93.5% and 98.7%, respectively. Similarly, Kumar et al. [3] evaluated using the …","url":["https://www.researchgate.net/profile/Dam-Linh-4/publication/379571132_Real-time_phishing_detection_using_deep_learning_methods_by_extensions/links/660f6efc390c214cfd361601/Real-time-phishing-detection-using-deep-learning-methods-by-extensions.pdf"]} {"year":"2024","title":"Real-World Robot Applications of Foundation Models: A Review","authors":["K Kawaharazuka, T Matsushima, A Gambardella… - arXiv preprint arXiv …, 2024"],"snippet":"… In terms of the data, the datasets for training LLMs are Internet-scale; for example, the pre-training of GPT-3 [47] uses CommonCrawl covering Internet data from years 2016 to 2019, which amounts to 45TB of compressed plain text and 410 billion …","url":["https://arxiv.org/pdf/2402.05741"]} {"year":"2024","title":"Reasoning for fact verification using language models","authors":["M Kanaani - 2024"],"snippet":"In response to the proliferation of misinformation on social media platforms, this thesis introduces the Triple-R framework (Retriever, Ranker, Reasoner) to enhance fact-checking by leveraging the Web for evidence retrieval and generating …","url":["https://unbscholar.lib.unb.ca/bitstreams/4202b462-0a7a-47d2-b52e-927c6384ae44/download"]} {"year":"2024","title":"Recent Advances in Large Language Models for Healthcare","authors":["K Nassiri, MA Akhloufi - BioMedInformatics, 2024"],"snippet":"Recent advances in the field of large language models (LLMs) underline their high potential for applications in a variety of sectors. Their use in healthcare, in particular, holds out promising prospects for improving medical practices. 
As we highlight in …","url":["https://www.mdpi.com/2673-7426/4/2/62"]} {"year":"2024","title":"Recent advances in text embedding: A Comprehensive Review of Top-Performing Methods on the MTEB Benchmark","authors":["CAO Hongliu","H Cao - arXiv preprint arXiv:2406.01607, 2024"],"snippet":"… data: (post, comment) pairs from Reddit3, (question, upvoted answer) pairs from Stackexchange4, (entity name + section title, passage) pairs from English Wikipedia, (title, abstract) and citation pairs from Scientific papers [53], and (title, passage) pairs …","url":["https://arxiv.org/pdf/2406.01607","https://www.researchgate.net/profile/Hongliu-Cao/publication/380127334_Recent_advances_in_text_embedding_A_Comprehensive_Review_of_Top-Performing_Methods_on_the_MTEB_Benchmark/links/662becdf06ea3d0b740fe4c9/Recent-advances-in-text-embedding-A-Comprehensive-Review-of-Top-Performing-Methods-on-the-MTEB-Benchmark.pdf"]} {"year":"2024","title":"Recognizing Textual Inference in Mongolian Bar Exam Questions","authors":["G Khaltarkhuu, B Batjargal, A Maeda - Applied Sciences, 2024"],"snippet":"This paper examines how to apply deep learning techniques to Mongolian bar exam questions. Several approaches that utilize eight different fine-tuned transformer models were demonstrated for recognizing textual inference in Mongolian bar exam …","url":["https://www.mdpi.com/2076-3417/14/3/1073"]} {"year":"2024","title":"Referring expression segmentation: from conventional to generalized","authors":["C Liu - 2024"],"snippet":"In recent years, many remarkable achievements have been made in the field of deep machine learning in various data modalities, such as image processing and natural language comprehension. Based on the good performance of deep neural …","url":["https://dr.ntu.edu.sg/bitstream/10356/175477/2/Thesis_Liu%20Chang_submit_compress.pdf"]} {"year":"2024","title":"Reliability Estimation of News Media Sources: Birds of a Feather Flock Together","authors":["S Burdisso, D Sánchez-Cortés, E Villatoro-Tello…"],"snippet":"Evaluating the reliability of news sources is a routine task for journalists and organizations committed to acquiring and disseminating accurate information. Recent research has shown that predicting sources' reliability represents an …","url":["https://publications.idiap.ch/attachments/papers/2024/Burdisso_NAACL_2024.pdf"]} {"year":"2024","title":"Reliability of large language models in managing odontogenic sinusitis clinical scenarios: a preliminary multidisciplinary evaluation","authors":["AM Saibene, F Allevi, C Calvo-Henriquez, A Maniaci… - European Archives of Oto …, 2024"],"snippet":"Purpose This study aimed to evaluate the utility of large language model (LLM) artificial intelligence tools, Chat Generative Pre-Trained Transformer (ChatGPT) versions 3.5 and 4, in managing complex otolaryngological clinical scenarios …","url":["https://link.springer.com/article/10.1007/s00405-023-08372-4"]} {"year":"2024","title":"ReLU^2 Wins: Discovering Efficient Activation Functions for Sparse LLMs","authors":["Z Zhang, Y Song, G Yu, X Han, Y Lin, C Xiao, C Song… - arXiv preprint arXiv …, 2024"],"snippet":"… For the corpus used for calculating the sparsity statistics, we use a mixture of texts from Wikipedia, Pile, and Common Crawl. The total number of tokens is about 217.
Since we need to store the activation values of all neurons, we only use a part of the …","url":["https://arxiv.org/pdf/2402.03804"]} {"year":"2024","title":"Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling","authors":["P Maini, S Seto, H Bai, D Grangier, Y Zhang, N Jaitly - arXiv preprint arXiv …, 2024"],"snippet":"… There are different versions of CommonCrawl data and our selection of C4 for pretraining is driven by its size and quality. … The dataset is also derived from the CommonCrawl, however has a more stringent filtering process. Our …","url":["https://arxiv.org/pdf/2401.16380"]} {"year":"2024","title":"Representation Learning for Hierarchical Classification of Entity Titles","authors":["E Chistova - Proceedings of the 6th International Conference on …, 2023"],"snippet":"We present a method for effective title encoding for hierarchical classification in a large taxonomy. The method enables taxonomy-aware encoding in pre-trained text encoders, such as fastText and BERT, which are additionally fine-tuned for the …","url":["https://aclanthology.org/2023.icnlsp-1.4.pdf"]} {"year":"2024","title":"Representing Online Handwriting for Recognition in Large Vision-Language Models","authors":["A Fadeeva, P Schlattner, A Maksai, M Collier… - arXiv preprint arXiv …, 2024"],"snippet":"The adoption of tablets with touchscreens and styluses is increasing, and a key feature is converting handwriting to text, enabling search, indexing, and AI assistance. Meanwhile, vision-language models (VLMs) are now the go-to solution …","url":["https://arxiv.org/pdf/2402.15307"]} {"year":"2024","title":"Representing relational knowledge with language models","authors":["A Ushio - 2024"],"snippet":"Relational knowledge is the ability to recognize the relationship between instances, and it has an important role in human understanding a concept or commonsense reasoning. We, humans, structure our knowledge by understanding individual …","url":["https://orca.cardiff.ac.uk/id/eprint/168141/1/thesis_clean.pdf"]} {"year":"2024","title":"Research and Application of Large Model-Based Intelligent Customer Service System","authors":["Y Xi - International Journal of Emerging Technologies and …, 2024"],"snippet":"With the rapid development of artificial intelligence technology, intelligent customer service systems have been widely used. This paper addresses the limitations of traditional intelligent customer service systems, such as limited language …","url":["https://www.ijetaa.com/article/download/114/15"]} {"year":"2024","title":"Research on cross-lingual multi-label patent classification based on pre-trained model","authors":["Y Lu, L Chen, X Tong, Y Peng, H Zhu - Scientometrics, 2024"],"snippet":"Patent classification is an important part of the patent examination and management process. Using efficient and accurate automatic patent classification can significantly improve patent retrieval performance. Current monolingual patent classification …","url":["https://link.springer.com/article/10.1007/s11192-024-05024-0"]} {"year":"2024","title":"ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models","authors":["J Baek, SK Jauhar, S Cucerzan, SJ Hwang - arXiv preprint arXiv:2404.07738, 2024"],"snippet":"Scientific Research, vital for improving human life, is hindered by its inherent complexity, slow pace, and the need for specialized experts.
To enhance its productivity, we propose a ResearchAgent, a large language model-powered …","url":["https://arxiv.org/pdf/2404.07738"]} {"year":"2024","title":"Retrieval-Based Grammatical Error Correction","authors":["JR Vasselli - 2024"],"snippet":"… a cleaned version of the common crawl to produce nearly 200 million ungrammatical sentences with a realistic distribution of errors. … CWEB consists of randomly selected texts from the first 18 dumps of the Common Crawl, which include …","url":["https://naist.repo.nii.ac.jp/record/2000298/files/R018806.pdf"]} {"year":"2024","title":"Revisión de modelos de lenguaje y aplicación a la automatización de tareas de oficina en empresas industriales","authors":["J Zamora Fraguas - 2023"],"snippet":"This project focuses on the evaluation and application of Large Language Models (LLMs) in natural language processing (NLP). The technical properties, advantages, and disadvantages of LLMs are analysed, highlighting their wide use in machine …","url":["https://repositorio.comillas.edu/xmlui/bitstream/handle/11531/78033/TFM_ZamoraFraguas%2CJacobo%20.pdf?sequence=1"]} {"year":"2024","title":"Revisiting Adversarial Training at Scale","authors":["Z Wang, X Li, H Zhu, C Xie - arXiv preprint arXiv:2401.04727, 2024"],"snippet":"… DataComp-1B is a more recent dataset with about 1.3B samples filtered from a candidate pool of 12.8B image-text pairs from Common Crawl, which has been recorded to yield superior performance for contrastive training. To summarize, our …","url":["https://arxiv.org/pdf/2401.04727"]} {"year":"2024","title":"Revisiting Joke Comprehension with Surprisal and Contextual Similarity: Implication from N400 and P600 Components","authors":["H Xu, M Nakanishi, S Coulson - Proceedings of the Annual Meeting of the Cognitive …, 2024"],"snippet":"… Specifically, we used the version with a 2.2 million word vocabulary and 300-dimensional vectors trained on 840 billion tokens from the Common Crawl corpus. The contextual vector of all the words preceding the final word in each sentence was computed by …","url":["https://escholarship.org/content/qt01n9j76q/qt01n9j76q_noSplash_25244bf314716cf3a964d313018efad3.pdf"]} {"year":"2024","title":"Revisiting Pre-training of Embedding Layers in Transformer-based Neural Machine Translation","authors":["M Neishi, N Yoshinaga - Journal of Natural Language Processing, 2024"],"snippet":"Recent trends in the pre-training and fine-tuning paradigm have made significant advances in several natural language processing tasks, including machine translation (MT), particularly for low-resource situations. However, it is reported that …","url":["https://www.jstage.jst.go.jp/article/jnlp/31/2/31_534/_pdf"]} {"year":"2024","title":"Rho-1: Not All Tokens Are What You Need","authors":["Z Lin, Z Gou, Y Gong, X Liu, Y Shen, R Xu, C Lin… - arXiv preprint arXiv …, 2024"],"snippet":"Previous language model pre-training methods have uniformly applied a next-token prediction loss to all training tokens. Challenging this norm, we posit that \"Not all tokens in a corpus are equally important for language model training\". 
Our initial …","url":["https://arxiv.org/pdf/2404.07965"]} {"year":"2024","title":"RoBERTaNET: Enhanced RoBERTa Transformer Based Model for Cyberbullying Detection with GloVe Features","authors":["F Alrowais, AA Jamjoom, H Karamti, M Umer, S Alsubai… - IEEE Access, 2024"],"snippet":"… 3) FastText FastText is a word illustration library, that encompasses 300dimensional 2 million common crawl words, resulting in 600 billion word vectors [31]. Particularly, hand-crafted ngrams are used as features alongside solo words. Its straightforward …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10495045.pdf"]} {"year":"2024","title":"Robust Neural Machine Translation for Abugidas by Glyph Perturbation","authors":["H Kaing, C Ding, H Tanaka, M Utiyama - Proceedings of the 18th Conference of the …, 2024"],"snippet":"Neural machine translation (NMT) systems are vulnerable when trained on limited data. This is a common scenario in low-resource tasks in the real world. To increase robustness, a solution is to intently add realistic noise in the training phase. Noise …","url":["https://aclanthology.org/2024.eacl-short.27.pdf"]} {"year":"2024","title":"RoCode: A Dataset for Measuring Code Intelligence from Problem Definitions in Romanian","authors":["A Cosma, B Iordache, P Rosso - arXiv preprint arXiv:2402.13222, 2024"],"snippet":"… Even if code is present in the dataset (from Common Crawl), it is described in English and comments and documentation are in English. The same argument can be made for mathematics and other scientific disciplines. A large, highly curated and …","url":["https://arxiv.org/pdf/2402.13222"]} {"year":"2024","title":"ROME: Memorization Insights from Text, Probability and Hidden State in Large Language Models","authors":["B Li, Q Zhao, L Wen - arXiv preprint arXiv:2403.00510, 2024"],"snippet":"Probing the memorization of large language models holds significant importance. Previous works have established metrics for quantifying memorization, explored various influencing factors, such as data duplication, model size, and prompt length …","url":["https://arxiv.org/pdf/2403.00510"]} {"year":"2024","title":"RU22Fact: Optimizing Evidence for Multilingual Explainable Fact-Checking on Russia-Ukraine Conflict","authors":["Y Zeng, X Ding, Y Zhao, X Li, J Zhang, C Yao, T Liu… - arXiv preprint arXiv …, 2024"],"snippet":"Fact-checking is the task of verifying the factuality of a given claim by examining the available evidence. High-quality evidence plays a vital role in enhancing fact-checking systems and facilitating the generation of explanations that are understandable to …","url":["https://arxiv.org/html/2403.16662v1"]} {"year":"2024","title":"RuBia: A Russian Language Bias Detection Dataset","authors":["V Grigoreva, A Ivanova, I Alimova, E Artemova - arXiv preprint arXiv:2403.17553, 2024"],"snippet":"Warning: this work contains upsetting or disturbing content. Large language models (LLMs) tend to learn the social and cultural biases present in the raw pre-training data. To test if an LLM's behavior is fair, functional datasets are employed, and due to their …","url":["https://arxiv.org/pdf/2403.17553"]} {"year":"2024","title":"Sailor: Open Language Models for South-East Asia","authors":["L Dou, Q Liu, G Zeng, J Guo, J Zhou, W Lu, M Lin - arXiv preprint arXiv:2404.03608, 2024"],"snippet":"We present Sailor, a family of open language models ranging from 0.5B to 7B parameters, tailored for South-East Asian (SEA) languages. 
These models are continually pre-trained from Qwen1.5, a great language model for multilingual use …","url":["https://arxiv.org/pdf/2404.03608"]} {"year":"2024","title":"Sampling-based Pseudo-Likelihood for Membership Inference Attacks","authors":["M Kaneko, Y Ma, Y Wata, N Okazaki - arXiv preprint arXiv:2404.11262, 2024"],"snippet":"… 2023): LLaMA-2 employs English CommonCrawl, C4, Github, Wikipedia, Books, ArXiv, and StackExchange as pre-training datasets. … or filtered to contain predominantly English text, but a small amount of non-English data is still present …","url":["https://arxiv.org/pdf/2404.11262"]} {"year":"2024","title":"Scaffold-BPE: Enhancing Byte Pair Encoding with Simple and Effective Scaffold Token Removal","authors":["H Lian, Y Xiong, J Niu, S Mo, Z Su, Z Lin, P Liu, H Chen… - arXiv preprint arXiv …, 2024"],"snippet":"… The Pile is composed of 22 diverse and high-quality datasets, the models trained on which significantly outperform both raw and filtered Common Crawl [10] models. The data distribution for our model training is identical to those described in the …","url":["https://arxiv.org/pdf/2404.17808"]} {"year":"2024","title":"Scalable Distributed String Sorting","authors":["F Kurpicz, P Mehnert, P Sanders, M Schimek - arXiv preprint arXiv:2404.16517, 2024"],"snippet":"… While different sampling and group assignment techniques may improve character balance, the CommonCrawl data set contains many duplicated strings (eg, standard legal disclaimers) which are always assigned to a single PE and cannot be …","url":["https://arxiv.org/pdf/2404.16517"]} {"year":"2024","title":"Scalable Pre-training of Large Autoregressive Image Models","authors":["A El-Nouby, M Klein, S Zhai, MA Bautista, A Toshev… - arXiv preprint arXiv …, 2024"],"snippet":"This paper introduces AIM, a collection of vision models pre-trained with an autoregressive objective. These models are inspired by their textual counterparts, ie, Large Language Models (LLMs), and exhibit similar scaling properties. Specifically …","url":["https://arxiv.org/pdf/2401.08541"]} {"year":"2024","title":"Scalable Range Search over Temporal and Numerical Expressions","authors":["V Ohr, D Gupta - The 10th ACM SIGIR/The 14th International …"],"snippet":"… a subset of Common Crawl amounting to … Common Crawl, published as C4 [30]. This document collection consists of approximately 13.81 million documents. The fourth and final document collection we use is the entire C4-En corpus derived from …","url":["https://openreview.net/pdf?id=84vTskBRPU"]} {"year":"2024","title":"Scaling Laws for Data Filtering--Data Curation cannot be Compute Agnostic","authors":["S Goyal, P Maini, ZC Lipton, A Raghunathan, JZ Kolter - arXiv preprint arXiv …, 2024"],"snippet":"… As we train for longer, the accuracy gains using the LAION-filtering subset that filters the common crawl to 10% of its initial size plateau. Surprisingly, even no-filtering of the common crawl is better than the popular LAION-filtering after seeing more …","url":["https://arxiv.org/pdf/2404.07177"]} {"year":"2024","title":"Scaling neural machine translation to 200 languages","authors":["Nature, 2024"],"snippet":"… To overcome this problem, we created training datasets through global bitext mining in publicly available web content (drawn from repositories such as CommonCrawl). 
The underlying idea of our bitext mining approach is first to learn a …","url":["https://www.nature.com/articles/s41586-024-07335-x"]} {"year":"2024","title":"sCambiaMenti at ACTI: Ensemble model with majority voting for Automatic Conspiracy Detection","authors":["S Bianco, D Salusso - EVALITA Proceedings of the Eighth Evaluation …, 2024"],"snippet":"In this paper we describe the methodology that we have implemented to solve the subtask A of the Automatic Conspiracy Detection (ACTI) task (EVALITA 2023). We have developed different classifiers and then used a majority voting approach to …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=ZLfuEAAAQBAJ&oi=fnd&pg=PA271&dq=commoncrawl&ots=BBnDdqOBAj&sig=Hn5VYj3sI8L2xkD1qqoIQsmOQOw"]} {"year":"2024","title":"SchemaPile: A Large Collection of Relational Database Schemas","authors":["T Döhmen, R Geacu, M Hulsebos, S Schelter - … of the ACM on Management of Data, 2024"],"snippet":"… such as the aforementioned code dataset The Stack [25] or web crawls such as CommonCrawl [12] as alternative data sources. Furthermore, enterprises could adapt our data … Common Crawl – a free, open repository of web crawl data that …","url":["https://dl.acm.org/doi/pdf/10.1145/3654975"]} {"year":"2024","title":"Scoping: Towards Streamlined Entity Collections for Multi-Sourced Entity Resolution with Self-Supervised Agents","authors":["L Traeger, A Behrend, G Karabatis"],"snippet":"Linking multiple entities to a real-world object is a time-consuming and error-prone task. Entity Resolution (ER) includes techniques for vectorizing entities (signature), grouping similar entities into partitions (blocking), and matching entity pairs based …","url":["https://www.scitepress.org/Papers/2024/126075/126075.pdf"]} {"year":"2024","title":"SD-WEAT: Towards Robustly Measuring Bias in Input Embeddings","authors":["M Gray, L Wu - International Conference on Human-Computer …, 2024"],"snippet":"… 300d” from the Stanford NLP Group [9], which was trained on 840 billion tokens from Common Crawl data and generates 300-dimension vectors. Moreover, other input embedding methods, including those of BERT [10, 11], SciBERT [12, 13], and …","url":["https://link.springer.com/chapter/10.1007/978-3-031-62110-9_5"]} {"year":"2024","title":"Second Language (Arabic) Acquisition of LLMs via Progressive Vocabulary Expansion","authors":["J Zhu, H Hunag, Z Lin, J Liang, Z Tang, K Almubarak…"],"snippet":"This paper addresses the critical need for democratizing large language models (LLM) in the Arab world, a region that has seen slower progress in developing models comparable to state-of-the-art offerings like GPT-4 or ChatGPT 3.5, due to a …","url":["https://huggingface.co/FreedomIntelligence/AceGPT-v1.5-13B-Chat/resolve/main/Second_Language_(Arabic)_Acquisition_of_LLMs_via_Progressive_Vocabulary_Expansion.pdf"]} {"year":"2024","title":"Secret Collusion Among Generative AI Agents","authors":["SR Motwani, M Baranchuk, M Strohmeier, V Bolina… - arXiv preprint arXiv …, 2024"],"snippet":"Recent capability increases in large language models (LLMs) open up applications in which teams of communicating generative AI agents solve joint tasks. 
This poses privacy and security challenges concerning the unauthorised sharing of information …","url":["https://arxiv.org/pdf/2402.07510"]} {"year":"2024","title":"SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents","authors":["K Cheng, Q Sun, Y Chu, F Xu, Y Li, J Zhang, Z Wu - arXiv preprint arXiv:2401.10935, 2024"],"snippet":"… We collected approximately 300k web pages from the latest Common Crawl repository 2 to serve as our training data for web UI. For each webpage, we collect two types of elements from the HTML code: (1) elements that display visible text …","url":["https://arxiv.org/pdf/2401.10935"]} {"year":"2024","title":"Seeds of Stereotypes: A Large-Scale Textual Analysis of Race and Gender Associations with Diseases in Online Sources","authors":["L Hyldig Hansen, N Andersen, J Gallifant, LG McCoy… - arXiv e-prints, 2024","LH Hansen, N Andersen, J Gallifant, LG McCoy… - arXiv preprint arXiv …, 2024"],"snippet":"… Methods We conducted a large-scale textual analysis using a dataset comprising diverse web sources, including Arxiv, Wikipedia, and Common Crawl. The study analyzed the context in which various diseases are discussed alongside markers of …","url":["https://arxiv.org/pdf/2405.05049","https://ui.adsabs.harvard.edu/abs/2024arXiv240505049H/abstract"]} {"year":"2024","title":"Selecting Large Language Model to Fine-tune via Rectified Scaling Law","authors":["H Lin, B Huang, H Ye, Q Chen, Z Wang, S Li, J Ma… - arXiv preprint arXiv …, 2024"],"snippet":"The ever-growing ecosystem of LLMs has posed a challenge in selecting the most appropriate pre-trained model to fine-tune amidst a sea of options. Given constrained resources, fine-tuning all models and making selections afterward is …","url":["https://arxiv.org/pdf/2402.02314"]} {"year":"2024","title":"Selective Metric Differential Privacy for Language Models","authors":["A Maratkhan - 2023"],"snippet":"… We used a pre-trained GloVe model trained on Common Crawl data with 840B tokens with a 2.2M vocabulary size for the privatization mechanism. We chose to utilize this model because it has a large vocabulary and is case-sensitive. Since GPT-2 …","url":["https://digital.lib.washington.edu/researchworks/bitstream/handle/1773/51067/Maratkhan_washington_0250O_26461.pdf?sequence=1"]} {"year":"2024","title":"Self-Augmented In-Context Learning for Unsupervised Word Translation","authors":["Y Li, A Korhonen, I Vulić - arXiv preprint arXiv:2402.10024, 2024"],"snippet":"… 2017): the version pretrained on Wikipedia7 is used for XLING and the version pretrained with Wikipedia plus Common Crawl8 is used for PanLex-BLI, as recommended by XLING and PanLex-BLI, respectively. The same WEs are used for …","url":["https://arxiv.org/pdf/2402.10024"]} {"year":"2024","title":"Semantic-guided spatio-temporal attention for few-shot action recognition","authors":["J Wang, B Liu - Applied Intelligence, 2024"],"snippet":"… We utilize GloVe [49] to extract text embeddings for action labels, which is pre-trained on Common Crawl dataset of 840B tokens. For labels with multiple words, we average the embeddings of all their words. 
Finally, one action label is represented …","url":["https://link.springer.com/article/10.1007/s10489-024-05294-4"]} {"year":"2024","title":"Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts","authors":["R Lamsal, MR Read, S Karunasekera - arXiv preprint arXiv:2403.16614, 2024"],"snippet":"Tasks such as semantic search and clustering on crisis-related social media texts enhance our comprehension of crisis discourse, aiding decision-making and targeted interventions. Pre-trained language models have advanced performance in …","url":["https://arxiv.org/pdf/2403.16614"]} {"year":"2024","title":"SeNSe: embedding alignment via semantic anchors selection","authors":["L Malandri, F Mercorio, M Mezzanzanica, F Pallucchini - International Journal of Data …, 2024"],"snippet":"Word embeddings have proven extremely useful across many NLP applications in recent years. Several key linguistic tasks, such as machine translation and transfer learning, require comparing distributed representations of words belonging to …","url":["https://link.springer.com/article/10.1007/s41060-024-00522-z"]} {"year":"2024","title":"SENTA: Sentence Simplification System for Slovene","authors":["A Žagar, M Klemen, M Robnik-Šikonja, I Kosem - Proceedings of the 2024 Joint …, 2024"],"snippet":"… It uses sentence-level paraphrase data obtained by mining paraphrases from Common Crawl in combination with controllable generation mechanisms to adjust the length and lexical complexity of a simplified candidate. …","url":["https://aclanthology.org/2024.lrec-main.1279.pdf"]} {"year":"2024","title":"Sentence Embeddings for Massively Multilingual Speech and Text Processing","authors":["PA Duquenne - 2024"],"snippet":"… CommonCrawl raw text data corresponds to partial snapshots of the internet totalling terabytes of text data extracted from web pages in many languages. The authors used the open-source FAISS library (Johnson et al. 2019) in order to …","url":["https://theses.hal.science/tel-04573934/document"]} {"year":"2024","title":"Sentence Level Analysis Model for Phishing Detection Using KNN","authors":["L Sawe, J Gikandi, J Kamau, D Njuguna"],"snippet":"Phishing emails have experienced a rapid surge in cyber threats globally, especially following the emergence of the COVID-19 pandemic. This form of attack has led to substantial financial losses for numerous organizations. Although various models …","url":["https://www.researchgate.net/profile/Lindah-Sawe/publication/377242106_Sentence_Level_Analysis_Model_for_Phishing_Detection_Using_KNN/links/659d3bd62468df72d305fd90/Sentence-Level-Analysis-Model-for-Phishing-Detection-Using-KNN.pdf"]} {"year":"2024","title":"Sentiment analysis on the issue of COVID-19 vaccination using LSTM and fasttext","authors":["WN Dewani, Y Azhar, N Hayatin - AIP Conference Proceedings, 2024"],"snippet":"… The fasttext model is the result of a word vector previously trained from Common Crawl and Indonesian language Wikipedia data from Facebook. This model is trained using the Continuous Bag of Words (CBOW) method with position weight [13] …","url":["https://pubs.aip.org/aip/acp/article/2927/1/060017/3279268"]} {"year":"2024","title":"SETEM: Self-ensemble training with Pre-trained Language Models for Entity Matching","authors":["H Ding, C Dai, Y Wu, W Ma, H Zhou - Knowledge-Based Systems, 2024"],"snippet":"… This benchmark is constructed by extracting schema.org data from the Common Crawl and using the following attributes: brand, title, currency and price. 
The WDC Products benchmark consists of overall nine training, nine validation and nine test …","url":["https://www.sciencedirect.com/science/article/pii/S0950705124003435"]} {"year":"2024","title":"Shapley Values-Powered Framework for Fair Reward Split in Content Produced by GenAI","authors":["A Glinsky, A Sokolsky - arXiv preprint arXiv:2403.09700, 2024"],"snippet":"… DATACOMP introduces a new candidate pool of 12.8 billion image-text pairs sourced from Common Crawl, which is broader than the collection in LAION-5B. This extensive pool offers a more diverse base for dataset creation. Unlike LAION-5B …","url":["https://arxiv.org/pdf/2403.09700"]} {"year":"2024","title":"SIGN LANGUAGE RECOGNITION OF WORDS AND SENTENCE PREDICTION USING LSTM AND NLP","authors":["R GAIKWAD, L ADMUTHE - Journal of Theoretical and Applied Information …, 2024"],"snippet":"… The dataset used for the training of the T5 model is the C4 dataset called as Colossal Clean Crawled Corpus which is collected from Common Crawl, a publicly available web archive. It consists of 750 GB clean English text scraped from the …","url":["http://www.jatit.org/volumes/Vol102No4/20Vol102No4.pdf"]} {"year":"2024","title":"Silencing the Risk, Not the Whistle: A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification","authors":["D Staufer, F Pallas, B Berendt - arXiv preprint arXiv:2405.01097, 2024"],"snippet":"Whistleblowing is essential for ensuring transparency and accountability in both public and private sectors. However, (potential) whistleblowers often fear or face retaliation, even when reporting anonymously. The specific content of their …","url":["https://arxiv.org/pdf/2405.01097"]} {"year":"2024","title":"SimLESS: A Secure Deduplication System over Similar Data in Cloud Media Sharing","authors":["M Song, Z Hua, Y Zheng, T Xiang, X Jia - IEEE Transactions on Information Forensics …, 2024"],"snippet":"With the growing popularity of cloud computing, sharing media data through the cloud has become a common practice. Due to high information redundancy, media data take up a significant amount of storage space. Moreover, similar media data …","url":["https://ieeexplore.ieee.org/abstract/document/10480626/"]} {"year":"2024","title":"Simple and Scalable Strategies to Continually Pre-train Large Language Models","authors":["A Ibrahim, B Thérien, K Gupta, ML Richter, Q Anthony… - arXiv preprint arXiv …, 2024"],"snippet":"… We then generate a fixed token-length response for each of the models trained German Common Crawl. As a baseline, we also evaluate … of German-language outputs from the models trained on German Common Crawl when compared to the …","url":["https://arxiv.org/pdf/2403.08763"]} {"year":"2024","title":"Simulating Policy Impacts: Developing a Generative Scenario Writing Method to Evaluate the Perceived Effects of Regulation","authors":["J Barnett, K Kieslich, N Diakopoulos - arXiv preprint arXiv:2405.09679, 2024"],"snippet":"… They utilize proprietary datasets as well as opensource web data such as the continuously growing Common Crawl, which contains more than 250 billion pages of over a decade’s worth of textual content. 
While these models primarily learn how …","url":["https://arxiv.org/pdf/2405.09679"]} {"year":"2024","title":"Slight Corruption in Pre-training Data Makes Better Diffusion Models","authors":["H Chen, Y Han, D Misra, X Li, K Hu, D Zou… - arXiv preprint arXiv …, 2024"],"snippet":"… For example, Stable Diffusion [26] was pre-trained on LAION-2B [17], which contains billion-scale image-text pairs collected from Common Crawl [27]. Despite the heavy filtering mechanisms used in collecting pre-training datasets [17, 28], they …","url":["https://arxiv.org/pdf/2405.20494"]} {"year":"2024","title":"Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs","authors":["J Tan, Z Dou, Y Zhu, P Guo, K Fang, JR Wen - arXiv preprint arXiv:2402.12052, 2024"],"snippet":"The integration of large language models (LLMs) and search engines represents a significant evolution in knowledge acquisition methodologies. However, determining the knowledge that an LLM already possesses and the knowledge that requires the …","url":["https://arxiv.org/pdf/2402.12052"]} {"year":"2024","title":"Smaller Language Models are Better Zero-shot Machine-Generated Text Detectors","authors":["N Mireshghallah, J Mattern, S Gao, R Shokri… - Proceedings of the 18th …, 2024"],"snippet":"As large language models are becoming more embedded in different user-facing services, it is important to be able to distinguish between human-written and machine-generated text to verify the authenticity of news articles, product reviews, etc. Thus, in this paper …","url":["https://aclanthology.org/2024.eacl-short.25.pdf"]} {"year":"2024","title":"Smart Automation Using LLM","authors":["P Ethape, R Kane, G Gadekar, S Chimane - International Research Journal of …, 2023"],"snippet":"… Such large-scale models can ingest massive amounts of data, often from the internet, but also from sources such as the Common Crawl, which comprises more than 50 billion web pages, and Wikipedia, which has approximately 57 million pages. …","url":["https://search.proquest.com/openview/1c46974bac0d8d641a04a889011c8299/1?pq-origsite=gscholar&cbl=5314840"]} {"year":"2024","title":"Smart Bilingual Focused Crawling of Parallel Documents","authors":["C García-Romero, M Esplà-Gomis, F Sánchez-Martínez - arXiv preprint arXiv …, 2024"],"snippet":"Crawling parallel texts $\\unicode{x2014}$texts that are mutual translations$\\unicode{x2014}$ from the Internet is usually done following a brute-force approach: documents are massively downloaded in an unguided process, and only a fraction of them end up …","url":["https://arxiv.org/pdf/2405.14779"]} {"year":"2024","title":"Smoky Quartz: A Transparent Bilingual Large Language Model","authors":["Q Huang, C Zhang, M Tao, J Lin, S Huang, Y Feng"],"snippet":"Recent advancements in large-scale models have been remarkable; however, a significant challenge persists in their lack of transparency, particularly concerning the training data utilized. This obscurity raises concerns about potential data leaks …","url":["https://andrewzhe.github.io/figs/A_Transparent_Binlingual_LLM.pdf"]} {"year":"2024","title":"SMW Cloud: A Corpus of Domain-Specific Knowledge Graphs from Semantic MediaWikis","authors":["D Dobriy, M Beno, A Polleres"],"snippet":"Semantic wikis have become an increasingly popular means of collaboratively managing Knowledge Graphs. 
They are powered by platforms such as Semantic MediaWiki and Wikibase, both of which enable MediaWiki to store and publish …","url":["https://2024.eswc-conferences.org/wp-content/uploads/2024/04/146640477.pdf"]} {"year":"2024","title":"Social Evolution of Published Text and The Emergence of Artificial Intelligence Through Large Language Models and The Problem of Toxicity and Bias","authors":["A Khan, P Saravanan, SK Venkatesan - arXiv preprint arXiv:2402.07166, 2024"],"snippet":"We provide a birds eye view of the rapid developments in AI and Deep Learning that has led to the path-breaking emergence of AI in Large Language Models. The aim of this study is to place all these developments in a pragmatic broader historical social …","url":["https://arxiv.org/pdf/2402.07166"]} {"year":"2024","title":"Sociolinguistically Informed Interpretability: A Case Study on Hinglish Emotion Classification","authors":["K Tatariya, H Lent, J Bjerva, M de Lhoneux - arXiv preprint arXiv:2402.03137, 2024"],"snippet":"Emotion classification is a challenging task in NLP due to the inherent idiosyncratic and subjective nature of linguistic expression, especially with code-mixed data. Pre-trained language models (PLMs) have achieved high performance for many tasks and …","url":["https://arxiv.org/pdf/2402.03137"]} {"year":"2024","title":"SoK: SSO-MONITOR—The Current State and Future Research Directions in Single Sign-On Security Measurements","authors":["L Jannett, M Westers, T Wich, C Mainka, A Mayer…"],"snippet":"Single Sign-On (SSO) with OAuth 2.0 and OpenID Connect 1.0 is essential for user authentication and authorization on the Internet. Billions of users rely on SSO services provided by Google, Facebook, and Apple. For large-scale measurements …","url":["https://nerd2.nrw/wp-content/uploads/2024/05/sso-monitor.pdf"]} {"year":"2024","title":"Spanning the Spectrum of Hatred Detection: A Persian Multi-Label Hate Speech Dataset with Annotator Rationales","authors":["Z Delbari, NS Moosavi, MT Pilehvar - Proceedings of the AAAI Conference on …, 2024"],"snippet":"With the alarming rise of hate speech in online communities, the demand for effective NLP models to identify instances of offensive language has reached a critical point. However, the development of such models heavily relies on the …","url":["https://ojs.aaai.org/index.php/AAAI/article/view/29743/31279"]} {"year":"2024","title":"Speaker independent recognition of low-resourced multilingual Arabic spoken words through hybrid fusion","authors":["S Mehra, V Ranga, R Agarwal, S Susan - Multimedia Tools and Applications, 2024"],"snippet":"This article introduces a supervised strategy designed to enhance spoken word recognition within the constraints of a resource-limited multilingual dataset, specifically focusing on the Arabic language. Notably, existing methodologies often …","url":["https://link.springer.com/article/10.1007/s11042-024-18804-w"]} {"year":"2024","title":"Specialising and Analysing Instruction-Tuned and Byte-Level Language Models for Organic Reaction Prediction","authors":["J Pang, I Vulić - arXiv preprint arXiv:2405.10625, 2024"],"snippet":"Transformer-based encoder-decoder models have demonstrated impressive results in chemical reaction prediction tasks. 
However, these models typically rely on pretraining using tens of millions of unlabelled molecules, which can be time-consuming …","url":["https://arxiv.org/pdf/2405.10625"]} {"year":"2024","title":"Specialized Language Models with Cheap Inference from Limited Domain Data","authors":["D Grangier, A Katharopoulos, P Ablin, A Hannun - arXiv preprint arXiv:2402.01093, 2024"],"snippet":"Large language models have emerged as a versatile tool but are challenging to apply to tasks lacking large inference budgets and large in-domain training sets. This work formalizes these constraints and distinguishes four important variables …","url":["https://arxiv.org/pdf/2402.01093"]} {"year":"2024","title":"SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models","authors":["D Liu, R Zhang, L Qiu, S Huang, W Lin, S Zhao, S Geng… - Forty-first International Conference …","P Gao, R Zhang, C Liu, L Qiu, S Huang, W Lin, S Zhao… - arXiv preprint arXiv …, 2024"],"snippet":"… Specifically, we first collect large-scale PDF datasets from Common Crawl 1 and arXiv websites. Then, we utilize PyMuPDF 2 to get the rendering results of each page in the PDF file and also save all the text annotations along with their bounding …","url":["https://arxiv.org/pdf/2402.05935","https://openreview.net/pdf?id=tDMlQkJRhZ"]} {"year":"2024","title":"Spike No More: Stabilizing the Pre-training of Large Language Models","authors":["S Takase, S Kiyono, S Kobayashi, J Suzuki - arXiv preprint arXiv:2312.16903, 2023"],"snippet":"… 2020) that consists of clean English texts extracted from Common Crawl7 as our LLM pre-training corpus. We also used the separated part of C4 as our validation data. We used GPT-2 vocabulary (Radford et al.… https://commoncrawl.org/ …","url":["https://arxiv.org/pdf/2312.16903"]} {"year":"2024","title":"SPML: A DSL for Defending Language Models Against Prompt Attacks","authors":["RK Sharma, V Gupta, D Grossman - arXiv preprint arXiv:2402.11755, 2024"],"snippet":"… The pre-training data included trillions of tokens sourced from publicly available datasets, such as Common Crawl, Wikipedia, and public domain books from Project Gutenberg. We use the versions with 7 billion and 13 billion parameters, available …","url":["https://arxiv.org/pdf/2402.11755"]} {"year":"2024","title":"Sports, crisis, and social media: a Twitter-based exploration of the Tokyo Olympics in the COVID-19 era","authors":["V Mehra, P Singh, S Bharany, RS Sawhney - Social Network Analysis and Mining, 2024"],"snippet":"The global landscape underwent a transformative shift due to the unexpected arrival of the COVID-19 pandemic, disrupting various aspects of life, including the staging of sports events. 
Notably, the Tokyo Olympics, a renowned international sports …","url":["https://link.springer.com/article/10.1007/s13278-024-01218-9"]} {"year":"2024","title":"Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text","authors":["A Hans, A Schwarzschild, V Cherepanova, H Kazemi… - arXiv preprint arXiv …, 2024"],"snippet":"… When evaluating Binoculars on samples from languages that are not well represented in Common Crawl data (standard LLM pretraining data), we find that false-positives rates remain low, which is highly desirable from a harm reduction …","url":["https://arxiv.org/pdf/2401.12070"]} {"year":"2024","title":"SRBerta—A Transformer Language Model for Serbian Cyrillic Legal Texts","authors":["M Bogdanović, J Kocić, L Stoimenov - Information, 2024"],"snippet":"… CommonCrawl [5] is a collection of texts whose size is measured in petabytes. This collection was … , CommonCrawl contains texts of lower quality, so development teams often resorted to processing texts before training and extracting …","url":["https://www.mdpi.com/2078-2489/15/2/74"]} {"year":"2024","title":"Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training","authors":["W Du, T Luo, Z Qiu, Z Huang, Y Shen, R Cheng, Y Guo… - arXiv preprint arXiv …, 2024"],"snippet":"LLMs are computationally expensive to pre-train due to their large scale. Model growth emerges as a promising approach by leveraging smaller models to accelerate the training of larger ones. However, the viability of these model growth …","url":["https://arxiv.org/pdf/2405.15319"]} {"year":"2024","title":"Stage-1 Report: Everything everywhere all at once? Exploring the richness of visual experience beyond words","authors":["L Qianchen, M Yamada, Y Aizawa, N Tsuchiya"],"snippet":"In consciousness research, understanding the nature and mechanism of visual experience has been one of the most debated topics. For humans, vision is arguably the primary sensory modality to understand the environment (Cole & Balcetis, 2021) …","url":["https://osf.io/2z85g/download"]} {"year":"2024","title":"StarCoder 2 and The Stack v2: The Next Generation","authors":["A Lozhkov, R Li, LB Allal, F Cassano, J Lamy-Poirier… - arXiv preprint arXiv …, 2024"],"snippet":"The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on …","url":["https://arxiv.org/pdf/2402.19173"]} {"year":"2024","title":"Strategies for Corpus Development for Low‐Resource Languages: Insights from Nepal","authors":["BK Bal, B Prasain, RR Ghimire, P Acharya - Automatic Speech Recognition and …, 2024"],"snippet":"… 25 whereas the monolingual dataset were derived from Wikipedia and Common Crawl for languages under consideration. They also experimented with four variations of training settings: fully supervised, fully unsupervised, semisupervised …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/9781394214624.ch15"]} {"year":"2024","title":"Strengthening the WiC: New Polysemy Dataset in Hindi and Lack of Cross Lingual Transfer","authors":["H Dubossarsky, F Dairkee - Proceedings of the 2024 Joint International Conference …, 2024"],"snippet":"… TB of filtered CommonCrawl data as opposed to mBERT which was trained on Wikipedia data, and reported to outperform mBERT in a variety of cross-lingual benchmarks like XNLI, MLQA and NER. 
HindiBERT is a monolingual model trained …","url":["https://aclanthology.org/2024.lrec-main.1332.pdf"]} {"year":"2024","title":"Structured Packing in LLM Training Improves Long Context Utilization","authors":["K Staniszewski, S Tworkowski, S Jaszczur… - arXiv preprint arXiv …, 2023"],"snippet":"Recent advances in long-context Large Language Models (LCLMs) have generated significant interest, especially in applications such as querying scientific research papers. However, their potential is often limited by inadequate context utilization. We …","url":["https://arxiv.org/pdf/2312.17296"]} {"year":"2024","title":"Studies in conversational AI: multilingual capabilities, world knowledge, and evaluation strategies","authors":["M De Bruyn - 2024"],"snippet":"This thesis studies the evolving landscape of conversational AI. The main research objective is to improve the conversational abilities of conversational agents, with a focus on integrating real-time knowledge and expanding multilingual capabilities …","url":["https://repository.uantwerpen.be/docstore/d:irua:21301"]} {"year":"2024","title":"Studying Peer Effects in Divergent Thinking: Theory and Method","authors":["CH Wong, I Juvina, PS Popescu"],"snippet":"In designing technology that supports user learning, an important first step is to understand how interactions among humans shape mutual learning. Much qualitative research in the realm of peer-assisted learning (PAL) has advanced the …","url":["https://rochi.utcluj.ro/articole/11/RoCHI2023-Wong.pdf"]} {"year":"2024","title":"SUBLLM: A Novel Efficient Architecture with Token Sequence Subsampling for LLM","authors":["Q Wang, Y Yuan, X Yang, R Zhang, K Zhao, W Liu… - arXiv preprint arXiv …, 2024"],"snippet":"While Large Language Models (LLMs) have achieved remarkable success in various fields, the efficiency of training and inference remains a major challenge. To address this issue, we propose SUBLLM, short for Subsampling-Upsampling-Bypass …","url":["https://arxiv.org/pdf/2406.06571"]} {"year":"2024","title":"Supervised Learning","authors":["ER Vankov, K Gadhoumi - Statistical Methods in Epilepsy, 2024"],"snippet":"… A corpus such as Common Crawl, which contains roughly 840 billion tokens of over two million unique words, provides such scale. There are other corpora used in these unsupervised processes, one such common candidate being a collection of all …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=Khj5EAAAQBAJ&oi=fnd&pg=PT331&dq=commoncrawl&ots=zkHFEaxxbb&sig=j_lgK0J9A3vlHt3tIE3yX5TTAK8"]} {"year":"2024","title":"SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs","authors":["J Kim, J Nam, S Mo, J Park, SW Lee, M Seo, JW Ha… - arXiv preprint arXiv …, 2024"],"snippet":"Large language models (LLMs) have made significant advancements in various natural language processing tasks, including question answering (QA) tasks. While incorporating new information with the retrieval of relevant passages is a promising …","url":["https://arxiv.org/pdf/2404.13081"]} {"year":"2024","title":"Synthetic Data Generation and Joint Learning for Robust Code-Mixed Translation","authors":["S Soni, A Kunchukuttan, T Chakraborty, MS Akhtar - arXiv preprint arXiv …, 2024"],"snippet":"The widespread online communication in a modern multilingual world has provided opportunities to blend more than one language (aka code-mixed language) in a single utterance. 
This has resulted a formidable challenge for the computational …","url":["https://arxiv.org/pdf/2403.16771"]} {"year":"2024","title":"Systematic Evaluation of Linear Transformations for Cross-lingual Semantic Spaces","authors":["A Mištera, T Brychcín, J Šmíd - 2024"],"snippet":"In the field of natural language understanding (NLU), a fundamental element is the representation of word meanings, a process that is integral to a wide range of applications. These applications span numerous domains including machine …","url":["https://www.researchsquare.com/article/rs-3913165/latest.pdf"]} {"year":"2024","title":"Table Representation Learning","authors":["M Hulsebos - 2024"],"snippet":"The increasing amount of data being collected, stored, and analyzed, induces a need for efficient, scalable, and robust methods to handle this data. Representation learning, ie, the practice of leveraging neural networks to obtain generic vector …","url":["https://pure.uva.nl/ws/files/155198398/Thesis.pdf"]} {"year":"2024","title":"Tackling an Unbalanced Dataset for Classifying Indonesian E-Commerce Reviews Using Multi Word Embedding Model","authors":["R Adi, BR Irnawan, J Li - 2023 Eighth International Conference on Informatics …, 2023"],"snippet":"… Our research used a pre-trained FastText that was trained on Indonesian Common Crawl and Wikipedia provided by [24]. 3) GloVe: GloVe is an unsupervised algorithm that can produce a vector representation of words created by [22]. Besides …","url":["https://ieeexplore.ieee.org/abstract/document/10381997/"]} {"year":"2024","title":"Take the Bull by the Horns: Hard Sample-Reweighted Continual Training Improves LLM Generalization","authors":["X Chen, Z Wang, D Sow, J Yang, T Chen, Y Liang… - arXiv preprint arXiv …, 2024"],"snippet":"… subset of the Common Crawl’s web crawl corpus. More recently, LLaMA (Touvron et al.… datasets to encompass multiple domains such as Common Crawl, C4, and source codes from … contents are mostly unintelligible” in the Common Crawl. All …","url":["https://arxiv.org/html/2402.14270v1"]} {"year":"2024","title":"TEA+: A Novel Temporal Graph Random Walk Engine With Hybrid Storage Architecture","authors":["C Huan, Y Liu, H Zhang, S Song, S Pandey, S Chen… - ACM Transactions on Architecture …"],"snippet":"Many real-world networks are characterized by being temporal and dynamic, wherein the temporal information signifies the changes in connections, such as the addition or removal of links between nodes. Employing random walks on these …","url":["https://dl.acm.org/doi/pdf/10.1145/3652604"]} {"year":"2024","title":"TeClass: A Human-Annotated Relevance-based Headline Classification and Generation Dataset for Telugu","authors":["G Kanumolu, L Madasu, N Surange, M Shrivastava - arXiv preprint arXiv:2404.11349, 2024"],"snippet":"… 2019) is a multilingual version of the RoBERTa model, and it was pre-trained on a vast 2.5TB CommonCrawl dataset, which included text from 100 languages. For our experiments, we utilized the xlm-roberta-base variant, boasting 270 million parameters. 
…","url":["https://arxiv.org/pdf/2404.11349"]} {"year":"2024","title":"Tele-FLM Technical Report","authors":["X Li, Y Yao, X Jiang, X Fang, C Wang, X Liu, Z Wang… - arXiv preprint arXiv …, 2024"],"snippet":"… CommonCrawl1 is often considered to be a repository containing diverse human experience … CommonCrawl are primarily concentrated in the English segment, with the Chinese content exhibiting relatively lower information density and quality …","url":["https://arxiv.org/pdf/2404.16645"]} {"year":"2024","title":"Tell me a story: a framework for critically investigating AI language models","authors":["L Munn, L Henrickson - Learning, Media and Technology, 2024"],"snippet":"Large language models are rapidly being rolled out into high-stakes fields like healthcare, law, and education. However, understanding of their design considerations, operational logics, and implicit biases remains limited. How might …","url":["https://www.tandfonline.com/doi/pdf/10.1080/17439884.2024.2327024"]} {"year":"2024","title":"Temporal Blind Spots in Large Language Models","authors":["J Wallat, A Jatowt, A Anand - arXiv preprint arXiv:2401.12078, 2024"],"snippet":"… A mixture of training datasets has been used: a filtered version of CommonCrawl, an expanded version of WebText [44], two not further specified internet-based book corpora, and the English Wikipedia. The most recent information in text-davinci-003’s …","url":["https://arxiv.org/pdf/2401.12078"]} {"year":"2024","title":"Temporally-Varied News Headline Generation with GPT-3","authors":["V Graf, E Tian, R Zhu"],"url":["https://richardzhu123.github.io/assets/pdf/Temporally-Varied_News_Headline_Generation_with_GPT-3.pdf"]} {"year":"2024","title":"Text Emotion Recognition Using Fast Text Word Embedding in Bi-Directional Gated Recurrent Unit","authors":["C Akalya Devi, D Karthika Renuka, T Harisudhan…"],"snippet":"Emotions are states of readiness in the mind that result from evaluations of one's own thinking or events. Although almost all of the important events in our lives are marked by emotions, the nature, causes, and effects of emotions are some of the …","url":["https://imanagerpublications.com/assets/htmlfiles/JIT11(4)September-November202219119.html"]} {"year":"2024","title":"Text Filtering Classifiers for Medium-Resource Languages","authors":["J Daðason, H Loftsson - Proceedings of the 2024 Joint International Conference …, 2024"],"snippet":"… Online texts are often obtained from large datasets, such as those published by Common Crawl (CC).The GPT-3 model was pre-trained on 499B tokens, 410B of which were obtained from CC (Brown et al.… The Icelandic Common Crawl Corpus (IC3) …","url":["https://aclanthology.org/2024.lrec-main.1372.pdf"]} {"year":"2024","title":"Text Generation: A Systematic Literature Review of Tasks, Evaluation, and Challenges","authors":["J Becker, JP Wahle, B Gipp, T Ruas - arXiv preprint arXiv:2405.15604, 2024"],"snippet":"Text generation has become more accessible than ever, and the increasing interest in these systems, especially those using large language models, has spurred an increasing number of related publications. We provide a systematic literature review …","url":["https://arxiv.org/pdf/2405.15604"]} {"year":"2024","title":"TEXT SUMMARIZATION SYSTEM USING NATURAL LANGUAGE PROCESSING AND MACHINE LEARNING","authors":["M NARAYAN - 2023"],"snippet":"Information overload, caused by the rapid growth of the Internet, has become a significant issue. 
The abundance of information available online necessitates the simplification of relevant content through summarization. Manual summarization of …","url":["http://dspace.dtu.ac.in:8080/jspui/bitstream/repository/20400/1/MANEESH%20NARAYAN%20M.Tech..pdf"]} {"year":"2024","title":"TextGram: Towards a Better Domain-Adaptive Pretraining","authors":["O Dhekane, S Hiwarkhedkar, R Joshi, A Ladkat - … , Perundurai, Erode, India, December 6–8 …"],"snippet":"… The RealNews Dataset is a large corpus of news articles scraped from Common Crawl [16]. It consists of 5000 news domains that are indexed by Google news. IMDb Movie Reviews is a binary sentiment analysis dataset made up of 50000 reviews …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=Rs0DEQAAQBAJ&oi=fnd&pg=PA161&dq=commoncrawl&ots=l6SKt6PjDk&sig=eAtYThwZwGzNMXk0LzGOBEGThUM"]} {"year":"2024","title":"Textual Coverage of Eventive Entries in Lexical Semantic Resources","authors":["E Fucikova, CF Alcaina, J Hajic, Z Urešová - Proceedings of the 2024 Joint …, 2024"],"snippet":"This short paper focuses on the coverage of eventive entries (verbs, predicates, etc.) of some well-known lexical semantic resources when applied to random running texts taken from the internet. While coverage gaps are often reported for manually …","url":["https://aclanthology.org/2024.lrec-main.1375.pdf"]} {"year":"2024","title":"The AI Companion in Education: Analyzing the Pedagogical Potential of ChatGPT in Computer Science and Engineering","authors":["Z He, T Nguyen, T Miari, M Aliasgari, S Rafatirad…"],"snippet":"… ChatGPT3 employs a pre-trained large language model [29] trained over the Common Crawl dataset [30]. During its training process, it first preprocess data from the Common Crawl dataset, data quality is refined by eliminating irrelevant or noisy …","url":["https://www.researchgate.net/profile/Zhangying-He/publication/380210828_The_AI_Companion_in_Education_Analyzing_the_Pedagogical_Potential_of_ChatGPT_in_Computer_Science_and_Engineering/links/663175407091b94e93e9cf70/The-AI-Companion-in-Education-Analyzing-the-Pedagogical-Potential-of-ChatGPT-in-Computer-Science-and-Engineering.pdf"]} {"year":"2024","title":"The AI Did My Job Well","authors":["CM Moran, M Coward"],"snippet":"Artificial intelligence (AI), an idea first proposed back in 1955 (McCarthy, Minsky, & Rochester) seemingly has burst into public consciousness nearly 70 years later. Like the chained prisoners emerging from Plato’s allegorical cave, many of us find …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=HtIDEQAAQBAJ&oi=fnd&pg=PA15&dq=commoncrawl&ots=Wd6CuXLoH8&sig=E1TnJeb3liWF_UQel1vg_9SX-fA"]} {"year":"2024","title":"The Case for Developing a Foundation Model for Planning-like Tasks from Scratch","authors":["B Srivastava, V Pallagani - arXiv preprint arXiv:2404.04540, 2024"],"snippet":"Foundation Models (FMs) have revolutionized many areas of computing, including Automated Planning and Scheduling (APS). For example, a recent study found them useful for planning problems: plan generation, language translation, model …","url":["https://arxiv.org/html/2404.04540v1"]} {"year":"2024","title":"The correlation between nativelike selection and prototypicality: a multilingual onomasiological case study using semantic embedding","authors":["H Zhang - arXiv preprint arXiv:2405.13529, 2024"],"snippet":"In native speakers' lexical choices, a concept can be more readily expressed by one expression over another grammatical one, a phenomenon known as nativelike selection (NLS). 
In previous research, arbitrary chunks such as collocations have …","url":["https://arxiv.org/pdf/2405.13529"]} {"year":"2024","title":"The Dark Side of Dataset Scaling: Evaluating Racial Classification in Multimodal Models","authors":["A Birhane, S Dehdashtian, VU Prabhu, V Boddeti - arXiv preprint arXiv:2405.04623, 2024"],"snippet":"… This is particularly important in the context of multimodal datasets whose main source is the World Wide Web, condensed and packaged as the Common Crawl dump, which is known to exhibit numerous drawbacks. In this paper, we evaluate the …","url":["https://arxiv.org/pdf/2405.04623"]} {"year":"2024","title":"The Dark Side of Language Models: Exploring the Potential of LLMs in Multimedia Disinformation Generation and Dissemination","authors":["D Barman, Z Guo, O Conlan - Machine Learning with Applications, 2024"],"snippet":"Disinformation - the deliberate spread of false or misleading information poses a significant threat to our society by undermining trust, exacerbating polarization, and manipulating public opinion. With the rapid advancement of artificial intelligence and …","url":["https://www.sciencedirect.com/science/article/pii/S2666827024000215"]} {"year":"2024","title":"The effect of generative artificial intelligence on managerial behavior across functions","authors":["ST OZTURK"],"snippet":"GAI is becoming readily available and easily accessible for those who are willing to take advantage of it. As a result, the world is witnessing a transformation regarding how managers perform their daily tasks. This study aims to understand the …","url":["https://thesis.unipd.it/bitstream/20.500.12608/62859/1/Ozturk_Safak_Tanir.pdf"]} {"year":"2024","title":"THE ENHANCEMENT OF OPEN DATA USABILITY BY DEFINING A CATEGORIZATION METHOD BASED ON THE OPEN DATA PORTAL METADATA","authors":["MBF Gligorijević"],"snippet":"Due to numerous data transparency and open government initiatives, a large volume of data was published on open data portals. To make it more accessible and visible, these portals have introduced data filtering by category, tags, format …","url":["https://www.elfak.ni.ac.rs/downloads/informacije/studenti/doktorske-magistarske/2023/dus-uni-frtunic-gligorijevic-b-milena-2023-sa-napomenom.pdf"]} {"year":"2024","title":"The ethical situation of DALL-E 2","authors":["E Hogea, J Rocafortf - arXiv preprint arXiv:2405.19176, 2024"],"snippet":"A hot topic of Artificial Intelligence right now is image generation from prompts. DALL-E 2 is one of the biggest names in this domain, as it allows people to create images from simple text inputs, to even more complicated ones. The company that made this …","url":["https://arxiv.org/pdf/2405.19176"]} {"year":"2024","title":"The Ethics and Challenges of Legal Personhood for AI","authors":["HKB Forrest - Ethics, 2024"],"snippet":"Hon. Katherine B. Forrest (Fmr.) abstract. AI’s increasing cognitive abilities will raise challenges for judges.“Legal personhood” is a flexible and political concept that has evolved throughout American history. In determining whether to expand that concept …","url":["https://www.yalelawjournal.org/pdf/ForrestYLJForumEssay_at8hdu63.pdf"]} {"year":"2024","title":"The Evolution of Multimodal Model Architectures","authors":["SN Wadekar, A Chaurasia, A Chadha, E Culurciello - arXiv preprint arXiv:2405.17927, 2024"],"snippet":"This work uniquely identifies and characterizes four prevalent multimodal model architectural patterns in the contemporary multimodal landscape. 
Systematically categorizing models by architecture type facilitates monitoring of developments in …","url":["https://arxiv.org/pdf/2405.17927"]} {"year":"2024","title":"The Fill-Mask Association Test (FMAT): Measuring Propositions in Natural Language","authors":["HWS Bao - Journal of Personality and Social Psychology, 2024","L Lin, B Wang, X Wang, ZX Wang, A Wiśniowski…"],"snippet":"Recent advances in large language models are enabling the computational intelligent analysis of psychology in natural language. Here, the Fill-Mask Association Test (FMAT) is introduced as a novel and integrative method leveraging …","url":["https://files.osf.io/v1/resources/bgsxr/providers/osfstorage/66189ed0943bee4457dfec9e?action=download&direct&version=2","https://psychbruce.github.io/paper/Bao_Accepted_JPSP_FMAT_Manuscript.pdf"]} {"year":"2024","title":"The Future of Data Science in Materials Science","authors":["P Chundi, V Bommanapally, V Gadhamshetty - Machine Learning in 2D Materials …, 2023"],"snippet":"… GPT-3 is an autoregressive model built with 96 layers [31], increased embedding vector dimension of 12888, trained with 175B parameters on five datasets including Common crawl, WebText2, Books1, Books2, and Wikipedia. Though GPT-3 has …","url":["https://www.taylorfrancis.com/chapters/edit/10.1201/9781003132981-12/future-data-science-materials-science-parvathi-chundi-vidya-bommanapally-venkataramana-gadhamshetty"]} {"year":"2024","title":"The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)","authors":["S Zeng, J Zhang, P He, Y Xing, Y Liu, H Xu, J Ren… - arXiv preprint arXiv …, 2024"],"snippet":"… 2021), we randomly select chunks from the Common Crawl dataset to serve as the {information} component. … 2021), we randomly select chunks from the Common Crawl dataset to serve as the {information} component. Due to the random …","url":["https://arxiv.org/pdf/2402.16893"]} {"year":"2024","title":"The Impact of Demonstrations on Multilingual In-Context Learning: A Multidimensional Analysis","authors":["M Zhang, V Gautam, M Wang, JO Alabi, X Shen… - arXiv preprint arXiv …, 2024"],"snippet":"… The order of languages follows their data ratio in the CommonCrawl corpus10 from high-resource to low-resource. We observe large variations in model performance across different languages. For instance, there exists a large …","url":["https://arxiv.org/pdf/2402.12976"]} {"year":"2024","title":"The Impact of Geometric Complexity on Neural Collapse in Transfer Learning","authors":["M Munn, B Dherin, J Gonzalvo - arXiv preprint arXiv:2405.15706, 2024"],"snippet":"Many of the recent remarkable advances in computer vision and language models can be attributed to the success of transfer learning via the pre-training of large foundation models. However, a theoretical framework which explains this empirical …","url":["https://arxiv.org/pdf/2405.15706"]} {"year":"2024","title":"The Impact of Infrastructure: Considerations of Generative AI in the Classroom","authors":["ES Kehlenbach - Journal of Political Science Education, 2024"],"snippet":"How are we to grapple with the increasing intrusion of technology in the classroom? What considerations need to be addressed before we begin bringing advanced technological systems like ChatGPT into our classroom spaces? 
A close attention to …","url":["https://www.tandfonline.com/doi/abs/10.1080/15512169.2024.2359439"]} {"year":"2024","title":"The Impact of OpenAI on the Work Environment located in Finland","authors":["J Laiho, K Varttila - 2023"],"snippet":"In the modern era, technology has undergone a rapid transformation that touches every aspect of our lives. At the forefront of this transformation stands artificial intelligence (AI), a field rapidly evolving through groundbreaking innovations, and …","url":["https://www.theseus.fi/bitstream/handle/10024/818052/Laiho_Varttila.pdf?sequence=2"]} {"year":"2024","title":"The model student: GPT-4 performance on graduate biomedical science exams","authors":["D Stribling, Y Xia, MK Amer, KS Graim, CJ Mulligan… - Scientific Reports, 2024"],"snippet":"… As these study materials may have been incorporated into the GPT-4 training data such as the Common Crawl 43 , standardized examinations may not be an accurate assessment of domain-specific model knowledgebase and capability …","url":["https://www.nature.com/articles/s41598-024-55568-7"]} {"year":"2024","title":"The OECD-UNSD Multinational Enterprise Information Platform","authors":["G Pilgrim, S Ang - 2024"],"snippet":"… (3) Common Crawl is an open source initiative to provide a monthly snapshot of the internet through web scraping. As part of the project, they provide a quarterly network graph of hyperlinks between domains, which can be used to determine …","url":["https://www.oecd-ilibrary.org/economics/the-oecd-unsd-multinational-enterprise-information-platform_b7d90a06-en"]} {"year":"2024","title":"The Open Web Index","authors":["G Hendriksen, M Dinzinger, SM Farzana, NA Fathima…"],"snippet":"… To operationalize our crawling pipeline, we introduce a re-crawl mechanism specifically for Common Crawl dumps. Unlike Common Crawl, we crawl on a daily basis and save the results as WARC (Web ARChive) files.This approach allows us …","url":["https://djoerdhiemstra.com/wp-content/uploads/ecir2024ir4good.pdf"]} {"year":"2024","title":"The Open Web Index: Crawling and Indexing the Web for Public Use","authors":["G Hendriksen, M Dinzinger, SM Farzana, NA Fathima… - European Conference on …, 2024"],"snippet":"… To operationalize our crawling pipeline, we introduce a re-crawl mechanism specifically for Common Crawl dumps. Unlike Common Crawl, we crawl on a daily basis and save the results as WARC (Web ARChive) files. Footnote 11 This …","url":["https://link.springer.com/chapter/10.1007/978-3-031-56069-9_10"]} {"year":"2024","title":"The Power of Absence: Thinking with Archival Theory in Algorithmic Design","authors":["J Sherman, R Morrison, L Klein, DK Rosner - arXiv preprint arXiv:2405.05420, 2024"],"snippet":"… , for example–but also of datasets like the Colossal Clean Common Crawl corpus, or C4, which explicitly documents its decision to include … [33] have shown how the quality filters imposed on the original Common Crawl dataset exhibit a preference …","url":["https://arxiv.org/pdf/2405.05420"]} {"year":"2024","title":"The Practical Epistemologies of Design and Artificial Intelligence","authors":["W Billingsley - Science & Education, 2024"],"snippet":"This article explores the epistemological trade-offs that practical and technology design fields make by exploring past philosophical discussions of design, practitioner research, and pragmatism. 
It argues that as technologists apply Artificial …","url":["https://link.springer.com/article/10.1007/s11191-024-00517-z"]} {"year":"2024","title":"The Right Prompts for the Job: Repair Code-Review Defects with Large Language Model","authors":["Z Zhao, Z Xu, J Zhu, P Di, Y Yao, X Ma - arXiv preprint arXiv:2312.17485, 2023"],"snippet":"Automatic program repair (APR) techniques have the potential to reduce manual efforts in uncovering and repairing program defects during the code review (CR) process. However, the limited accuracy and considerable time costs associated with …","url":["https://arxiv.org/pdf/2312.17485"]} {"year":"2024","title":"The Right to Scrap Data on the Internet: From the US Case hiQLabs, Inc. v. LinkedIn Corp. to the ChatGPT Scraping Cases: Differences Between US and EU Law [pre …","authors":["JTS y López - Global Privacy Law Review, 2024"],"snippet":"… a doctor when she was undergoing treatment for a rare genetic condition ended up in the Common Crawl’s archive and LAION’s dataset, and therefore in the GhatGPT datasets (because the Common Crawl is one of the ChatGPT training …","url":["https://kluwerlawonline.com/journalarticle/Global+Privacy+Law+Review/5.1/GPLR2024001"]} {"year":"2024","title":"The Role and Prospects of Using ChatGPT Artificial Intelligence-Based Chatbots in Healthcare Improvement","authors":["NA Mumtaz - ISWOPHA, 2024"],"snippet":"In the age of digital transformation, healthcare is undergoing a revolution through information technology and artificial intelligence (AI). This article explores the role of AI-based chatbots, like ChatGPT, in healthcare enhancement. Utilizing a literature …","url":["https://healthscience.dinus.ac.id/category/index.php/iswopha/article/download/147/20"]} {"year":"2024","title":"The Science of Data Filtering: Data Curation cannot be Compute Agnostic","authors":["S Goyal, P Maini, ZC Lipton, A Raghunathan, JZ Kolter - ICLR 2024 Workshop on …"],"snippet":"… In fact, we show that even a model trained on *unfiltered common crawl* obtains higher accuracy than that trained on the LAION dataset post 40 or more repetitions. While past research in neural scaling laws has considered web data to be …","url":["https://openreview.net/forum?id=9wzo4EjEGM"]} {"year":"2024","title":"The science of implicit race bias: Evidence from the Implicit Association Test","authors":["KN Morehouse, MR Banaji - Daedalus, 2024"],"snippet":"Beginning in the mid-1980s, scientific psychology underwent a revolution–the implicit revolution–that led to the development of methods to capture implicit bias: attitudes, stereotypes, and identities that operate without full conscious awareness …","url":["https://www.amacad.org/sites/default/files/publication/downloads/Daedalus_Wi24_05_Morehouse_Banaji.pdf"]} {"year":"2024","title":"The Shape of Word Embeddings: Recognizing Language Phylogenies through Topological Data Analysis","authors":["O Draganov, S Skiena - arXiv preprint arXiv:2404.00500, 2024"],"snippet":"Word embeddings represent language vocabularies as clouds of $d$-dimensional points. We investigate how information is conveyed by the general shape of these clouds, outside of representing the semantic meaning of each token. 
Specifically, we …","url":["https://arxiv.org/html/2404.00500v1"]} {"year":"2024","title":"The Silicone Ceiling: Auditing GPT's Race and Gender Biases in Hiring","authors":["L Armstrong, A Liu, S MacNeil, D Metaxa - arXiv preprint arXiv:2405.04412, 2024"],"snippet":"Large language models (LLMs) are increasingly being introduced in workplace settings, with the goals of improving efficiency and fairness. However, concerns have arisen regarding these models' potential to reflect or exacerbate social biases …","url":["https://arxiv.org/pdf/2405.04412"]} {"year":"2024","title":"The Trade-off between Performance, Efficiency, and Fairness in Adapter Modules for Text Classification","authors":["MD Bui, K von der Wense - arXiv preprint arXiv:2405.02010, 2024"],"snippet":"… 2019) is derived from 393,423 online biographies in English from the Common Crawl corpus, each including the subject’s occupation and gender. The dataset contains 28 occupations, assuming a binary gender classification. Gender identification …","url":["https://arxiv.org/pdf/2405.02010"]} {"year":"2024","title":"The use of machine learning and deep learning models in detecting depression on social media: A systematic literature review","authors":["WA Gadzama, D Gabi, MS Argungu, HU Suru - Personalized Medicine in Psychiatry, 2024"],"snippet":"Depression is regarded as one of the world's primary concerns. Recent researchers use artificial intelligence techniques like machine learning and deep learning to identify depressive symptoms automatically. This literature review focuses on using …","url":["https://www.sciencedirect.com/science/article/pii/S2468171724000115"]} {"year":"2024","title":"The Web Data Commons Schema.org Table Corpora","authors":["R Peeters, A Brinkmann, C Bizer - Companion Proceedings of the ACM on Web …, 2024"],"snippet":"… This paper presented the WDC Schema.org Table Corpora that we extracted from the Common Crawl. To the best of our knowledge, the corpora are the largest public table corpora that contain tables from many different sources which all share a …","url":["https://dl.acm.org/doi/pdf/10.1145/3589335.3651441"]} {"year":"2024","title":"The Web Is Missing an Essential Part of Infrastructure: An Open Web Index A proposal for building an index of the Web that separates the infrastructure part of the …","authors":["D Lewandowski"],"snippet":"The web as it currently exists would not be possible without search engines. They are an integral part of the Web and can also be seen as a part of the Web’s infrastructure. Google alone now serves over two trillion search queries per year. 11 …","url":["https://cacm.acm.org/opinion/the-web-is-missing-an-essential-part-of-infrastructure/"]} {"year":"2024","title":"The work of art in the age of artificial intelligibility","authors":["J McLoughlin - AI & SOCIETY, 2024"],"snippet":"The emergence of complex deep-learning models capable of producing novel images on a practically innumerable number of subjects and in an equally wide variety of artistic styles is beginning to highlight serious inadequacies in the ethical …","url":["https://link.springer.com/article/10.1007/s00146-023-01845-4"]} {"year":"2024","title":"Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good.","authors":["B Williamson, A Molnar, F Boninger - 2024"],"snippet":"Public education is a public and private good essential to democratic civic life. 
The public must, therefore, be able to provide meaningful direction over schools through transparent democratic governance structures. And yet artificial intelligence (AI) 1 …","url":["https://nepc.colorado.edu/sites/default/files/publications/PB%20Williamson_0.pdf"]} {"year":"2024","title":"Time Machine GPT","authors":["F Drinkall, E Rahimikia, JB Pierrehumbert, S Zohren - arXiv preprint arXiv:2404.18543, 2024"],"snippet":"… To scale to even larger models, processing the annual Common Crawl datasets is a necessary step, though the dataset has proved problematic due to its scale and lack of … Aside from this, cleaning Common Crawl would also demand significant …","url":["https://arxiv.org/pdf/2404.18543"]} {"year":"2024","title":"To store or not: Online cost optimization for running big data jobs on the cloud","authors":["X Fu, L Pan, S Liu - Future Generation Computer Systems, 2024"],"snippet":"As businesses increasingly rely on cloud-based big data analytics services to drive insights, reducing the cost of storing and analyzing large volumes of data in the cloud has become a major concern. During the execution of big data analysis jobs …","url":["https://www.sciencedirect.com/science/article/pii/S0167739X24000803"]} {"year":"2024","title":"TOFU: A Task of Fictitious Unlearning for LLMs","authors":["P Maini, Z Feng, A Schwarzschild, ZC Lipton, JZ Kolter - arXiv preprint arXiv …, 2024"],"snippet":"… LLMs trained on Common Crawl data1 may be able to correctly answer factual questions about this person and they may wish to have their data removed from an LLM. In fact, regulations around the Right to be Forgotten that focus on this situation …","url":["https://arxiv.org/pdf/2401.06121"]} {"year":"2024","title":"TOMGPT: Reliable Text-Only Training Approach for Cost-Effective Multi-modal Large Language Model","authors":["Y Chen, Q Wang, S Wu, Y Gao, T Xu, Y Hu - ACM Transactions on Knowledge …, 2024"],"snippet":"… For instance, c4 [37] dataset, a colossal and cleaned version of the web crawl corpus from Common Crawl, serves as a suitable option. Since the CLIP text encoder has limitations in processing long text, we subdivided the text in the c4 …","url":["https://dl.acm.org/doi/pdf/10.1145/3654674"]} {"year":"2024","title":"Toward Inference-optimal Mixture-of-Expert Large Language Models","authors":["L Yun, Y Zhuang, Y Fu, EP Xing, H Zhang - arXiv preprint arXiv:2404.02852, 2024"],"snippet":"Mixture-of-Expert (MoE) based large language models (LLMs), such as the recent Mixtral and DeepSeek-MoE, have shown great promise in scaling model size without suffering from the quadratic growth of training cost of dense transformers …","url":["https://arxiv.org/html/2404.02852v1"]} {"year":"2024","title":"Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence","authors":["A Basdevant, C François, V Storchan, K Bankston… - arXiv preprint arXiv …, 2024"],"snippet":"Over the past year, there has been a robust debate about the benefits and risks of open sourcing foundation models. However, this discussion has often taken place at a high level of generality or with a narrow focus on specific technical attributes. 
In …","url":["https://arxiv.org/pdf/2405.15802"]} {"year":"2024","title":"Towards a Scalable Geoparsing Approach for the Web","authors":["SM Farzana, T Hecking - 2024"],"snippet":"The ongoing surge in web data generation and storage, coupled with embedded geographic information, holds immense potential for enhancing search applications across diverse domains. However, extracting geographic information for further …","url":["https://ceur-ws.org/Vol-3683/paper4.pdf"]} {"year":"2024","title":"Towards a small language model powered chain‐of‐reasoning for open‐domain question answering","authors":["J Roh, M Kim, K Bae - ETRI Journal, 2024"],"snippet":"… Unlike the ONUS approach [10], which extracts simple sub-question examples from a Common Crawl, our method emphasizes the autonomous generation of a dataset without human intervention facilitated by the use of LLMs. Consequently, we …","url":["https://onlinelibrary.wiley.com/doi/abs/10.4218/etrij.2023-0355"]} {"year":"2024","title":"Towards a sociology of recurrent events. Constellations of cultural change around Eurovision in 18 countries (1981–2021)","authors":["L Carbone, J Mijs, T van Dooremalen, S Daenekindt - Poetics, 2024"],"snippet":"… First, we used CMD with a corpus that was pre-trained on Common Crawl data using fastText English embeddings. Common Crawl is a vast and heterogeneous source of information that crawls web archives and freely provides their data to the …","url":["https://www.sciencedirect.com/science/article/pii/S0304422X24000287"]} {"year":"2024","title":"Towards adaptive support for self‐regulated learning of causal relations: Evaluating four Dutch word vector models","authors":["HJ Pijeira‐Díaz, S Braumann, J van de Pol, T van Gog… - British Journal of …, 2024"],"snippet":"Advances in computational language models increasingly enable adaptive support for self‐regulated learning (SRL) in digital learning environments (DLEs; eg, via automated feedback). However, the accuracy of those models is a common concern …","url":["https://bera-journals.onlinelibrary.wiley.com/doi/pdf/10.1111/bjet.13431"]} {"year":"2024","title":"Towards automatic question generation using pre-trained model in academic field for Bahasa Indonesia","authors":["D Suhartono, MRN Majiid, R Fredyan - Education and Information Technologies, 2024"],"snippet":"… This model is a potent tool for transforming our text data into numerical representations because it was previously trained on the sizable Common Crawl Dataset in Indonesian. These embeddings are essential for later modeling and …","url":["https://link.springer.com/article/10.1007/s10639-024-12717-9"]} {"year":"2024","title":"Towards Better Statistical Understanding of Watermarking LLMs","authors":["Z Cai, S Liu, H Wang, H Zhong, X Li - arXiv preprint arXiv:2403.13027, 2024"],"snippet":"In this paper, we study the problem of watermarking large language models (LLMs). We consider the trade-off between model distortion and detection ability and formulate it as a constrained optimization problem based on the green-red algorithm …","url":["https://arxiv.org/html/2403.13027v1"]} {"year":"2024","title":"Towards Building Multilingual Language Model for Medicine","authors":["P Qiu, C Wu, X Zhang, W Lin, H Wang, Y Zhang… - arXiv preprint arXiv …, 2024"],"snippet":"In this paper, we aim to develop an open-source, multilingual language model for medicine, that the benefits a wider, linguistically diverse audience from different regions. 
In general, we present the contribution from the following aspects: first, for …","url":["https://arxiv.org/pdf/2402.13963"]} {"year":"2024","title":"Towards Chapterisation of Podcasts Detection of Host and Structuring Questions in Radio Transcripts","authors":["M Piguet - 2024"],"snippet":"This Master thesis investigates the application of Bidirectional Encoder Representations from Transformers (BERT) on podcast to identify the host and detect structuring questions within each episode. This research is conducted on an …","url":["https://infoscience.epfl.ch/record/310625/files/Towards_Chapterisation_of_Podcasts___Master_Thesis___28_03_2024.pdf"]} {"year":"2024","title":"Towards Disfluency Annotated Corpora for Indian Languages","authors":["C Kochar, V Mujadia, P Mishra, DM Sharma"],"snippet":"In the natural course of spoken language, individuals often engage in thinking and self-correction during speech production. These instances of interruption or correction are commonly referred to as disfluencies. When preparing data for …","url":["http://sanskrit.jnu.ac.in/conf/wildre7/pdf/2024.wildre-1.1.pdf"]} {"year":"2024","title":"Towards Effective Time-Aware Language Representation: Exploring Enhanced Temporal Understanding in Language Models","authors":["J Wang, A Jatowt, Y Cai - arXiv preprint arXiv:2406.01863, 2024"],"snippet":"In the evolving field of Natural Language Processing, understanding the temporal context of text is increasingly crucial. This study investigates methods to incorporate temporal information during pre-training, aiming to achieve effective time-aware …","url":["https://arxiv.org/pdf/2406.01863"]} {"year":"2024","title":"Towards Green AI. A methodological survey of the scientific literature","authors":["E Barbierato, A Gatti - IEEE Access, 2024"],"snippet":"The pervasive deployment of Deep Learning models has recently prompted apprehensions regarding their ecological footprint, owing to the exorbitant levels of energy consumption necessitated by the training and inference processes. The term …","url":["https://ieeexplore.ieee.org/iel7/6287639/6514899/10418137.pdf"]} {"year":"2024","title":"Towards interactive information seeking: conversational question answering","authors":["Y Li - 2024"],"snippet":"… To further advance the field, additional conversational search benchmarks have been introduced, such as ORQuAC [93], which extends CAsT and QuAC with more conversations constructed from Natural Questions and additional passages from …","url":["https://theses.lib.polyu.edu.hk/bitstream/200/12892/3/7339.pdf"]} {"year":"2024","title":"Towards Machine Translation Based on Monolingual Texts","authors":["I Kvapilíková - 2024"],"snippet":"… The CommonCrawl1 project carries out periodic web crawls and publishes the crawled data in an open … the CommonCrawl corpus. For example, the open source OSCAR2 project compiled a large multilingual corpus by language …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/188428/140116518.pdf?sequence=1"]} {"year":"2024","title":"Towards Privacy Preserving LLMs Training","authors":["B Buesser - Large Language Models in Cybersecurity: Threats …, 2024"],"snippet":"Privacy-preserving training of machine learning models aims to avoid or minimize (mitigate) the exact or similar reproduction (leakage) of information contained in the training data. 
This chapter introduces pre-processing methods (filtering and de-duplication) …","url":["https://link.springer.com/content/pdf/10.1007/978-3-031-54827-7_19.pdf"]} {"year":"2024","title":"Towards quantifying politicization in foreign aid project reports","authors":["S Wang, G Eggers, ART Georgiadis, TA Đo, L Gontard… - Proceedings of the Second …, 2024"],"snippet":"We aim to develop a metric of politicization by investigating whether this concept can be operationalized computationally using document embeddings. We are interested in measuring the extent to which foreign aid is politicized. Textual reports of foreign …","url":["https://aclanthology.org/2024.politicalnlp-1.9.pdf"]} {"year":"2024","title":"Towards reliability and interactive debugging for large language models","authors":["B Paranjape - 2024"],"snippet":"Large language models (LLMs) have permeated our everyday lives and are used in critical decision-making scenarios that can affect millions of people. Despite their impressive progress, model deficiencies may result in exacerbating harmful biases …","url":["https://digital.lib.washington.edu/researchworks/bitstream/handle/1773/51339/Paranjape_washington_0250E_26573.pdf?sequence=1"]} {"year":"2024","title":"Towards the automatic creation of NER systems for new domains","authors":["E Matos, M Rodrigues, A Teixeira - Proceedings of the 16th International Conference …, 2024"],"snippet":"… ner-roberta (ROBERTA): xlm-roberta-base-trimmed-pt-600002 model based in xlm-roberta-base, pretrained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Crosslingual …","url":["https://aclanthology.org/2024.propor-1.22.pdf"]} {"year":"2024","title":"Toxicity Detection for Free","authors":["Z Hu, J Piet, G Zhao, J Jiao, D Wagner - arXiv preprint arXiv:2405.18822, 2024"],"snippet":"… An analysis of undesirable content in the common crawl corpus. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2 …","url":["https://arxiv.org/pdf/2405.18822"]} {"year":"2024","title":"Training and evaluation of vector models for Galician","authors":["M Garcia - Language Resources and Evaluation, 2024"],"snippet":"This paper presents a large and systematic assessment of distributional models for Galician. To this end, we have first trained and evaluated static word embeddings (eg, word2vec, GloVe), and then compared their performance with that of current …","url":["https://link.springer.com/article/10.1007/s10579-024-09740-0"]} {"year":"2024","title":"Training BERT Models to Carry over a Coding System Developed on One Corpus to Another","authors":["D Galambos, P Zsamboki - Proceedings of the 2024 Joint International Conference …, 2024"],"snippet":"This paper describes how we train BERT models to carry over a coding system developed on the paragraphs of a Hungarian literary journal to another. The aim of the coding system is to track trends in the perception of literary translation around …","url":["https://aclanthology.org/2024.lrec-main.1452.pdf"]} {"year":"2024","title":"Training Data for the Price of a Sandwich","authors":["S Baack, M Insights - 2024"],"snippet":"… , this report sheds light on the implications of Common Crawl’s popularity by exploring the values that guide Common Crawl itself in-depth. 
… ) whether Common Crawl’s data had been used and b) what specific version of Common Crawl was …","url":["https://foundation.mozilla.org/documents/350/2024CommonCrawlMozillaFoundation.pdf"]} {"year":"2024","title":"Transformer-Based Named Entity Recognition Model-Tamil Language","authors":["K Dhayalan, N Sultanova, J Mustafina, P Daud - Data Science and Emerging Technologies …"],"snippet":"This work presents different transformer-based approaches to the named entity recognition problem in the Tamil language. The WikiANN-ta dataset is used for training, validating and testing the model. Person, organization and location tags are …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=SeoEEQAAQBAJ&oi=fnd&pg=PA251&dq=commoncrawl&ots=LpNtMEhFcV&sig=rvIZ8MAeX1vawwV5W0MYS8VUePk"]} {"year":"2024","title":"Transformers and large language models in healthcare: A review","authors":["S Nerella, S Bandyopadhyay, J Zhang, M Contreras… - Artificial Intelligence in …, 2024"],"snippet":"With Artificial Intelligence (AI) increasingly permeating various aspects of society, including healthcare, the adoption of the Transformers neural network architecture is rapidly changing many applications. Transformer is a type of deep learning …","url":["https://www.sciencedirect.com/science/article/pii/S0933365724001428"]} {"year":"2024","title":"Transformers at HSD-2Lang 2024: Hate Speech Detection in Arabic and Turkish Tweets Using BERT Based Architectures","authors":["K Singhal, J Bedi - Proceedings of the 7th Workshop on Challenges and …, 2024"],"snippet":"Over the past years, researchers across the globe have made significant efforts to develop systems capable of identifying the presence of hate speech in different languages. This paper describes the team Transformers’ submission to the subtasks …","url":["https://aclanthology.org/2024.case-1.26.pdf"]} {"year":"2024","title":"Transformers for Detection of Distressed Cardiac Patients with an ICD Based on Danish Text Messages","authors":["JDW Andersen, ML Jensen, UK Wiil, S Skovbakke… - 2023 IEEE International …, 2023"],"snippet":"Cardiac patients with implantable cardioverter defibrillator devices frequently exhibit signs of anxiety and depression (termed \"Distressed\"). Early detection of these patients is vital for evaluation, intervention, and prevention against relapse …","url":["https://ieeexplore.ieee.org/abstract/document/10385964/"]} {"year":"2024","title":"Transformers on Dynamical Systems-An Exploration of In-context Learning","authors":["S Sanghavi - 2023"],"snippet":"Large Language Models (LLMs) have shown to be highly effective at performing in-context learning, where, given a prompt, the model can learn from the prompt and complete the sequence without needing to perform additional gradient steps or fine-tuning. In …","url":["https://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-287.pdf"]} {"year":"2024","title":"Transformers@ DravidianLangTech-EACL2024: Sentiment Analysis of Code-Mixed Tamil Using RoBERTa","authors":["K Singhal, J Bedi - Proceedings of the Fourth Workshop on Speech, Vision …, 2024"],"snippet":"In recent years, there has been a persistent focus on developing systems that can automatically identify the hate speech content circulating on diverse social media platforms. 
This paper describes the team Transformers’ submission to the Caste/Immigration …","url":["https://aclanthology.org/2024.dravidianlangtech-1.25.pdf"]} {"year":"2024","title":"Transformers@ LT-EDI-EACL2024: Caste and Migration Hate Speech Detection in Tamil Using Ensembling on Transformers","authors":["K Singhal, J Bedi - Proceedings of the Fourth Workshop on Language …, 2024"],"snippet":"In recent years, there has been a persistent focus on developing systems that can automatically identify the hate speech content circulating on diverse social media platforms. This paper describes the team “Transformers” submission to the Caste …","url":["https://aclanthology.org/2024.ltedi-1.32.pdf"]} {"year":"2024","title":"Transforming Conversational AI","authors":["M McTear, M Ashurkina"],"snippet":"We were motivated to write this book by the launch in November 2022 of ChatGPT and by the ensuing excitement and disruption across the world of Conversational AI. The book is written for a broad audience who are already working, starting to work …","url":["https://link.springer.com/content/pdf/10.1007/979-8-8688-0110-5.pdf"]} {"year":"2024","title":"Transforming Dutch: Debiasing Dutch Corefence Resolution Systems for Non-binary Pronouns","authors":["G Boven - 2024"],"snippet":"Gender-neutral pronouns are increasingly being introduced across Western languages, and are continuously more frequently being adopted by non-binary individuals. Recent evaluations have however demonstrated that English language …","url":["https://studenttheses.uu.nl/bitstream/handle/20.500.12932/45908/Transforming_Dutch_ThesisAI_GoyavanBoven_Final.pdf?sequence=1&isAllowed=y"]} {"year":"2024","title":"TransGPT: Multi-modal Generative Pre-trained Transformer for Transportation","authors":["P Wang, X Wei, F Hu, W Han - arXiv preprint arXiv:2402.07233, 2024"],"snippet":"Natural language processing (NLP) is a key component of intelligent transportation systems (ITS), but it faces many challenges in the transportation domain, such as domain-specific knowledge and data, and multi-modal inputs and outputs. This …","url":["https://arxiv.org/pdf/2402.07233"]} {"year":"2024","title":"Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation","authors":["E Yang, D Lawrie, J Mayfield, DW Oard, S Miller - arXiv preprint arXiv:2401.04810, 2024"],"snippet":"Prior work on English monolingual retrieval has shown that a cross-encoder trained using a large number of relevance judgments for query-document pairs can be used as a teacher to train more efficient, but similarly effective, dual-encoder student …","url":["https://arxiv.org/pdf/2401.04810"]} {"year":"2024","title":"Translating and predicting document structure for medical domain scientific abstracts","authors":["SA Rauf, F Yvon - Computer Speech & Language, 2024"],"snippet":"Abstract Machine Translation (MT) technologies have improved in many ways and generate usable outputs for a growing number of domains and language pairs. Yet, most sentence based MT systems struggle with contextual dependencies …","url":["https://www.sciencedirect.com/science/article/pii/S0885230824000068"]} {"year":"2024","title":"Translating scientific abstracts in the bio-medical domain with structure-aware models","authors":["SA Rauf, F Yvon - Computer Speech and Language, 2024"],"snippet":"Machine Translation (MT) technologies have improved in many ways and generate usable outputs for a growing number of domains and language pairs. 
Yet, most sentence based MT systems struggle with contextual dependencies, processing …","url":["https://hal.science/hal-04476788/document"]} {"year":"2024","title":"Translation from Historical to Contemporary Japanese Using Japanese T5","authors":["H Usui, K Komiya - Proceedings of the Joint 3rd International Conference …, 2023"],"snippet":"This paper presents machine translation from historical Japanese to contemporary Japanese using a Text-to-Text Transfer Transformer (T5). The result of the previous study that used neural machine translation (NMT), Long Short Term Memory (LSTM) …","url":["https://aclanthology.org/2023.nlp4dh-1.4.pdf"]} {"year":"2024","title":"Treading water: new data on the impact of AI ethics information sessions in classics and ancient language pedagogy","authors":["EAS Ross, J Baines - Journal of Classics Teaching, 2024"],"snippet":"Over 2023, many universities and policy organisations in the higher education (HE) sector are working to create guiding principles and guidelines for the use of generative artificial intelligence (AI) in HE Teaching and Learning (T&L). Despite …","url":["https://www.cambridge.org/core/services/aop-cambridge-core/content/view/7EC62BAA5E593C81DB991897729D16DF/S2058631024000412a.pdf/treading_water_new_data_on_the_impact_of_ai_ethics_information_sessions_in_classics_and_ancient_language_pedagogy.pdf"]} {"year":"2024","title":"Understanding College Students' Satisfaction With ChatGPT: An Exploratory And Predictive Machine Learning Approach Using Feature Engineering","authors":["K Pabreja, N Pabreja - MIER Journal of Educational Studies Trends and …, 2024"],"snippet":"Artificial Intelligence (AI) technologies are continually improving and becoming more pervasive in many facets of our lives. ChatGPT is one such cutting-edge artificial intelligence application, and it has received a lot of worldwide media attention …","url":["https://mierjs.in/index.php/mjestp/article/download/2568/1593"]} {"year":"2024","title":"Understanding Emergent Abilities of Language Models from the Loss Perspective","authors":["Z Du, A Zeng, Y Dong, J Tang - arXiv preprint arXiv:2403.15796, 2024"],"snippet":"Recent studies have put into question the belief that emergent abilities in language models are exclusive to large models. This skepticism arises from two observations: 1) smaller models can also exhibit high performance on emergent abilities and 2) …","url":["https://arxiv.org/pdf/2403.15796"]} {"year":"2024","title":"Understanding Large Language Models","authors":["T Amaratunga"],"snippet":"Today, finding someone who hasn’t heard of ChatGPT, the AI chatbot that took the world by storm, is hard. ChatGPT—and its competitors such as Google Bard, Microsoft Bing Chat, etc.—are part of a broader area in AI known as large language …","url":["https://link.springer.com/content/pdf/10.1007/979-8-8688-0017-7.pdf"]} {"year":"2024","title":"Understanding latent affective bias in large pre-trained neural language models","authors":["A Kadan, P Deepak, S Bhadra, MP Gangan, VL Lajish - Natural Language …, 2024"],"snippet":"Groundbreaking inventions and highly significant performance improvements in deep learning based Natural Language Processing are witnessed through the development of transformer based large Pre-trained Language Models (PLMs). 
The …","url":["https://www.sciencedirect.com/science/article/pii/S2949719124000104"]} {"year":"2024","title":"Understanding LLMs: A Comprehensive Overview from Training to Inference","authors":["Y Liu, H He, T Han, X Zhang, M Liu, J Tian, Y Zhang… - arXiv preprint arXiv …, 2024"],"snippet":"… from the CommonCrawl. However, due to the presence of a substantial amount of low-quality data in web archives, preprocessing is essential when working with CommonCrawl data. Currently, four commonly used filtered datasets based on …","url":["https://arxiv.org/pdf/2401.02038"]} {"year":"2024","title":"Understanding New Machine Learning Architectures: Practical Generative Artificial Intelligence for Anesthesiologists","authors":["CW Connor - Anesthesiology, 2024"],"snippet":"Recent advances in neural networks have given rise to generative artificial intelligence, systems able to produce fluent responses to natural questions or attractive and even photorealistic images from text prompts. These systems were …","url":["https://pubs.asahq.org/anesthesiology/article/140/3/599/139780"]} {"year":"2024","title":"Understanding Privacy Risks of Embeddings Induced by Large Language Models","authors":["Z Zhu, N Shao, D Lian, C Wu, Z Liu, Y Yang, E Chen - arXiv preprint arXiv …, 2024"],"snippet":"… CC-News (CommonCrawl News dataset) is a dataset containing news articles from international news websites. It contains 708241 English-language news articles published between January 2017 and December 2019. Pile-PubMed is a …","url":["https://arxiv.org/pdf/2404.16587"]} {"year":"2024","title":"UNIVERSAL IMITATION GAMES","authors":["S Mahadevan - 2024"],"snippet":"In 1950, Alan Turing proposed a framework called an imitation game in which the participants are to be classified Human or Machine solely from natural language interactions. Using mathematics largely developed since Turing–category theory–we …","url":["https://people.cs.umass.edu/~mahadeva/papers/Universal_Imitation_Games_Final_Arxiv.pdf"]} {"year":"2024","title":"Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset","authors":["H Laurençon, L Tronchon, V Sanh - arXiv preprint arXiv:2403.09029, 2024"],"snippet":"… available online (and often captured in web crawls like Common Crawl) presents a tempting resource for generating pairs of screenshots and corresponding HTML codes by simply rendering the HTML and capturing the output. However, this …","url":["https://arxiv.org/pdf/2403.09029"]} {"year":"2024","title":"Unraveling the Mystery of Scaling Laws: Part I","authors":["H Su, Z Tian, X Shen, X Cai - arXiv preprint arXiv:2403.06563, 2024"],"snippet":"Scaling law principles indicate a power-law correlation between loss and variables such as model size, dataset size, and computational resources utilized during training. These principles play a vital role in optimizing various aspects of model pre-training …","url":["https://arxiv.org/html/2403.06563v1"]} {"year":"2024","title":"Unseen but influential associates: Properties of words' associates influence lexical and semantic processing","authors":["EJ Muraki, PM Pexman - Psychonomic Bulletin & Review, 2024"],"snippet":"In many models of lexical and semantic processing, it is assumed that single word processing is a function of the characteristics of the words presented and the distributional properties of the words’ networks. 
Recent research suggests that …","url":["https://link.springer.com/article/10.3758/s13423-024-02485-5"]} {"year":"2024","title":"Unsupervised LLM Adaptation for Question Answering","authors":["K Saito, K Sohn, CY Lee, Y Ushiku - arXiv preprint arXiv:2402.12170, 2024"],"snippet":"… Commoncrawl 2. The topics cover politics, sports, business, etc. To reduce the number of tokens, we ask ChatGPT to generate summaries. … We employ NewsFetch7 to crawl the news from Commoncrawl. The published date of the article …","url":["https://arxiv.org/pdf/2402.12170"]} {"year":"2024","title":"Unsupervised Outlier Detection for Language-Independent Text Quality Filtering","authors":["J Daðason, H Loftsson - Proceedings of the 3rd Annual Meeting of the Special …, 2024"],"snippet":"… Common Crawl (CC) is an organization that maintains a massive repository of data crawled … corpora, all of which are derived from Common Crawl. It consists of 6.3 trillion tokens in 167 … crawled sources, such as Common Crawl. The total size …","url":["https://aclanthology.org/2024.sigul-1.46.pdf"]} {"year":"2024","title":"UNVEILING THE COGNITIVE CAPACITY OF CHATGPT: ASSESSING ITS HUMAN-LIKE REASONING ABILITIES","authors":["J Oyeniyi"],"snippet":"… OpenAI utilizes a dataset termed Common Crawl, which is a collection of publicly available websites. This dataset includes billions of web pages and is one of the largest text datasets available. Meanwhile, this dataset is just a starting line. …","url":["https://www.researchgate.net/profile/Johnson-Oyeniyi/publication/379810137_UNVEILING_THE_COGNITIVE_CAPACITY_OF_CHATGPT_ASSESSING_ITS_HUMAN-LIKE_REASONING_ABILITIES/links/661ce53439e7641c0bc8fbf8/Unveiling-the-Cognitive-Capacity-of-ChatGPT-Assessing-its-Human-Like-Reasoning-Abilities.pdf"]} {"year":"2024","title":"Using Large Language Models to Probe Cognitive Constructs, Augment Data, and Design Instructional Materials","authors":["F Kieser, P Wulff - … in Educational Sciences: Approaches, Applications and …, 2024"],"snippet":"… It was shown that transformer-based large language models (LLMs could be pretrained on large language data sets such as the Common Crawl of the Internet, the Books Corpus, or Wikipedia, and then further fine-tuned in specific tasks [9, 10] …","url":["https://link.springer.com/chapter/10.1007/978-981-99-9379-6_14"]} {"year":"2024","title":"Using Large Language Models to Understand Telecom Standards","authors":["A Karapantelakis, M Shakur, A Nikou, F Moradi… - arXiv preprint arXiv …, 2024"],"snippet":"… They have billions of parameters and are trained on corpora such as Common Crawl [5]. For foundation models to adapt to application domains, such as telecommunications, prompt engineering and fine-tuning approaches are used …","url":["https://arxiv.org/pdf/2404.02929"]} {"year":"2024","title":"Using rhetorical strategies to design prompts: a human-in-the-loop approach to make AI useful","authors":["N Ranade, M Saravia, A Johri - AI & SOCIETY, 2024"],"snippet":"… It consumed the text content of the popular CommonCrawl dataset of web pages from 2016 to 2019. Although OpenAI curated it to eliminate redundancy and improve quality, it is nominally 45-terabytes worth of compressed text data. 
…","url":["https://link.springer.com/article/10.1007/s00146-024-01905-3"]} {"year":"2024","title":"UzRoberta: A pre-trained language model for Uzbek","authors":["R Davronov, F Adilova - AIP Conference Proceedings, 2024"],"snippet":"… Common Crawl datasets, was generated distributed word representations for 157 different languages [8]. Author’s Uzbek Wikipedia model has 110K words, compared to -830K words in the Common Crawl … It is pre-trained on 2.5TB of filtered …","url":["https://pubs.aip.org/aip/acp/article/3004/1/050001/3270462"]} {"year":"2024","title":"UZROBERTA: AN UZBEK LANGUAGE PRE-TRAINED MODEL","authors":["F Adilova, R Davronov, R Safarov - Universum: технические науки, 2023"],"snippet":"… Distributed word representations for 157 distinct languages were created using the Wikipedia and Common Crawl databases [8]. The author's Uzbek Wikipedia model has 110K words, but the Common Crawl model only has 830K words. They …","url":["https://cyberleninka.ru/article/n/uzroberta-an-uzbek-language-pre-trained-model"]} {"year":"2024","title":"Validating and Exploring Large Geographic Corpora","authors":["J Dunn - arXiv preprint arXiv:2403.08198, 2024"],"snippet":"… Beginning with a 427 billion word corpus derived from the Common Crawl, three methods are used to improve the quality of sub-corpora representing specific language-country pairs … This paper starts with an existing web corpus derived …","url":["https://arxiv.org/pdf/2403.08198"]} {"year":"2024","title":"VERB: Visualizing and Interpreting Bias Mitigation Techniques Geometrically for Word Representations","authors":["YAN ZHENG, CCM YEH, J WANG, WEI ZHANG - 2024"],"snippet":"Complicated and massive datasets are becoming more and more commonly represented as vectorized embeddings. They are part of a de facto model for words in word vector embeddings such as Word2Vec [51] and GloVe [54], and are …","url":["https://scholar.archive.org/work/2ppdkjgmevgzhe4lcnrni2nqaa/access/wayback/https://dl.acm.org/doi/pdf/10.1145/3604433"]} {"year":"2024","title":"Verifying Claims About Metaphors with Large-Scale Automatic Metaphor Identification","authors":["K Aono, R Sasano, K Takeda - arXiv preprint arXiv:2404.01029, 2024"],"snippet":"… Third, in this study, the examples were collected from CommonCrawl, which is a corpus of texts on the Web, and thus contains mostly written language. It is unknown whether similar results can be obtained when analyzing examples collected from a …","url":["https://arxiv.org/pdf/2404.01029"]} {"year":"2024","title":"ViHateT5: Enhancing Hate Speech Detection in Vietnamese With A Unified Text-to-Text Transformer Model","authors":["LT Nguyen - arXiv preprint arXiv:2405.14141, 2024"],"snippet":"… Leveraging a Transformer architecture and trained on a massive dataset of 2.5TB filtered CommonCrawl text across 100 languages, it employs a masked language modeling (MLM) objective to learn crosslingual representations. This approach …","url":["https://arxiv.org/pdf/2405.14141"]} {"year":"2024","title":"ViLLM-Eval: A Comprehensive Evaluation Suite for Vietnamese Large Language Models","authors":["TH Nguyen, AC Le, VC Nguyen - arXiv preprint arXiv:2404.11086, 2024"],"snippet":"The rapid advancement of large language models (LLMs) necessitates the development of new benchmarks to accurately assess their capabilities. 
To address this need for Vietnamese, this work aims to introduce ViLLM-Eval, the …","url":["https://arxiv.org/pdf/2404.11086"]} {"year":"2024","title":"VISION2UI: A Real-World Dataset with Layout for Code Generation from UI Designs","authors":["Y Gui, Z Li, Y Wan, Y Shi, H Zhang, Y Su, S Dong… - arXiv preprint arXiv …, 2024"],"snippet":"… we opt to create our dataset on top of the Common Crawl3 dataset. The Common Crawl dataset is a vast collection of Web page data from … This partial dataset from the original Common Crawl dataset includes 3.35 billion pages, amounting to a total …","url":["https://arxiv.org/pdf/2404.06369"]} {"year":"2024","title":"Visual Instruction Tuning towards General-Purpose Multimodal Model: A Survey","authors":["J Huang, J Zhang, K Jiang, H Qiu, S Lu - arXiv preprint arXiv:2312.16602, 2023"],"snippet":"Traditional computer vision generally solves each single task independently by a dedicated model with the task instruction implicitly designed in the model architecture, arising two limitations: (1) it leads to task-specific models, which require …","url":["https://arxiv.org/pdf/2312.16602"]} {"year":"2024","title":"VLUE: A New Benchmark and Multi-task Knowledge Transfer Learning for Vietnamese Natural Language Understanding","authors":["PNT Do, SQ Tran, PG Hoang, K Van Nguyen… - arXiv preprint arXiv …, 2024"],"snippet":"… 250002 multilingual CommonCrawl XLM-Robertalarge … This model was trained on a Transformers-based masked language task using two terabytes of CommonCrawl data across more than a hundred languages. The model has two …","url":["https://arxiv.org/html/2403.15882v1"]} {"year":"2024","title":"WaCadie: Towards an Acadian French Corpus","authors":["J Robichaud, P Cook - Proceedings of the 2024 Joint International Conference …, 2024"],"snippet":"… We treat each post as one document, similar to one website’s HTML from the Common Crawl corpus. We refer to this corpus as r/acadie. 
… We used CommonCrawl’s extensive database and scraped New Brunswick (.nb.ca) websites …","url":["https://aclanthology.org/2024.lrec-main.1514.pdf"]} {"year":"2024","title":"WanJuan-CC: A Safe and High-Quality Open-sourced English Webtext Dataset","authors":["J Qiu, H Lv, Z Jin, R Wang, W Ning, J Yu, CB Zhang… - arXiv preprint arXiv …, 2024"],"snippet":"… Moreover, Common Crawl contains a large amount of unsafe content, such as toxic and pornographic … Common Crawl data, which includes data extraction, heuristic rule filtering, fuzzy deduplication, content safety filtering, and data quality …","url":["https://arxiv.org/pdf/2402.19282"]} {"year":"2024","title":"WCC-EC 2.0: Enhancing Neural Machine Translation with a 1.6 M+ Web-Crawled English-Chinese Parallel Corpus","authors":["J Zhang, K Su, Y Tian, T Matsumoto - Electronics, 2024"],"snippet":"This research introduces WCC-EC 2.0 (Web-Crawled Corpus—English and Chinese), a comprehensive parallel corpus designed for enhancing Neural Machine Translation (NMT), featuring over 1.6 million English-Chinese sentence pairs …","url":["https://www.mdpi.com/2079-9292/13/7/1381/pdf"]} {"year":"2024","title":"We have no idea what we are walking into: AI and ethical considerations","authors":["KB Forrest - Annals of the New York Academy of Sciences, 2024"],"snippet":"… called “Common Crawl” provide data, “C4” is a curated (de-duplicated and some nonsensical junk eliminated) set of the Common Crawl data, then there is a data set well named “The Pile”, and others that are compilations of books, music …","url":["https://nyaspubs.onlinelibrary.wiley.com/doi/abs/10.1111/nyas.15133"]} {"year":"2024","title":"Weakly supervised learning for an effective focused web crawler","authors":["PRJ Dhanith, K Saeed, G Rohith, SP Raja - Engineering Applications of Artificial …, 2024"],"snippet":"Focused crawler traverses the Web to only collect pages that are relevant to a particular topic, and is increasingly considered as a way to get around the scalability issues with current general-purpose search engines. But the data diversity in the …","url":["https://www.sciencedirect.com/science/article/pii/S0952197624001027"]} {"year":"2024","title":"Web Crawl Refusals: Insights From Common Crawl","authors":["M Ansar - 2024"],"snippet":"… Abstract: This study investigates server-side blocks encountered by Common Crawl, a major … that approximately 1.68% of websites in a Common Crawl snapshot exhibit some form of explicit … of these blocks and the effectiveness of …","url":["http://essay.utwente.nl/98890/"]} {"year":"2024","title":"Web Data Commons-RDFa, Microdata, Embedded JSON-LD, and Microformats Data Sets-October 2023","authors":["A Brinkmann, C Bizer - 2024"],"snippet":"The Web Data Commons RDFa, Microdata and Microformats data sets has been extracted from the September/October 2023 release of the Common Crawl. In summary, we found structured data within 1.7 billion HTML pages out of the 3.4 …","url":["https://madata.bib.uni-mannheim.de/429/"]} {"year":"2024","title":"Web Data Commons-Schema. 
org Table Corpus 2023","authors":["R Peeters, A Brinkmann, C Bizer - 2024"],"snippet":"The WDC schema.org Table Corpus consists of ~5 million relational tables which were generated by extracting schema.org data from the Common Crawl and grouping the data into seperate tables for each class/host combination, eg all …","url":["https://madata.bib.uni-mannheim.de/430/"]} {"year":"2024","title":"Web scraping and Generative Models training in the Directive 790/19","authors":["C Gallese - i-lex, 2023"],"snippet":"… The companies that created GM never disclosed the exact corpus of training data they employed to train their models; however, it has become clear sources that many copyrighted materials were employed as part of larger datasets such as the …","url":["https://i-lex.unibo.it/article/download/18871/17439"]} {"year":"2024","title":"WebCiteS: Attributed Query-Focused Summarization on Chinese Web Search Results with Citations","authors":["H Deng, C Wang, X Li, D Yuan, J Zhan, T Zhou, J Ma… - arXiv preprint arXiv …, 2024"],"snippet":"Enhancing the attribution in large language models (LLMs) is a crucial task. One feasible approach is to enable LLMs to cite external sources that support their generations. However, existing datasets and evaluation methods in this domain still …","url":["https://arxiv.org/pdf/2403.01774"]} {"year":"2024","title":"WebGraph: The Next Generation (Is in Rust)","authors":["T Fontana, S Vigna, S Zacchiroli - ACM Web Conference 2024, 2024"],"snippet":"We report the results of a yearlong effort to port the WebGraph framework from Java to Rust. For two decades WebGraph has been instrumental in the analysis and distribution of large graphs for the research community of TheWebConf, but the …","url":["https://hal.science/hal-04494627/document"]} {"year":"2024","title":"What are Large Language Models Made of?","authors":["C Stinson - Leading the Way: Envisioning the Future of Higher …, 2024"],"snippet":"… CommonCrawl is the lowest quality but largest of these datasets, consisting of text scraped from all over the web. Their data from the years 2016 to 2019 were used to train GPT-3. OpenAI’s quality control measure for filtering CommonCrawl was to …","url":["https://ecampusontario.pressbooks.pub/futureofhighereducation/chapter/what-are-large-language-models-made-of/"]} {"year":"2024","title":"What's in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT","authors":["DM Kaplan, R Palitsky, SJ Arconada Alvarez… - Journal of Medical Internet …, 2024"],"snippet":"… In the case of GPT-3.5 (the current version at the time of this study), the model contains 175 billion parameters and was trained with commonly available text training sets that represent a broad swath of data on the internet (Common Crawl) …","url":["https://www.jmir.org/2024/1/e51837/"]} {"year":"2024","title":"When Young Scholars Cooperate with LLMs in Academic Tasks: The Influence of Individual Differences and Task Complexities","authors":["J Wang, C Huang, S Yan, W Xie, D He - International Journal of Human–Computer …, 2024"],"snippet":"As a novel AI-powered conversational system, large language models (LLMs) have the potential to be used in various applications. Recent advances in LLMs like ChatGPT have made LLM-based academic tools possible. 
However, most of the …","url":["https://www.tandfonline.com/doi/abs/10.1080/10447318.2024.2352919"]} {"year":"2024","title":"Where does In-context Translation Happen in Large Language Models","authors":["S Sia, D Mueller, K Duh - arXiv preprint arXiv:2403.04510, 2024"],"snippet":"Self-supervised large language models have demonstrated the ability to perform Machine Translation (MT) via in-context learning, but little is known about where the model performs the task with respect to prompt instructions and demonstration …","url":["https://arxiv.org/html/2403.04510v1"]} {"year":"2024","title":"Which Word Embeddings for Modeling Web Search Queries? Application to the Study of Search Strategies","authors":["C Ibarboure, L Tanguy, F Amadieu - 2023"],"snippet":"In order to represent the global strategies deployed by a user during an information retrieval session on the Web, we compare different pretrained vector models capable of representing the queries submitted to a search engine. More precisely …","url":["https://www.scitepress.org/Papers/2023/121776/121776.pdf"]} {"year":"2024","title":"Who's Afraid of ChatGPT?: Large Language Models and Conspiracies","authors":["J Laudun - Contemporary Legend, 2024"],"snippet":"… GPT-3 added Common Crawl to the training datasets, 570GB of text obtained after filtering from 45TB of plaintext. The filtering was done by comparing new documents to documents in WebText, with the latter acting as a proxy for high-quality …","url":["https://scholarworks.iu.edu/journals/index.php/cl/article/download/38419/40746"]} {"year":"2024","title":"Willkommens-Merkel, Chaos-Johnson, and Tore-Klose: Modeling the Evaluative Meaning of German Personal Name Compounds","authors":["AETDA Blessing, M Belosevic"],"snippet":"We present a comprehensive computational study of the under-investigated phenomenon of personal name compounds (PNCs) in German such as Willkommens-Merkel (‘Welcome-Merkel’). Prevalent in news, social media, and …","url":["https://openreview.net/pdf?id=GHLROxS7Fh"]} {"year":"2024","title":"Worldwide Federated Training of Language Models","authors":["A Iacob, L Sani, B Marino, P Aleksandrov, ND Lane - arXiv preprint arXiv:2405.14446, 2024"],"snippet":"… To simulate geographically distributed systems, we use a subset of Multilingual Colossal Common Crawl (mC4)[81], covering high and low-… For IID experiments, we use the standard Cleaned Colossal Common Crawl (C4) English dataset …","url":["https://arxiv.org/pdf/2405.14446"]} {"year":"2024","title":"WRITTEN IN PERSIAN USING A TRANSFORMER-BASED MODEL","authors":["T Firoozi, MJ Gierl - The Routledge International Handbook of Automated …, 2024"],"snippet":"… This model is created using contents from Common Crawl and Wikipedia. It is trained using CBOW with position weights, in dimension 300, with character n-grams of length 5, and a window of size 5 and 10. The model gives 300-dimensional vector …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=3-UFEQAAQBAJ&oi=fnd&pg=PA55&dq=commoncrawl&ots=pOzjIBhsMA&sig=n-gWEJHSQ9_8r8O4JLKLSCq2r2M"]} {"year":"2024","title":"X-Phishing-Writer: A Framework for Cross-Lingual Phishing Email Generation","authors":["SW Guo, YC Fan - ACM Transactions on Asian and Low-Resource …, 2024"],"snippet":"… The mBART model employs various language corpora from the Common Crawl dataset and undergoes multilingual pretraining based on the BART architecture. 
The training data is a concatenation of data from K different languages, denoted as D = D1, D2...DK …","url":["https://dl.acm.org/doi/pdf/10.1145/3670402"]} {"year":"2024","title":"XLMR4MD: New Vietnamese dataset and framework for detecting the consistency of description and permission in Android applications using large language models","authors":["QN Nguyen, NT Cam, K Van Nguyen - Computers & Security, 2024"],"snippet":"… The most significant update that XLMRoBERTa provides compared to the original is that the amount of training data has increased significantly, the cleaned CommonCrawl data it was trained on takes up a considerable memory block of up to …","url":["https://www.sciencedirect.com/science/article/pii/S0167404824001159"]} {"year":"2024","title":"XLORE 3: A Large-scale Multilingual Knowledge Graph from Heterogeneous Wiki Knowledge Resources","authors":["K Zeng, H Jin, X Lv, F Zhu, L Hou, Y Zhang, F Pang… - ACM Transactions on …, 2024"],"snippet":"In recent years, Knowledge Graph (KG) has attracted significant attention from academia and industry, resulting in the development of numerous technologies for KG construction, completion, and application. XLORE is one of the largest …","url":["https://dl.acm.org/doi/pdf/10.1145/3660521"]} {"year":"2024","title":"Yi: Open Foundation Models by 01. AI","authors":["A Young, B Chen, C Li, C Huang, G Zhang, G Zhang… - arXiv preprint arXiv …, 2024"],"snippet":"… We start with web documents from Common Crawl, use the CCNet pipeline [79] for language identification and perplexity scoring. Then … Notably, the Chinese content extracted from Common Crawl present unique challenges, particularly with a …","url":["https://arxiv.org/pdf/2403.04652"]} {"year":"2024","title":"Yuan 2.0-M32: Mixture of Experts with Attention Router","authors":["S Wu, J Luo, X Chen, L Li, X Zhao, T Yu, C Wang… - arXiv preprint arXiv …, 2024"],"snippet":"Yuan 2.0-M32, with a similar base architecture as Yuan-2.0 2B, uses a mixture-of-experts architecture with 32 experts of which 2 experts are active. A new router network, Attention Router, is proposed and adopted for a more efficient selection of experts …","url":["https://arxiv.org/pdf/2405.17976"]} {"year":"2024","title":"Zero-Shot Distillation for Image Encoders: How to Make Effective Use of Synthetic Data","authors":["N Popp, JH Metzen, M Hein - arXiv preprint arXiv:2404.16637, 2024"],"snippet":"… The first one involves relying on large-scale data such as common crawl datasets [7,38]. While this approach is feasible for large foundation models, it poses challenges for smaller models as these lack the same level of generalization capabilities due to …","url":["https://arxiv.org/pdf/2404.16637"]} {"year":"2024","title":"Zero-shot learning for multilingual discourse relation classification","authors":["E Metheniti, P Muller, C Braud, M Hernández-Casas - 5th Workshop on …, 2024"],"snippet":"Classifying discourse relations is a hard task: discourse-annotated data is scarce, especially for languages other than English, and there exist different theoretical frameworks that affect textual spans to be linked and the label set used. 
Thus, work …","url":["https://hal.science/hal-04483805/document"]} {"year":"2024","title":"Zero-shot Sentiment Analysis in Low-Resource Languages Using a Multilingual Sentiment Lexicon","authors":["F Koto, T Beck, Z Talat, I Gurevych, T Baldwin - arXiv preprint arXiv:2402.02113, 2024"],"snippet":"Improving multilingual language models capabilities in low-resource languages is generally difficult due to the scarcity of large-scale data in those languages. In this paper, we relax the reliance on texts in low-resource languages by using …","url":["https://arxiv.org/pdf/2402.02113"]} {"year":"2024","title":"ZS4C: Zero-Shot Synthesis of Compilable Code for Incomplete Code Snippets using ChatGPT","authors":["A Kabir, S Wang, Y Tian, M Asaduzzaman, W Zhang - arXiv preprint arXiv …, 2024"],"snippet":"Technical question and answering (Q&A) sites such as Stack Overflow have become an important source for software developers to seek knowledge. However, code snippets on Q&A sites are usually uncompilable and semantically incomplete for …","url":["https://arxiv.org/pdf/2401.14279"]} {"year":"2024","title":"Zyda: A 1.3 T Dataset for Open Language Modeling","authors":["Y Tokpanov, B Millidge, P Glorioso, J Pilault, A Ibrahim… - arXiv preprint arXiv …, 2024"],"snippet":"… Additionally, unlike these works which directly filter from Common Crawl, we aim to instead collect and collate existing open-source datasets under one banner, and to uniformly filter and deduplicate them against one another so as to create a dataset …","url":["https://arxiv.org/pdf/2406.01981"]}