arxiv:2410.09223

The Same But Different: Structural Similarities and Differences in Multilingual Language Modeling

Published on Oct 11 · Submitted by ruochenz on Oct 15

Abstract

We employ new tools from mechanistic interpretability to ask whether the internal structure of large language models (LLMs) corresponds to the linguistic structures that underlie the languages on which they are trained. In particular, we ask (1) when two languages employ the same morphosyntactic processes, do LLMs handle them using shared internal circuitry? and (2) when two languages require different morphosyntactic processes, do LLMs handle them using different internal circuitry? Using English and Chinese multilingual and monolingual models, we analyze the internal circuitry involved in two tasks. We find evidence that models employ the same circuit to handle the same syntactic process independently of the language in which it occurs, and that this holds even for monolingual models trained completely independently. Moreover, we show that multilingual models employ language-specific components (attention heads and feed-forward networks) when needed to handle linguistic processes (e.g., morphological marking) that exist only in some languages. Together, our results provide new insights into how LLMs trade off between exploiting common structures and preserving linguistic differences when tasked with modeling multiple languages simultaneously.
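
To make the cross-language circuit comparison in the abstract concrete, here is a minimal sketch, assuming per-head importance scores (e.g., the drop in a task metric when a head is ablated) have already been computed separately for an English and a Chinese version of the same syntactic task. The function names (`top_k_heads`, `circuit_overlap`) and the placeholder scores are hypothetical illustrations, not the authors' code; the sketch only shows one simple way to quantify how much the top-ranked attention heads overlap across the two languages.

```python
# Illustrative sketch (not the paper's implementation): compare which attention
# heads matter most for the "same" syntactic task in English and in Chinese,
# given precomputed per-head importance scores.

import random
from typing import Dict, Set, Tuple

Head = Tuple[int, int]  # (layer index, head index)


def top_k_heads(importance: Dict[Head, float], k: int) -> Set[Head]:
    """Return the k heads with the highest importance scores."""
    ranked = sorted(importance, key=importance.get, reverse=True)
    return set(ranked[:k])


def circuit_overlap(imp_a: Dict[Head, float],
                    imp_b: Dict[Head, float],
                    k: int = 20) -> float:
    """Intersection-over-union of the top-k heads for two tasks/languages."""
    a, b = top_k_heads(imp_a, k), top_k_heads(imp_b, k)
    return len(a & b) / len(a | b)


if __name__ == "__main__":
    # Placeholder scores for a hypothetical 12-layer, 12-head model. In practice
    # each score would come from an ablation or activation-patching sweep, e.g.
    # the drop in task accuracy when that head is knocked out.
    random.seed(0)
    heads = [(layer, head) for layer in range(12) for head in range(12)]
    imp_english = {h: random.random() for h in heads}
    imp_chinese = {h: random.random() for h in heads}

    print(f"Top-20 head overlap (IoU): {circuit_overlap(imp_english, imp_chinese):.2f}")
```

Under a measure like this, high overlap would be consistent with the paper's finding of shared circuitry for shared syntactic processes, while heads important in only one language (e.g., for morphological marking absent from the other) would show up as low-overlap, language-specific components.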


