---
datasets:
- code_search_net
language:
- code
pipeline_tag: text-classification
inference: false
tags:
- code
- programming-language
base_model: huggingface/CodeBERTa-language-id
---

# ONNX version of huggingface/CodeBERTa-language-id

**This model is a conversion of [huggingface/CodeBERTa-language-id](https://huggingface.co/huggingface/CodeBERTa-language-id) to ONNX.** The model was converted to ONNX using the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library.

## Model Architecture

**Base Model**: CodeBERTa, a RoBERTa-style model pretrained on code and fine-tuned to identify the programming language of a code snippet.

**Modifications**: None beyond the conversion to ONNX.

## Usage

### Optimum

Loading the model requires the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library to be installed.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("laiyer/CodeBERTa-language-id")
model = ORTModelForSequenceClassification.from_pretrained("laiyer/CodeBERTa-language-id")
classifier = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
)

print(classifier("""
def f(x):
    return x**2
"""))
```

### LLM Guard

This model can be used by the LLM Guard [Code scanner](https://llm-guard.com/input_scanners/code/) to detect code in prompts; a minimal usage sketch is included at the end of this card.

## Community

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, or engage in discussions about LLM security!
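
As referenced in the LLM Guard section above, below is a minimal, hypothetical sketch of using the Code scanner, which relies on language identification of code found in a prompt. The constructor arguments shown (`languages`, `is_blocked`) and the `scan` return values follow the LLM Guard documentation at the time of writing and may differ between releases, so treat this as an illustration rather than a definitive API reference.

```python
# Hypothetical sketch: block prompts that contain Python code using the
# LLM Guard Code scanner. Argument names may vary by LLM Guard version.
from llm_guard.input_scanners import Code

scanner = Code(languages=["Python"], is_blocked=True)

prompt = """
Can you refactor this?
def f(x):
    return x**2
"""

# scan() returns the (possibly sanitized) prompt, a validity flag, and a risk score.
sanitized_prompt, is_valid, risk_score = scanner.scan(prompt)
print(is_valid, risk_score)  # is_valid is False when blocked code is detected
```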