
StableCode-Completion-Alpha-3B

This model does not work; it is uploaded for debugging purposes only.

Track here: https://github.com/huggingface/swift-transformers/issues/13

Model Description

StableCode-Completion-Alpha-3B is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages, chosen from the most-used languages in the 2023 Stack Overflow Developer Survey.

Usage

The model is intended to perform single- and multi-line code completion from a long context window of up to 16,384 tokens. Get started generating code with StableCode-Completion-Alpha-3B using the following code snippet:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; torch_dtype="auto" uses the dtype stored
# in the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
model = AutoModelForCausalLM.from_pretrained(
  "stabilityai/stablecode-completion-alpha-3b",
  trust_remote_code=True,
  torch_dtype="auto",
)
model.cuda()

# Complete a code prompt with low-temperature sampling.
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
  **inputs,
  max_new_tokens=48,
  temperature=0.2,
  do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
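
If you want to watch tokens appear as they are generated rather than waiting for the full completion, the TextStreamer utility from transformers can be attached to the same generate call (a minimal sketch added here for illustration; it is not part of the original snippet):

from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt avoids
# echoing the input prompt back.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, max_new_tokens=48, temperature=0.2, do_sample=True, streamer=streamer)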

Model Details

  • Developed by: Stability AI
  • Model type: StableCode-Completion-Alpha-3B is an auto-regressive language model based on the transformer decoder architecture.
  • Language(s): Code
  • Library: GPT-NeoX
  • License: Model checkpoints are licensed under the Apache 2.0 license.
  • Contact: For questions and comments about the model, please email lm@stability.ai

Model Architecture

Parameters     Hidden Size  Layers  Heads  Sequence Length
2,796,431,360  2560         32      32     16384
  • Decoder Layer: Parallel Attention and MLP residuals with a single input LayerNorm (Wang & Komatsuzaki, 2021); see the sketch after this list
  • Position Embeddings: Rotary Position Embeddings (Su et al., 2021)
  • Bias: LayerNorm bias terms only
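
For illustration, a minimal PyTorch sketch of a parallel-residual decoder block at these dimensions (our simplification, not the actual StableCode implementation; rotary position embeddings and the causal attention mask are omitted for brevity):

import torch
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    """Parallel attention/MLP residual block (GPT-NeoX style) - a sketch.

    Rotary position embeddings and causal masking are omitted, and the
    attention/MLP internals are stand-ins for the real implementation.
    """
    def __init__(self, hidden_size=2560, num_heads=32):
        super().__init__()
        # A single input LayerNorm feeds both branches; per the card,
        # bias terms are kept on LayerNorm only.
        self.ln = nn.LayerNorm(hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, 4 * hidden_size, bias=False),
            nn.GELU(),
            nn.Linear(4 * hidden_size, hidden_size, bias=False),
        )

    def forward(self, x):
        h = self.ln(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        # Parallel residuals: both branches read the same normalized
        # input and are summed with the residual stream (x).
        return x + attn_out + self.mlp(h)

Compared to the sequential formulation, where the MLP consumes the attention output, the parallel form lets both branches run concurrently, which improves training throughput at scale.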

Training

StableCode-Completion-Alpha-3B is pre-trained with a multi-stage context-length extension schedule, following similar work (Nijkamp et al., 2023): first pre-training at a context length of 4,096 tokens for 300 billion tokens, then fine-tuning at a context length of 16,384 tokens for another 200 billion tokens.
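
Written out as data, the schedule amounts to the following (hypothetical field names, for illustration only; these do not come from the training code):

# The two-stage schedule described above, as hypothetical config dicts.
schedule = [
    {"stage": "pre-train",              "context_length": 4_096,  "tokens": 300e9},
    {"stage": "long-context fine-tune", "context_length": 16_384, "tokens": 200e9},
]
for s in schedule:
    print(f"{s['stage']}: context={s['context_length']}, tokens={s['tokens']:.0e}")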

Training Dataset

The first pre-training stage relies on 300B tokens sourced from the starcoder-data dataset, filtered to the top programming languages occurring in the Stack Overflow Developer Survey. We then fine-tune on a longer-context augmentation of the starcoder-data dataset, which increases the average tokens per sample to 20k.

Training Procedure

The model is pre-trained on the dataset mixes mentioned above in mixed precision (BF16), optimized with AdamW, and trained using the StarCoder tokenizer, which has a vocabulary size of 49k.
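
A quick way to confirm the tokenizer's vocabulary size (a sketch; it assumes the checkpoint bundles the tokenizer, as the original stabilityai repository does):

from transformers import AutoTokenizer

# The StarCoder tokenizer shipped with the checkpoint has a ~49k vocabulary.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
print(tokenizer.vocab_size)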

Use and Limitations

Intended Use

StableCode-Completion-Alpha-3B independently generates new code completions, but we recommend using it together with the huggingface/huggingface-vscode extension developed by BigCode and Hugging Face (a code-completion VS Code extension for open-source models, on github.com) to identify and, if necessary, attribute any outputs that match training code.

Limitations and bias

This model is intended to be used responsibly. It must not be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm.

How to cite

@misc{StableCodeCompleteAlpha,
      url={https://huggingface.co/stabilityai/stablecode-complete-alpha-3b},
      title={Stable Code Complete Alpha},
      author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian}
}