Model Card for traclm-v1-3b-base

A finetune of RedPajama-INCITE-Base-3B-v1 that has undergone additional pretraining on a dataset of unclassified, publicly available U.S. Army doctrine.

Model Details

Model Description

This model is a research project exploring whether a pretrained LLM can acquire tangible, domain-specific knowledge of the Army domain through additional pretraining.

  • Developed by: The Research and Analysis Center - Monterey, Army Futures Command
  • License: Apache 2.0
  • Finetuned from model: RedPajama-INCITE-Base-3B-v1

Model Sources

  • Paper: TBP
  • Demo: TBP

Downstream Use

This is a raw language model that has not undergone instruction-based finetuning, so its output is unreliable and unsuitable for direct downstream application. Additional finetuning is strongly recommended.
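
For context, the sketch below shows how a raw causal LM like this one might be loaded and prompted in completion style with the Hugging Face transformers library. The repository id and prompt are placeholders, and the generation settings are illustrative only.

```python
# Minimal sketch: loading and prompting a base (completion-style) causal LM.
# The repository id below is a placeholder, not a confirmed path to this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "path/to/traclm-v1-3b-base"  # placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # half precision to fit a 3B model on a single GPU
    device_map="auto",          # requires the accelerate package
)

# Base models continue text; they are not tuned to follow instructions,
# so prompt with a passage to complete rather than a question or command.
prompt = "The operations process consists of"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```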

Out-of-Scope Use

The creation of this model constitutes academic research in partnership with the Naval Postgraduate School. The purpose of this research is to inform future DoD experimentation regarding the development and application of domain-specific language models. Direct application to downstream military tasks is out of scope.

Training Details

Training Data

Link to Dataset Card TBP.

In addition to RedPajama-INCITE-Base-3B-v1's original pretraining data, this model was further trained for 1 epoch on 90M tokens sourced from unclassified U.S. Army doctrine. See below for additional training details.

Training Procedure

The model was trained using the Trainer class available in the Hugging Face Transformers Python library; the hyperparameters listed below map directly onto its TrainingArguments (see the sketch after the hyperparameter list).

Training Hardware

Training was conducted on a GPU cluster belonging to the Naval Postgraduate School's Department of Defense Analysis. The compute node contained 16 NVIDIA V100 GPUs.

Training Hyperparameters

  • optimizer = adamw_torch
  • fp16 = True
  • epochs = 1
  • per_device_train_batch_size = 1
  • gradient_accumulation_steps = 16
  • learning_rate = 2e-5
  • weight_decay = 0.01
  • lr_scheduler_type = cosine
  • gradient_checkpointing = True
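
As a rough sketch, the hyperparameters above translate into a Trainer setup along the following lines. Dataset preparation is simplified, the corpus file name and output directory are placeholders, and distributed-launch details across the 16 GPUs are omitted.

```python
# Simplified sketch of the Trainer setup implied by the hyperparameters above.
# The corpus file and output directory are placeholders; distributed launch,
# sequence packing, and checkpointing details are omitted.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "togethercomputer/RedPajama-INCITE-Base-3B-v1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-NeoX tokenizers lack a pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder corpus: plain text, one passage per line.
raw = load_dataset("text", data_files={"train": "army_doctrine_corpus.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="traclm-v1-3b-base",
    optim="adamw_torch",
    fp16=True,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    gradient_checkpointing=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```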

Model Card Contact

MAJ Daniel C. Ruiz (daniel.ruiz@nps.edu)
