Why a different architecture from mini and medium?

#5
by winddude - opened

How come the small family of models has a different architecture from the mini and medium ones? Phi3SmallForCausalLM vs. Phi3ForCausalLM.

Same question here. Can you explain the specific considerations and advantages? Or is it simply an early experiment?

Because of this, there is currently no way to quantize this model. Please upload the GGUF quants, or explain to us how to quantize it. Not all of us have an A100 at home. As for me, I'm very excited about Phi-3-Mini, and I suspect that a Phi-3-small q4_K_M quant could fit nicely on my MacBook M1 with 8 GB (rough arithmetic below).

Please, help us! I've been talking with a lot of people, and nobody has managed to quantize this model!
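For context, here is the back-of-the-envelope estimate behind my hope; the parameter count and bits-per-weight are my own approximations, not official figures, and runtime overhead (KV-cache, activations) would come on top:

```python
# Rough, illustrative estimate only: weight memory for a ~7B-parameter model
# at an assumed ~4.8 bits per weight (roughly what llama.cpp's q4_K_M averages).
# Ignores KV-cache, activations, and runtime overhead.

params = 7.4e9                # assumed approximate parameter count
bits_per_weight = 4.8         # assumed average for a q4_K_M-style quant
bytes_total = params * bits_per_weight / 8
print(f"~{bytes_total / 2**30:.1f} GiB of weights")   # ~4.1 GiB
```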

Microsoft org

Hi!
There are a few reasons for the design choices.

  1. The tiktoken-based tokenizer and its larger vocab gave us some performance gains in our preliminary experiments. Additionally, tiktoken itself is faster at encoding than the transformers FastTokenizers (see the tiktoken repo for a benchmark). A minimal tiktoken sketch appears right after this list.
  2. We tried to gear the 7B model towards faster inference. As a result, the model uses block-sparse attention in conjunction with dense attention, in addition to GQA. This reduces the KV-cache memory footprint considerably, thereby allowing for faster inference with a continuous batcher like vLLM (a back-of-the-envelope KV-cache comparison also follows below). We have an open PR with vLLM for integrating the block-sparse kernels there as well (this PR).
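
To make point 1 concrete, here is a minimal tiktoken sketch; `cl100k_base` is used purely as an example of a large BPE vocabulary and is not necessarily the exact encoding shipped with Phi-3-small:

```python
# Minimal illustration of a tiktoken BPE encoding with a ~100k-entry vocab.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # example encoding, assumption
print(enc.n_vocab)                           # ~100k entries
ids = enc.encode("Phi-3-small uses a tiktoken-based tokenizer.")
print(ids)
print(enc.decode(ids))
```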

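And to make point 2 concrete, a back-of-the-envelope comparison of the KV-cache footprint with and without GQA. The layer/head numbers below are illustrative assumptions, not quoted from the config; the block-sparse layers shrink the cache further still, since only a window of past keys/values needs to be attended:

```python
# Illustrative KV-cache size comparison: full multi-head attention vs. GQA.
# Layer/head counts are assumptions for the sake of the arithmetic.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch=1, bytes_per_elem=2):
    # 2x for the separate key and value tensors; fp16 -> 2 bytes per element
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

layers, heads, head_dim, seq_len = 32, 32, 128, 8192
mha = kv_cache_bytes(layers, kv_heads=heads, head_dim=head_dim, seq_len=seq_len)
gqa = kv_cache_bytes(layers, kv_heads=8,     head_dim=head_dim, seq_len=seq_len)
print(f"MHA cache: {mha / 2**30:.2f} GiB, GQA cache: {gqa / 2**30:.2f} GiB")
# -> MHA cache: 4.00 GiB, GQA cache: 1.00 GiB
```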
Unfortunately, because of the custom kernels, we've not been able to leverage the open-source GGUF format from llama.cpp (or the quantizations it offers). However, there is active work going on to get the model onto llama.cpp (see this issue). Once that is done, the quantized models should follow :)
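In the meantime, the unquantized model runs through the transformers remote-code path, which is what pulls in the custom attention implementation. A minimal loading sketch; the repo id and runtime details here are assumptions, so check the model card for the authoritative instructions:

```python
# Sketch of loading the checkpoint with its custom attention code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-small-8k-instruct"   # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,   # pulls in the custom block-sparse attention code
)
```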

@bapatra, is this the reason for the inference issue I saw in this link? Thanks!

Microsoft org

I don't think the two issues are related. I commented on the question there!

bapatra changed discussion status to closed
