How to run/access this model using API calls on "Inference Endpoints", "Replicate", or my own 64 GB Linux desktop?

#5 · opened by ghthaker1955

Hi:

I would like to run this model. I have been testing Llama 2 on replicate.com as a paid service (not expensive), but it seems this model may be easiest to run on Inference Endpoints? I also have a 64 GB Linux desktop (with an RTX 3000 Mobile/Max-Q GPU) and would be happy to run it there if the cloud options prove too difficult.
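For context, here is roughly how I call Llama 2 on Replicate today with their Python client. The model slug and parameters are just examples from my testing; check Replicate's catalog for whatever build is actually listed:

```python
# Minimal sketch of a Replicate API call, assuming the `replicate` Python
# client is installed and REPLICATE_API_TOKEN is set in the environment.
# The slug "meta/llama-2-70b-chat" is an example, not a Meditron listing.
import replicate

output = replicate.run(
    "meta/llama-2-70b-chat",
    input={"prompt": "What are common causes of anemia?", "max_new_tokens": 256},
)
# Language models on Replicate stream output as an iterator of strings.
print("".join(output))
```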

Does anyone have instructions for running this model in any of these ways? I have read https://github.com/epfLLM/meditron/blob/main/deployment/README.md, which gives directions for using this model with API calls from a client. However, I don't know whether I need to run it in the cloud or whether a 64 GB Ubuntu desktop is sufficient. (I don't have a Mac.)
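If I read the deployment README correctly, it serves the model behind an OpenAI-compatible HTTP API, so a client call would look something like the sketch below. The host, port, and model name here are assumptions on my part, not taken from the README:

```python
# Hypothetical client call against a locally served, OpenAI-compatible
# endpoint (e.g. a /v1/completions route). URL, port, and model name are
# placeholders; adjust to match the actual deployment.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "epfl-llm/meditron-7b",  # placeholder model name
        "prompt": "What are common causes of anemia?",
        "max_tokens": 256,
        "temperature": 0.2,
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```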

I need to access this model by making API calls; that is my use case.
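If Inference Endpoints turns out to be the easiest route, my understanding is that the call would be a plain HTTPS request like this (the endpoint URL is a placeholder you get from the dashboard when you create the endpoint):

```python
# Sketch of a call to a Hugging Face Inference Endpoint for text generation.
# ENDPOINT_URL comes from the endpoint dashboard; HF_TOKEN is a Hugging Face
# access token with permission to reach the endpoint.
import os
import requests

ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"  # placeholder
HF_TOKEN = os.environ["HF_TOKEN"]

resp = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={
        "inputs": "What are common causes of anemia?",
        "parameters": {"max_new_tokens": 256},
    },
)
resp.raise_for_status()
print(resp.json())
```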

https://ollama.ai/ makes it easy to run models. Maybe instructions on how to run it with https://ollama.ai/ would be helpful?
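In case it helps anyone sketch an answer: Ollama exposes a local REST API on port 11434, so if a Meditron build were available in its library (or imported from a GGUF via a Modelfile), the call would look roughly like this. The model tag "meditron" is an assumption:

```python
# Sketch of a call to a locally running Ollama server. Assumes `ollama serve`
# is running and a model tagged "meditron" has been pulled or created; that
# tag is an assumption on my part.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "meditron",  # assumed model tag
        "prompt": "What are common causes of anemia?",
        "stream": False,      # return one JSON object instead of a stream
    },
)
resp.raise_for_status()
print(resp.json()["response"])
```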
