Invalid request from sample code

#52 by vickyzhang

I copy-pasted the sample code for token classification for AWS and ran it in a SageMaker notebook, but got an error.
Sample code used:

from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID':'bigscience/bloom',
    'HF_TASK':'token-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.17.0',
    pytorch_version='1.10.2',
    py_version='py38',
    env=hub,
    role=role, 
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1, # number of instances
    instance_type='ml.m5.xlarge' # ec2 instance type
)

predictor.predict({
    'inputs': "Can you please let us know more details about your "
})

Error:

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
  "code": 400,
  "type": "InternalServerException",
  "message": "\u0027bloom\u0027"
}

Code 400 means an invalid request, but this is an exact copy-paste of the sample code. What could have gone wrong here? Thanks.

Hello @vickyzhang ,

BLOOM was added to transformers in version 4.20.0, and you are using 4.17.0. Unfortunately, there is no newer DLC available yet.
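
For reference, the "'bloom'" in the error message is most likely a KeyError on the model_type that the older library cannot resolve. A minimal sketch to reproduce this locally, assuming a transformers==4.17.0 environment like the one inside the DLC:

# Assumes transformers==4.17.0 (pre-BLOOM support). Resolving the model config
# fails with KeyError: 'bloom', which is what the endpoint surfaces as "'bloom'".
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bigscience/bloom")  # raises KeyError: 'bloom' on versions < 4.20.0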

Additionally, even if there were a DLC for 4.20.0, the ml.m5.xlarge instance type would most likely be far too small to load the model.
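
To put the sizing in perspective, a rough back-of-the-envelope calculation (assuming the full 176B-parameter bigscience/bloom checkpoint in fp16, i.e. 2 bytes per parameter) versus the roughly 16 GiB of RAM on an ml.m5.xlarge:

# Rough memory estimate for loading the full BLOOM checkpoint in fp16.
params = 176_000_000_000            # ~176B parameters
bytes_per_param = 2                 # fp16
weights_gib = params * bytes_per_param / 1024**3
print(f"~{weights_gib:.0f} GiB just for the weights")  # ~328 GiB, vs ~16 GiB of RAM on ml.m5.xlarge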

Thanks @philschmid, I think 4.17.0 was what the sample code specified. Is there any way to use BLOOM (or a smaller version of it) for a token classification / NER task, then, through the Inference API or other means?
Also, please consider taking down the sample code if it is not possible to run. Thanks!
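
For reference, here is a minimal sketch of calling the hosted Inference API for a smaller checkpoint such as bigscience/bloom-560m (assuming it is available there); note that the BLOOM checkpoints on the Hub are plain language models, so the API serves text generation rather than a fine-tuned token-classification / NER head:

import requests

# "hf_xxx" is a placeholder; use your own Hugging Face API token.
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom-560m"
headers = {"Authorization": "Bearer hf_xxx"}

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "Can you please let us know more details about your "},
)
print(response.json())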
