SAM, CogVLM HTTP request internal error

Traceback (most recent call last):
  File "/home/aicads/miniconda3/envs/vlm/lib/python3.8/site-packages/inference_sdk/http/client.py", line 82, in decorate
    return function(*args, **kwargs)
  File "/home/aicads/miniconda3/envs/vlm/lib/python3.8/site-packages/inference_sdk/http/client.py", line 717, in prompt_cogvlm
    api_key_safe_raise_for_status(response=response)
  File "/home/aicads/miniconda3/envs/vlm/lib/python3.8/site-packages/inference_sdk/http/utils/requests.py", line 16, in api_key_safe_raise_for_status
    response.raise_for_status()
  File "/home/aicads/miniconda3/envs/vlm/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http://localhost:9001/llm/cogvlm
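For context, the CogVLM call goes through the SDK roughly like this (a sketch: prompt_cogvlm itself appears in the traceback, but the parameter names below are my assumption from memory, so check your inference_sdk version):

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="YOUR ROBOFLOW API KEY",
)

# prompt_cogvlm POSTs to /llm/cogvlm; the 500 above surfaces through
# raise_for_status() inside the SDK.
result = client.prompt_cogvlm(
    visual_prompt="./example.jpg",       # local image path (assumed parameter name)
    text_prompt="Describe this image.",  # assumed parameter name
)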

SAM and CogVLM both fail with the same error [500]. Here is the code I'm running:

import requests

infer_payload = {
    "image": {
        "type": "base64",
        "value": "https://i.imgur.com/Q6lDy8B.jpg",
    },
    "image_id": "example_image_id",
}

base_url = "http://localhost:9001"

# Define your Roboflow API Key
api_key = "YOUR ROBOFLOW API KEY"

res = requests.post(
    f"{base_url}/sam/embed_image?api_key={api_key}",
    json=infer_payload,
)

embeddings = res.json()['embeddings']

I’m getting a 500 error even with the basic example, is the api-key the problem?
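For reference, instead of indexing straight into res.json(), I also inspected the raw response; this is just a small sketch using plain requests, reusing the variables from the snippet above:

# Inspect the raw response before assuming a successful payload.
res = requests.post(
    f"{base_url}/sam/embed_image?api_key={api_key}",
    json=infer_payload,
)
if res.status_code != 200:
    # The response body usually says more than the bare 500.
    print(f"HTTP {res.status_code}: {res.text}")
else:
    embeddings = res.json()["embeddings"]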

Just to clarify - you are populating your own API key in that variable, correct?

Could you try this out:

import requests

infer_payload = {
    "image": {
        "type": "url",
        "value": "https://t3.ftcdn.net/jpg/01/73/37/16/360_F_173371622_02A2qGqjhsJ5SWVhUPu0t9O9ezlfvF8l.jpg",
    },
    "image_id": "example_image_id",
}

base_url = "http://localhost:9001"

# Define your Roboflow API Key
API_KEY = "YOUR ROBOFLOW API KEY"

res = requests.post(
    f"{base_url}/sam/embed_image?api_key={API_KEY}",
    json=infer_payload,
)

embeddings = res.json()['embeddings']

This works for me

Yes. Even after I deleted and reissued the key, the problem still occurred.

@Jacob_Witt

The inference server is up and running on my own local machine. I have three workspaces, all on the Public or Sandbox plan, so I don't know which API_KEY to choose. What does the API_KEY mean when the server is self-hosted?

[screenshot of my three workspaces]

Could you share the server logs with us?
Use docker ps to find the running container's ID,
then docker logs -f <container_id> to follow the logs.
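If you prefer to script it, the Docker SDK for Python can pull the same information; this is just a sketch and assumes the SDK is installed (pip install docker):

import docker  # pip install docker

client = docker.from_env()

# Equivalent of `docker ps`: list running containers with their IDs.
for container in client.containers.list():
    print(container.short_id, container.image.tags)

# Equivalent of `docker logs`: fetch the last 100 log lines.
# Replace <container_id> with an ID printed above.
# print(client.containers.get("<container_id>").logs(tail=100).decode("utf-8"))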


You should use the API key from the workspace where the model is hosted.


w/ @p.peczek

Sorry, I'm new to the inference package and I'm sure I'm confused about something.
I installed the inference server on my personal server, not in a cloud environment like Roboflow's hosted offering or AWS.
So my thinking was that the Roboflow API key has nothing to do with the three workspaces captured above, but then how do I match the key? The model itself seems to be downloaded inside Docker.

In other words, which API_KEY would be associated with a workspace when the server runs inside my own Docker?

Here is the Docker log:

INFO:     172.17.0.1:35882 - "GET / HTTP/1.1" 304 Not Modified
Traceback (most recent call last):
  File "/app/inference/core/interfaces/http/http_api.py", line 163, in wrapped_route
    return await route(*args, **kwargs)
  File "/app/inference/core/interfaces/http/http_api.py", line 1079, in sam_embed_image
    model_response = await self.model_manager.infer_from_request(
  File "/app/inference/core/managers/decorators/fixed_size_cache.py", line 91, in infer_from_request
    return await super().infer_from_request(model_id, request, **kwargs)
  File "/app/inference/core/managers/decorators/base.py", line 69, in infer_from_request
    return await self.model_manager.infer_from_request(model_id, request, **kwargs)
  File "/app/inference/core/managers/active_learning.py", line 147, in infer_from_request
    prediction = await super().infer_from_request(
  File "/app/inference/core/managers/active_learning.py", line 35, in infer_from_request
    prediction = await super().infer_from_request(
  File "/app/inference/core/managers/base.py", line 95, in infer_from_request
    rtn_val = await self.model_infer(
  File "/app/inference/core/managers/base.py", line 152, in model_infer
    return self._models[model_id].infer_from_request(request)
  File "/app/inference/models/sam/segment_anything.py", line 134, in infer_from_request
    embedding, _ = self.embed_image(**request.dict())
  File "/app/inference/models/sam/segment_anything.py", line 110, in embed_image
    img_in = self.preproc_image(image)
  File "/app/inference/models/sam/segment_anything.py", line 181, in preproc_image
    np_image = load_image_rgb(image)
  File "/app/inference/core/utils/image_utils.py", line 42, in load_image_rgb
    np_image, is_bgr = load_image(
  File "/app/inference/core/utils/image_utils.py", line 81, in load_image
    np_image, is_bgr = load_image_with_known_type(
  File "/app/inference/core/utils/image_utils.py", line 164, in load_image_with_known_type
    image = loader(value, cv_imread_flags)
  File "/app/inference/core/utils/image_utils.py", line 255, in load_image_base64
    value = pybase64.b64decode(value)
binascii.Error: Incorrect padding
INFO:     172.17.0.1:40558 - "POST /sam/embed_image?api_key=2pKBv3TsNPQAmJm6AMzm HTTP/1.1" 500 Internal Server Error

Thank you

OK, this is something I expected: our example has a bug.

The error binascii.Error: Incorrect padding comes from:

    "image": {
        "type": "base64",
        "value": "https://i.imgur.com/Q6lDy8B.jpg",
    },

where we wrongly suggested declaring the image as type base64 when the value is actually a link. The image itself also fails to load due to a hosting issue.
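If you do want to send the image as type base64, the value must be the base64-encoded bytes of the image itself, not a URL. Here is a minimal sketch of building such a payload (it reuses the stock-photo URL from my example; everything else follows the payload shape above):

import base64

import requests

# Download the image and base64-encode its raw bytes.
image_bytes = requests.get(
    "https://t3.ftcdn.net/jpg/01/73/37/16/360_F_173371622_02A2qGqjhsJ5SWVhUPu0t9O9ezlfvF8l.jpg"
).content

infer_payload = {
    "image": {
        "type": "base64",
        # b64encode returns bytes; decode to str so the payload is JSON-serializable
        "value": base64.b64encode(image_bytes).decode("utf-8"),
    },
    "image_id": "example_image_id",
}

That said, the simpler fix is to use type url.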

I provided you with this example:

import requests

infer_payload = {
    "image": {
        "type": "url",
        "value": "https://t3.ftcdn.net/jpg/01/73/37/16/360_F_173371622_02A2qGqjhsJ5SWVhUPu0t9O9ezlfvF8l.jpg",
    },
    "image_id": "example_image_id",
}

base_url = "http://localhost:9001"

# Define your Roboflow API Key
API_KEY = "YOUR ROBOFLOW API KEY"

res = requests.post(
    f"{base_url}/sam/embed_image?api_key={API_KEY}",
    json=infer_payload,
)

embeddings = res.json()['embeddings']

Could you please try it out?


It's working well now…

But I'm curious: doesn't the API key have any meaning when running the server locally?