Server error problem

Hello,

I am trying to test my fine-tuned Qwen model. However, I am constantly getting a server error. I tried creating a new dedicated deployment as well, but it did not work. The serverless API also does not work. I wonder what's causing this problem?

PS: When I use the SmolVLM block with my trained model, it works fine. I wonder why the Qwen model has this problem?

  • Project Type: Multimodal
  • Operating System & Browser: Windows / Microsoft Edge
  • Project Universe Link or Workspace/Project ID: ice_first_project-mk76w/9

Can anyone help me please?

Hi @Aayushma_Sharma!
Happy to help here. The core issue is that, at the moment, Qwen2.5-VL is not fully supported within Workflows due to its scale. This is on our team's roadmap, although I cannot provide an exact timeline for when it will be fully supported.

Apologies for the confusion, and happy building!

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.