vllm.entrypoints.pooling.classify.protocol ¶
ClassificationRequest module-attribute ¶
ClassificationRequest: TypeAlias = (
    ClassificationCompletionRequest
    | ClassificationChatRequest
)
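ClassificationRequest is the union of the two body shapes accepted by the classification route: the prompt-style ClassificationCompletionRequest and the messages-style ClassificationChatRequest. Below is a minimal sketch of validating a raw payload against the alias with pydantic; the "model", "input", and "messages" field names (and the example checkpoint) are assumptions rather than part of this page, and the real server may apply additional validation.

```python
# Illustrative only: resolving a payload to one of the two request variants.
# Field names other than the class names are assumptions; extra required
# fields may exist on the real models.
from pydantic import TypeAdapter

from vllm.entrypoints.pooling.classify.protocol import (
    ClassificationChatRequest,
    ClassificationCompletionRequest,
    ClassificationRequest,
)

adapter = TypeAdapter(ClassificationRequest)

# A prompt-style body should resolve to ClassificationCompletionRequest ...
prompt_style = adapter.validate_python(
    {"model": "jason9693/Qwen2.5-1.5B-apeach", "input": "vLLM is great!"}
)
assert isinstance(prompt_style, ClassificationCompletionRequest)

# ... while a messages-style body should resolve to ClassificationChatRequest.
chat_style = adapter.validate_python(
    {
        "model": "jason9693/Qwen2.5-1.5B-apeach",
        "messages": [{"role": "user", "content": "vLLM is great!"}],
    }
)
assert isinstance(chat_style, ClassificationChatRequest)
```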
ClassificationChatRequest ¶
Bases: PoolingBasicRequestMixin, ChatRequestMixin, ClassifyRequestMixin
Source code in vllm/entrypoints/pooling/classify/protocol.py
mm_processor_kwargs class-attribute instance-attribute ¶
mm_processor_kwargs: dict[str, Any] | None = Field(
    default=None,
    description="Additional kwargs to pass to the HF processor.",
)
build_tok_params ¶
build_tok_params(
    model_config: ModelConfig,
) -> TokenizeParams
Source code in vllm/entrypoints/pooling/classify/protocol.py
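A hedged sketch of the chat-style variant: construct the request, optionally forwarding mm_processor_kwargs to the HF processor, then derive TokenizeParams for the served model via build_tok_params. Only mm_processor_kwargs and build_tok_params appear on this page; the model/messages fields, the "num_crops" kwarg, and how a ModelConfig is obtained are assumptions.

```python
# Sketch, not the server's actual request handling.
from vllm.config import ModelConfig
from vllm.entrypoints.pooling.classify.protocol import ClassificationChatRequest


def tokenize_params_for(request: ClassificationChatRequest, model_config: ModelConfig):
    # build_tok_params maps the per-request options onto TokenizeParams for
    # the served model (the ModelConfig normally comes from the engine).
    return request.build_tok_params(model_config)


request = ClassificationChatRequest(
    model="jason9693/Qwen2.5-1.5B-apeach",
    messages=[{"role": "user", "content": "Classify the sentiment of this post."}],
    # Forwarded as-is to the HF processor (relevant for multimodal models);
    # "num_crops" is a hypothetical processor kwarg.
    mm_processor_kwargs={"num_crops": 4},
)
```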
ClassificationCompletionRequest ¶
Bases: PoolingBasicRequestMixin, CompletionRequestMixin, ClassifyRequestMixin
Source code in vllm/entrypoints/pooling/classify/protocol.py
build_tok_params ¶
build_tok_params(
    model_config: ModelConfig,
) -> TokenizeParams
Source code in vllm/entrypoints/pooling/classify/protocol.py
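Over HTTP, the prompt-style variant corresponds to the /classify route of the OpenAI-compatible server. The sketch below assumes a running server with a sequence-classification model; the "input" field and the response keys follow the documented Classification API rather than anything listed on this page.

```python
# Assumes something like `vllm serve jason9693/Qwen2.5-1.5B-apeach` is
# running locally on the default port.
import requests

resp = requests.post(
    "http://localhost:8000/classify",
    json={
        "model": "jason9693/Qwen2.5-1.5B-apeach",
        "input": ["vLLM is great!", "This is terrible."],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])  # e.g. "classify-<hex uuid>"
```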
ClassificationData ¶
Bases: OpenAIBaseModel
Source code in vllm/entrypoints/pooling/classify/protocol.py
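ClassificationData holds one per-prompt result inside the response's data list. The item below is purely illustrative; its field names (index, label, probs, num_classes) are assumptions drawn from the server's documented /classify output, not from this page, and the values are made up.

```python
# Hypothetical single result item, for illustration only.
item = {
    "index": 0,             # position of the prompt in the request
    "label": "Default",     # predicted class label
    "probs": [0.565, 0.435],
    "num_classes": 2,
}
print(f"prompt {item['index']}: {item['label']} ({max(item['probs']):.3f})")
```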
ClassificationResponse ¶
Bases: OpenAIBaseModel
Source code in vllm/entrypoints/pooling/classify/protocol.py
created class-attribute instance-attribute ¶
id class-attribute instance-attribute ¶
id: str = Field(
    default_factory=lambda: f"classify-{random_uuid()}"
)
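The id default factory stamps every response with a fresh "classify-" prefixed identifier, regenerated per instance. A minimal sketch of the resulting shape, using the stdlib in place of vLLM's random_uuid helper (which returns a hex UUID):

```python
import uuid

# Same shape as the default factory above: "classify-" plus 32 hex characters.
example_id = f"classify-{uuid.uuid4().hex}"
print(example_id)  # e.g. classify-3f2b9c0d...
```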