vllm.renderers.hf ¶
_PROCESSOR_CHAT_TEMPLATES module-attribute ¶
Used in _try_get_processor_chat_template to avoid calling cached_get_processor again when the processor previously failed to load.
This is needed because lru_cache does not cache calls that raise an exception.
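Because lru_cache repeats the call (and the failure) on every lookup after an exception, failed loads are memoized separately. A minimal sketch of that pattern, using illustrative names rather than the exact ones in vllm/renderers/hf.py:

```python
from functools import lru_cache

# lru_cache only memoizes successful calls, so failures are recorded here.
_FAILED_PROCESSORS: dict[str, None] = {}

@lru_cache
def _load_processor(name: str):
    raise RuntimeError(f"no processor available for {name!r}")  # stand-in for a failing load

def try_get_processor(name: str):
    if name in _FAILED_PROCESSORS:
        return None  # skip the (uncached) failing call on repeat lookups
    try:
        return _load_processor(name)
    except Exception:
        _FAILED_PROCESSORS[name] = None
        return None
```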
_cached_resolve_chat_template_kwargs module-attribute ¶
_cached_resolve_chat_template_kwargs = lru_cache(
_resolve_chat_template_kwargs
)
AssistantTracker ¶
Bases: Extension
Source code in vllm/renderers/hf.py
parse ¶
Source code in vllm/renderers/hf.py
HfRenderer ¶
Bases: BaseRenderer
Source code in vllm/renderers/hf.py
use_unified_vision_chunk instance-attribute ¶
use_unified_vision_chunk = getattr(
hf_config, "use_unified_vision_chunk", False
)
__init__ ¶
__init__(
config: ModelConfig, tokenizer_kwargs: dict[str, Any]
) -> None
Source code in vllm/renderers/hf.py
from_config classmethod ¶
from_config(
config: ModelConfig, tokenizer_kwargs: dict[str, Any]
) -> BaseRenderer
get_tokenizer ¶
get_tokenizer() -> HfTokenizer
render_messages ¶
render_messages(
messages: list[ChatCompletionMessageParam],
params: ChatParams,
) -> tuple[
list[ConversationMessage],
TextPrompt | TokensPrompt | EmbedsPrompt,
]
Source code in vllm/renderers/hf.py
render_messages_async async ¶
render_messages_async(
messages: list[ChatCompletionMessageParam],
params: ChatParams,
) -> tuple[
list[ConversationMessage],
TextPrompt | TokensPrompt | EmbedsPrompt,
]
Source code in vllm/renderers/hf.py
_detect_content_format cached ¶
_detect_content_format(
chat_template: str,
*,
default: ChatTemplateContentFormat,
) -> ChatTemplateContentFormat
Source code in vllm/renderers/hf.py
_get_hf_base_chat_template_params cached ¶
Source code in vllm/renderers/hf.py
_is_attr_access ¶
Source code in vllm/renderers/hf.py
_is_var_access ¶
_is_var_or_elems_access ¶
Source code in vllm/renderers/hf.py
_iter_nodes_assign_content_item ¶
Source code in vllm/renderers/hf.py
_iter_nodes_assign_messages_item ¶
Source code in vllm/renderers/hf.py
_iter_nodes_assign_var_or_elems ¶
_iter_nodes_assign_var_or_elems(root: Node, varname: str)
Source code in vllm/renderers/hf.py
_log_chat_template_content_format cached ¶
_log_chat_template_content_format(
chat_template: str | None,
given_format: ChatTemplateContentFormatOption,
detected_format: ChatTemplateContentFormatOption,
)
Source code in vllm/renderers/hf.py
_resolve_chat_template_content_format ¶
_resolve_chat_template_content_format(
chat_template: str | None,
tools: list[dict[str, Any]] | None,
tokenizer: HfTokenizer,
*,
model_config: ModelConfig,
) -> ChatTemplateContentFormat
Source code in vllm/renderers/hf.py
_resolve_chat_template_kwargs ¶
Source code in vllm/renderers/hf.py
_try_extract_ast ¶
_try_extract_ast(chat_template: str) -> Template | None
Source code in vllm/renderers/hf.py
_try_get_processor_chat_template ¶
_try_get_processor_chat_template(
tokenizer: HfTokenizer, *, trust_remote_code: bool
) -> str | None
Source code in vllm/renderers/hf.py
build_video_prompts_from_mm_data ¶
build_video_prompts_from_mm_data(
mm_data: MultiModalDataDict,
) -> list[str]
Build video prompts from vision_chunk data.
Collects prompts from video chunks and groups them by video_idx.
Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| mm_data | MultiModalDataDict | Processed multimodal data with vision_chunk items | required |

Returns:

| Type | Description |
|------|-------------|
| list[str] | List of video prompts, one per video. |
Source code in vllm/renderers/hf.py
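A minimal sketch of the grouping the docstring describes, assuming each vision_chunk item exposes a video_idx and a prompt field (the real item layout in vllm/renderers/hf.py may differ):

```python
from collections import defaultdict

def group_chunk_prompts(chunks: list[dict]) -> list[str]:
    # Collect chunk prompts per source video, preserving chunk order.
    by_video: dict[int, list[str]] = defaultdict(list)
    for chunk in chunks:
        by_video[chunk["video_idx"]].append(chunk["prompt"])
    # One concatenated prompt per video, ordered by video index.
    return ["".join(by_video[idx]) for idx in sorted(by_video)]
```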
rebuild_mm_uuids_from_mm_data ¶
rebuild_mm_uuids_from_mm_data(
mm_uuids: MultiModalUUIDDict,
mm_data: MultiModalDataDict,
) -> MultiModalUUIDDict
Rebuild mm_uuids after vision_chunk processing.
When videos are split into chunks, the original UUIDs need to be updated to reflect the new UUIDs generated for each chunk.
Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| mm_uuids | MultiModalUUIDDict | Original UUIDs dictionary | required |
| mm_data | MultiModalDataDict | Processed multimodal data with vision_chunk items | required |

Returns:

| Type | Description |
|------|-------------|
| MultiModalUUIDDict | Updated UUIDs dictionary with chunk UUIDs |
Source code in vllm/renderers/hf.py
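To illustrate the idea rather than the exact implementation, the sketch below expands one original video UUID into per-chunk UUIDs after splitting; the "-chunk{i}" suffix scheme is an assumption made for the example:

```python
def expand_video_uuid(original_uuid: str, num_chunks: int) -> list[str]:
    # After chunking, each chunk needs its own identifier derived from the
    # original video UUID; the suffix scheme here is illustrative only.
    return [f"{original_uuid}-chunk{i}" for i in range(num_chunks)]

# expand_video_uuid("vid-abc", 3) -> ["vid-abc-chunk0", "vid-abc-chunk1", "vid-abc-chunk2"]
```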
replace_vision_chunk_video_placeholder ¶
replace_vision_chunk_video_placeholder(
prompt_raw: str | list[int],
mm_data: MultiModalDataDict,
video_placeholder: str | None,
) -> str | list[int]
Source code in vllm/renderers/hf.py
resolve_chat_template ¶
resolve_chat_template(
tokenizer: HfTokenizer,
chat_template: str | None,
tools: list[dict[str, Any]] | None,
*,
model_config: ModelConfig,
) -> str | None
Source code in vllm/renderers/hf.py
resolve_chat_template_content_format ¶
resolve_chat_template_content_format(
chat_template: str | None,
tools: list[dict[str, Any]] | None,
given_format: ChatTemplateContentFormatOption,
tokenizer: HfTokenizer,
*,
model_config: ModelConfig,
) -> ChatTemplateContentFormat
Source code in vllm/renderers/hf.py
resolve_chat_template_kwargs ¶
resolve_chat_template_kwargs(
tokenizer: HfTokenizer,
chat_template: str,
chat_template_kwargs: dict[str, Any],
raise_on_unexpected: bool = True,
) -> dict[str, Any]
Source code in vllm/renderers/hf.py
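The raise_on_unexpected flag suggests that chat_template_kwargs are checked against the parameters the template actually accepts. A hedged sketch of that kind of filtering, where the accepted-name set and the error type are assumptions:

```python
def filter_template_kwargs(
    kwargs: dict, accepted: set[str], raise_on_unexpected: bool = True
) -> dict:
    unexpected = set(kwargs) - accepted
    if unexpected and raise_on_unexpected:
        raise ValueError(f"Unexpected chat template kwargs: {sorted(unexpected)}")
    # When not raising, silently drop keys the template does not accept.
    return {k: v for k, v in kwargs.items() if k in accepted}
```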
safe_apply_chat_template ¶
safe_apply_chat_template(
model_config: ModelConfig,
tokenizer: HfTokenizer,
conversation: list[ConversationMessage],
*,
tools: list[dict[str, Any]] | None = None,
chat_template: str | None = None,
tokenize: bool = True,
**kwargs,
) -> str | list[int]
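The signature mirrors Hugging Face's tokenizer.apply_chat_template; a hedged sketch of what a wrapper like this typically adds (the exact error handling in vLLM is not shown here and is assumed):

```python
def apply_chat_template_safely(tokenizer, conversation, *, chat_template=None,
                               tokenize=True, tools=None, **kwargs):
    # Delegate to the Hugging Face tokenizer, surfacing template failures
    # with context instead of a bare Jinja traceback.
    try:
        return tokenizer.apply_chat_template(
            conversation,
            chat_template=chat_template,
            tokenize=tokenize,
            tools=tools,
            **kwargs,
        )
    except Exception as exc:
        raise ValueError(f"Failed to apply chat template: {exc}") from exc
```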