vllm.multimodal.inputs ¶
AudioItem module-attribute ¶
Represents a single audio item, which can be passed to a HuggingFace AudioProcessor.
Alternatively, a tuple (audio, sampling_rate), where the sampling rate is different from that expected by the model; these are resampled to the model's sampling rate before being processed by HF.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as audio embeddings; these are directly passed to the model without HF processing.
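A minimal sketch of the three accepted forms (the shapes and variable names below are illustrative only):

import numpy as np
import torch

waveform = np.zeros(16_000, dtype=np.float32)    # plain waveform, assumed to already be at the model's sampling rate
audio_with_rate = (waveform, 44_100)             # (audio, sampling_rate) tuple; resampled before HF processing
audio_embeds = torch.zeros(1, 128, 4096)         # 3-D tensor treated as precomputed audio embeddings, bypassing HF processing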
BatchedTensorInputs module-attribute ¶
BatchedTensorInputs: TypeAlias = dict[str, NestedTensors]
A dictionary containing nested tensors which have been batched via MultiModalKwargsItems.get_data.
HfAudioItem module-attribute ¶
Represents a single audio item, which can be passed to a HuggingFace AudioProcessor.
HfImageItem module-attribute ¶
A transformers.image_utils.ImageInput representing a single image item, which can be passed to a HuggingFace ImageProcessor.
HfVideoItem module-attribute ¶
HfVideoItem: TypeAlias = Union[
list["Image"],
ndarray,
"torch.Tensor",
list[ndarray],
list["torch.Tensor"],
]
A transformers.image_utils.VideoInput representing a single video item, which can be passed to a HuggingFace VideoProcessor.
ImageItem module-attribute ¶
ImageItem: TypeAlias = Union[
HfImageItem, "torch.Tensor", MediaWithBytes[HfImageItem]
]
A transformers.image_utils.ImageInput representing a single image item, which can be passed to a HuggingFace ImageProcessor.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as image embeddings; these are directly passed to the model without HF processing.
ModalityData module-attribute ¶
Either a single data item, or a list of data items. Can only be None if a UUID is provided.
The number of data items allowed per modality is restricted by --limit-mm-per-prompt.
MultiModalDataDict module-attribute ¶
MultiModalDataDict: TypeAlias = Mapping[
str, ModalityData[Any]
]
A dictionary containing an entry for each modality type to input.
The built-in modalities are defined by MultiModalDataBuiltins.
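An illustrative MultiModalDataDict with one image and two audio clips (the data values are placeholders; which modality keys a model accepts depends on its processor):

import numpy as np
from PIL import Image

mm_data = {
    "image": Image.new("RGB", (336, 336)),                # a single item
    "audio": [
        (np.zeros(16_000, dtype=np.float32), 16_000),     # first audio item
        (np.zeros(8_000, dtype=np.float32), 16_000),      # second audio item
    ],
}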
MultiModalHashes module-attribute ¶
A dictionary containing per-item hashes for each modality.
MultiModalKwargsOptionalItems module-attribute ¶
MultiModalKwargsOptionalItems: TypeAlias = (
MultiModalKwargsItems[MultiModalKwargsItem]
| MultiModalKwargsItems[MultiModalKwargsItem | None]
)
MultiModalPlaceholderDict module-attribute ¶
MultiModalPlaceholderDict: TypeAlias = Mapping[
str, Sequence[PlaceholderRange]
]
A dictionary containing per-item placeholder ranges for each modality.
MultiModalUUIDDict module-attribute ¶
A dictionary containing user-provided UUIDs for items in each modality. If a UUID for an item is not provided, its entry will be None and MultiModalHasher will compute a hash for the item.
The UUID will be used to identify the item for all caching purposes (input processing caching, embedding caching, prefix caching, etc.).
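A hypothetical example, assuming each modality maps to one entry per item (the UUID strings are made up):

mm_uuids = {
    "image": ["sku-12345-front", None],  # second image has no UUID, so MultiModalHasher will hash it
    "audio": ["clip-7f3a"],
}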
NestedTensors module-attribute ¶
NestedTensors: TypeAlias = Union[
list["NestedTensors"],
list["torch.Tensor"],
"torch.Tensor",
tuple["torch.Tensor", ...],
]
Uses a list instead of a tensor if the dimensions of each element do not match.
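A small sketch of both cases (shapes are illustrative):

import torch

stacked = torch.zeros(2, 3, 336, 336)                     # uniform shapes: a single batched tensor suffices
ragged = [torch.zeros(120, 1024), torch.zeros(96, 1024)]  # mismatched dimensions: fall back to a list of tensors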
VideoItem module-attribute ¶
VideoItem: TypeAlias = Union[
HfVideoItem,
"torch.Tensor",
tuple[HfVideoItem, dict[str, Any]],
]
A transformers.video_utils.VideoInput representing a single video item. This can be passed to a HuggingFace VideoProcessor with transformers.video_utils.VideoMetadata.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as video embeddings; these are directly passed to the model without HF processing.
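A sketch of the (video, metadata) tuple form; the metadata keys shown are assumptions modeled on transformers.video_utils.VideoMetadata, not a guaranteed schema:

import numpy as np

frames = np.zeros((16, 224, 224, 3), dtype=np.uint8)          # 16 RGB frames
video_item = (frames, {"fps": 2.0, "total_num_frames": 16})   # passed to the HF video processor together with its metadata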
VisionChunk module-attribute ¶
VisionChunk = VisionChunkImage | VisionChunkVideo
A vision chunk is either an image or a video chunk.
_I module-attribute ¶
_I = TypeVar(
"_I",
MultiModalKwargsItem,
MultiModalKwargsItem | None,
default=MultiModalKwargsItem,
)
BaseMultiModalField dataclass ¶
Bases: ABC
Defines how to interpret tensor data belonging to a keyword argument for MultiModalKwargsItems, and vice versa.
Source code in vllm/multimodal/inputs.py
keep_on_cpu class-attribute instance-attribute ¶
keep_on_cpu: bool = False
If True, then this field is excluded from being moved to the accelerator when MultiModalKwargsItems.get_data() is called to batch the data.
_field_factory ¶
_reduce_data abstractmethod ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
build_elems abstractmethod ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
Construct MultiModalFieldElem instances to represent the provided data.
This is the inverse of reduce_data.
Source code in vllm/multimodal/inputs.py
reduce_data ¶
reduce_data(
elems: list[MultiModalFieldElem],
*,
device: Device = None,
pin_memory: bool = False,
) -> NestedTensors
Merge the data from multiple instances of MultiModalFieldElem.
This is the inverse of build_elems.
Source code in vllm/multimodal/inputs.py
MultiModalBatchedField dataclass ¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
_reduce_data ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
Source code in vllm/multimodal/inputs.py
build_elems ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
MultiModalDataBuiltins ¶
Bases: TypedDict
Type annotations for modality types predefined by vLLM.
Source code in vllm/multimodal/inputs.py
vision_chunk instance-attribute ¶
vision_chunk: ModalityData[VisionChunk]
The input visual atom(s): a unified modality for images and video chunks.
MultiModalEncDecInputs ¶
Bases: MultiModalInputs
Represents the outputs of EncDecMultiModalProcessor, ready to be passed to vLLM internals.
Source code in vllm/multimodal/inputs.py
MultiModalFeatureSpec dataclass ¶
Represents a single multimodal input with its processed data and metadata.
Used to track multimodal data through processing and caching. A request containing multiple multimodal items will have one MultiModalFeatureSpec per item.
Source code in vllm/multimodal/inputs.py
data instance-attribute ¶
data: MultiModalKwargsItem | None
Represents multimodal data for this feature.
Can be None if the item is cached, to skip IPC between API server and engine core processes.
identifier instance-attribute ¶
identifier: str
The hash for caching encoder outputs (with LoRA prefix if applicable).
mm_hash class-attribute instance-attribute ¶
mm_hash: str | None = None
The hash for caching processor outputs (without LoRA prefix).
mm_position instance-attribute ¶
mm_position: PlaceholderRange
The location of the modality tokens corresponding to this item in the prompt, e.g., PlaceholderRange(offset=2, length=336).
__init__ ¶
__init__(
data: MultiModalKwargsItem | None,
modality: str,
identifier: str,
mm_position: PlaceholderRange,
mm_hash: str | None = None,
) -> None
gather_kwargs staticmethod ¶
gather_kwargs(
features: list[MultiModalFeatureSpec], keys: set[str]
)
Source code in vllm/multimodal/inputs.py
MultiModalFieldConfig dataclass ¶
Source code in vllm/multimodal/inputs.py
batched staticmethod ¶
batched(modality: str, *, keep_on_cpu: bool = False)
Defines a field where an element in the batch is obtained by indexing into the first dimension of the underlying data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| keep_on_cpu | bool | Whether to keep this field on the CPU for the model inputs. | False |
Example:
Input:
Data: [[AAAA]
       [BBBB]
       [CCCC]]
Output:
Element 1: [AAAA]
Element 2: [BBBB]
Element 3: [CCCC]
Source code in vllm/multimodal/inputs.py
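A sketch of how batched fields might be declared, e.g. for the config_by_key argument of MultiModalKwargsItems.from_hf_inputs (the field names pixel_values and image_grid_thw are illustrative):

config_by_key = {
    "pixel_values": MultiModalFieldConfig.batched("image"),
    "image_grid_thw": MultiModalFieldConfig.batched("image"),
}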
build_elems ¶
build_elems(
key: str, batch: NestedTensors
) -> Sequence[MultiModalFieldElem]
flat staticmethod ¶
flat(
modality: str,
slices: Sequence[slice] | Sequence[Sequence[slice]],
dim: int = 0,
*,
keep_on_cpu: bool = False,
)
Defines a field where an element in the batch is obtained by slicing along the first dimension of the underlying data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| slices | Sequence[slice] \| Sequence[Sequence[slice]] | For each multi-modal item, a slice (dim=0) or a tuple of slices (dim>0) that is used to extract the data corresponding to it. | required |
| dim | int | The dimension along which to extract data; defaults to 0. | 0 |
| keep_on_cpu | bool | Whether to keep this field on the CPU for the model inputs. | False |
Example:
Given:
slices: [slice(0, 3), slice(3, 7), slice(7, 9)]
Input:
Data: [AAABBBBCC]
Output:
Element 1: [AAA]
Element 2: [BBBB]
Element 3: [CC]
Given:
slices: [
(slice(None), slice(0, 3)),
(slice(None), slice(3, 7)),
(slice(None), slice(7, 9))]
dim: 1
Input:
Data: [[A],[A],[A],[B],[B],[B],[B],[C],[C]]
Output:
Element 1: [[A],[A],[A]]
Element 2: [[B],[B],[B],[B]]
Element 3: [[C],[C]]
Source code in vllm/multimodal/inputs.py
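The slicing behavior above can be written out as a toy illustration (this is not the library's implementation):

import torch

data = torch.arange(9)                               # stands in for [AAABBBBCC]
slices = [slice(0, 3), slice(3, 7), slice(7, 9)]
elements = [data[s] for s in slices]                 # sizes 3, 4 and 2, matching the example
field_cfg = MultiModalFieldConfig.flat("image", slices)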
flat_from_sizes staticmethod ¶
flat_from_sizes(
modality: str,
size_per_item: Tensor,
dim: int = 0,
*,
keep_on_cpu: bool = False,
)
Defines a field where an element in the batch is obtained by slicing along the first dimension of the underlying data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| size_per_item | Tensor | For each multi-modal item, the size of the slice that is used to extract the data corresponding to it. | required |
| dim | int | The dimension along which to slice; defaults to 0. | 0 |
| keep_on_cpu | bool | Whether to keep this field on the CPU for the model inputs. | False |
Example:
Given:
size_per_item: [3, 4, 2]
Input:
Data: [AAABBBBCC]
Output:
Element 1: [AAA]
Element 2: [BBBB]
Element 3: [CC]
Given:
size_per_item: [3, 4, 2]
dim: 1
Input:
Data: [[A],[A],[A],[B],[B],[B],[B],[C],[C]]
Output:
Element 1: [[A],[A],[A]]
Element 2: [[B],[B],[B],[B]]
Element 3: [[C],[C]]
Source code in vllm/multimodal/inputs.py
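Read as a convenience over flat, the sizes are turned into consecutive slices along the given dimension (a sketch under that reading):

import torch

size_per_item = torch.tensor([3, 4, 2])
field_cfg = MultiModalFieldConfig.flat_from_sizes("image", size_per_item)
# roughly equivalent to:
# MultiModalFieldConfig.flat("image", [slice(0, 3), slice(3, 7), slice(7, 9)])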
shared staticmethod ¶
shared(modality: str, batch_size: int, *, keep_on_cpu: bool = False)
Defines a field where an element in the batch is obtained by taking the entirety of the underlying data.
This means that the data is the same for each element in the batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| batch_size | int | The number of multi-modal items which share this data. | required |
| keep_on_cpu | bool | Whether to keep this field on the CPU for the model inputs. | False |
Example:
Given:
batch_size: 4
Input:
Data: [XYZ]
Output:
Element 1: [XYZ]
Element 2: [XYZ]
Element 3: [XYZ]
Element 4: [XYZ]
Source code in vllm/multimodal/inputs.py
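A sketch of declaring a shared field (the field name image_sizes is illustrative):

config_by_key = {
    "image_sizes": MultiModalFieldConfig.shared("image", batch_size=4),
}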
MultiModalFieldElem dataclass ¶
Represents a processed keyword argument to pass to a model for a MultiModalKwargsItem.
Source code in vllm/multimodal/inputs.py
data instance-attribute ¶
data: NestedTensors
The tensor data of this field in MultiModalKwargsItem, i.e. the value of the keyword argument to be passed to the model.
It may be set to None if it is determined that the item is cached in EngineCore.
field instance-attribute ¶
field: BaseMultiModalField
Defines how to combine the tensor data of this field with others in order to batch multi-modal items together for model inference.
__eq__ ¶
Source code in vllm/multimodal/inputs.py
MultiModalFlatField dataclass ¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
__init__ ¶
__init__(
*,
keep_on_cpu: bool = False,
slices: Sequence[slice] | Sequence[Sequence[slice]],
dim: int = 0,
) -> None
_reduce_data ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
Source code in vllm/multimodal/inputs.py
build_elems ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
Source code in vllm/multimodal/inputs.py
MultiModalInputs ¶
Bases: TypedDict
Represents the outputs of BaseMultiModalProcessor, ready to be passed to vLLM internals.
Source code in vllm/multimodal/inputs.py
cache_salt instance-attribute ¶
cache_salt: NotRequired[str]
Optional cache salt to be used for prefix caching.
mm_kwargs instance-attribute ¶
mm_kwargs: MultiModalKwargsOptionalItems
Keyword arguments to be directly passed to the model after batching.
mm_placeholders instance-attribute ¶
mm_placeholders: MultiModalPlaceholderDict
For each modality, information about the placeholder tokens in prompt_token_ids.
prompt_token_ids instance-attribute ¶
The processed token IDs, which include placeholder tokens.
MultiModalKwargsItem ¶
Bases: UserDict[str, MultiModalFieldElem]
A dictionary of processed keyword arguments to pass to the model, corresponding to a single item in MultiModalDataItems.
Source code in vllm/multimodal/inputs.py
MultiModalKwargsItems ¶
Bases: UserDict[str, Sequence[_I]]
A dictionary of processed multi-modal inputs by modality.
For example, given a processor that processes images into pixel_values and image_grid_thw, and audios into input_audio_features, a prompt with 2 images and 1 audio will be processed into a MultiModalKwargsItems with the following structure:
MultiModalKwargsItems(
{
"image": [
# For the first image
MultiModalKwargsItem({"pixel_values": ..., "image_grid_thw": ...}),
# For the second image
MultiModalKwargsItem({"pixel_values": ..., "image_grid_thw": ...}),
],
"audio": [
# For the first audio
MultiModalKwargsItem({"input_audio_features": ...}),
],
}
)
Unlike HF processing, which returns all items in a single dictionary with batched keyword arguments, we split up the items because some of them may already be cached. Also, items from multiple requests may be batched together to improve throughput, using the logic defined by the BaseMultiModalField for each keyword argument.
Source code in vllm/multimodal/inputs.py
__getitem__ ¶
Source code in vllm/multimodal/inputs.py
from_hf_inputs staticmethod ¶
from_hf_inputs(
hf_inputs: BatchFeature,
config_by_key: Mapping[str, MultiModalFieldConfig],
)
Source code in vllm/multimodal/inputs.py
get_data ¶
get_data(
*, device: Device = None, pin_memory: bool = False
) -> BatchedTensorInputs
Construct a dictionary of keyword arguments to pass to the model.
Source code in vllm/multimodal/inputs.py
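A usage sketch, where items is assumed to be a MultiModalKwargsItems produced by the processor:

batched_inputs = items.get_data(pin_memory=True)
# -> BatchedTensorInputs, e.g. {"pixel_values": ..., "image_grid_thw": ..., "input_audio_features": ...}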
require_data ¶
require_data() -> MultiModalKwargsItems[
MultiModalKwargsItem
]
Source code in vllm/multimodal/inputs.py
MultiModalSharedField dataclass ¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
_reduce_data ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
build_elems ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
PlaceholderRange dataclass ¶
Placeholder location information for multi-modal data.
Example:
Prompt: AAAA BBBB What is in these images?
Images A and B will have:

A: PlaceholderRange(offset=0, length=4)
B: PlaceholderRange(offset=5, length=4)
Source code in vllm/multimodal/inputs.py
is_embed class-attribute instance-attribute ¶
is_embed: Tensor | None = None
A boolean mask of shape (length,) indicating which positions between offset and offset + length to assign embeddings to.
__eq__ ¶
Source code in vllm/multimodal/inputs.py
extract_embeds_range ¶
Extract the start and end indices of the embedded regions in the prompt.
For example, given PlaceholderRange(offset=2, length=5) and is_embed = [False, True, False, True, True], the output is [(1 + offset, 1 + offset), (3 + offset, 4 + offset)].
Returns:
| Type | Description |
|---|---|
| list[tuple[int, int]] | A list of (start, end) tuples giving the start and end indices (inclusive) of each embedded region. Returns the full placeholder range if is_embed is None. |
Source code in vllm/multimodal/inputs.py
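The worked example above, written out as a sketch (assumes is_embed is given as a boolean tensor, as documented for the class):

import torch

pr = PlaceholderRange(
    offset=2,
    length=5,
    is_embed=torch.tensor([False, True, False, True, True]),
)
pr.extract_embeds_range()  # [(3, 3), (5, 6)] once offset=2 is added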
get_embeds_indices_in_range ¶
Returns the starting and ending indices of the encoder output embeddings that correspond to the placeholder positions in the range [start_idx, end_idx).
For example, given: PlaceholderRange(offset=2, length=5, is_embed=[False, True, False, True, True])
If start_idx=3 and end_idx=5, the output is (1, 3) because we want to get the second and the third embeddings from the encoder output.
Source code in vllm/multimodal/inputs.py
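The same placeholder, used with the example above (a sketch; argument names follow the docstring):

import torch

pr = PlaceholderRange(
    offset=2,
    length=5,
    is_embed=torch.tensor([False, True, False, True, True]),
)
pr.get_embeds_indices_in_range(start_idx=3, end_idx=5)  # (1, 3): the second and third encoder-output embeddings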
VisionChunkImage ¶
Bases: TypedDict
Represents an image wrapped as a vision chunk.
Source code in vllm/multimodal/inputs.py
VisionChunkVideo ¶
_nested_tensors_h2d ¶
_nested_tensors_h2d(
tensors: NestedTensors, device: Device
) -> NestedTensors
Source code in vllm/multimodal/inputs.py
batched_tensors_equal ¶
batched_tensors_equal(
a: BatchedTensorInputs, b: BatchedTensorInputs
) -> bool
Equality check between BatchedTensorInputs objects.
Source code in vllm/multimodal/inputs.py
nested_tensors_equal ¶
nested_tensors_equal(
a: NestedTensors, b: NestedTensors
) -> bool
Equality check between NestedTensors objects.