vllm.multimodal ¶
Modules:
| Name | Description |
|---|---|
audio | |
budget | |
cache | |
evs | |
hasher | |
image | |
inputs | |
media | |
parse | |
processing | |
registry | |
utils | |
video | |
BatchedTensorInputs module-attribute ¶
BatchedTensorInputs: TypeAlias = dict[str, NestedTensors]
A dictionary containing nested tensors which have been batched via MultiModalKwargsItems.get_data.
MULTIMODAL_REGISTRY module-attribute ¶
MULTIMODAL_REGISTRY = MultiModalRegistry()
The global MultiModalRegistry is used by model runners to dispatch data processing according to the target model.
ModalityData module-attribute ¶
Either a single data item, or a list of data items. Can only be None if a UUID is provided.
Info
The number of data items allowed per modality is restricted by --limit-mm-per-prompt.
MultiModalDataDict module-attribute ¶
MultiModalDataDict: TypeAlias = Mapping[
str, ModalityData[Any]
]
A dictionary containing an entry for each modality type to input.
The built-in modalities are defined by MultiModalDataBuiltins.
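For illustration, a minimal sketch of assembling such a dictionary; the file names are hypothetical:
from PIL import Image

# Each value is a ModalityData: either a single item or a list of items.
# The number of items per modality is capped by --limit-mm-per-prompt.
# The file names below are hypothetical.
mm_data_single = {"image": Image.open("cat.jpg")}
mm_data_multi = {"image": [Image.open("cat.jpg"), Image.open("dog.jpg")]}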
MultiModalPlaceholderDict module-attribute ¶
MultiModalPlaceholderDict: TypeAlias = Mapping[
str, Sequence[PlaceholderRange]
]
A dictionary containing per-item placeholder ranges for each modality.
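A sketch of the expected shape, assuming PlaceholderRange carries an offset and length into the token sequence (the values below are made up):
from vllm.multimodal.inputs import PlaceholderRange

# Two image placeholders in the prompt: the first starts at token 5,
# the second at token 600, each spanning 576 tokens (illustrative only).
placeholders = {
    "image": [
        PlaceholderRange(offset=5, length=576),
        PlaceholderRange(offset=600, length=576),
    ],
}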
MultiModalUUIDDict module-attribute ¶
A dictionary containing user-provided UUIDs for items in each modality. If a UUID for an item is not provided, its entry will be None and MultiModalHasher will compute a hash for the item.
The UUID will be used to identify the item for all caching purposes (input processing caching, embedding caching, prefix caching, etc.).
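A sketch of the expected shape; the UUID strings are arbitrary placeholders:
# One entry per item, aligned with the items in MultiModalDataDict.
# The second image has no user-provided UUID, so its entry is None and
# MultiModalHasher will compute a hash for it instead.
mm_uuids = {
    "image": ["image-uuid-1", None],
    "audio": ["audio-uuid-1"],
}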
NestedTensors module-attribute ¶
NestedTensors: TypeAlias = Union[
list["NestedTensors"],
list["torch.Tensor"],
"torch.Tensor",
tuple["torch.Tensor", ...],
]
Uses a list instead of a tensor if the dimensions of each element do not match.
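A minimal illustration of when the list form is needed:
import torch

from vllm.multimodal import NestedTensors

# Uniform shapes can be stacked into a single tensor.
batched: NestedTensors = torch.stack(
    [torch.zeros(3, 224, 224), torch.zeros(3, 224, 224)]
)

# Mismatched shapes (e.g. images of different sizes) fall back to a list.
ragged: NestedTensors = [torch.zeros(3, 224, 224), torch.zeros(3, 448, 448)]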
__all__ module-attribute ¶
__all__ = [
"BatchedTensorInputs",
"ModalityData",
"MultiModalDataBuiltins",
"MultiModalDataDict",
"MultiModalHasher",
"MultiModalKwargsItems",
"MultiModalPlaceholderDict",
"MultiModalUUIDDict",
"NestedTensors",
"MULTIMODAL_REGISTRY",
"MultiModalRegistry",
]
MultiModalDataBuiltins ¶
Bases: TypedDict
Type annotations for modality types predefined by vLLM.
Source code in vllm/multimodal/inputs.py
vision_chunk instance-attribute ¶
vision_chunk: ModalityData[VisionChunk]
The input visual atom(s): a unified modality for images and video chunks.
MultiModalHasher ¶
Source code in vllm/multimodal/hasher.py
hash_kwargs classmethod ¶
Source code in vllm/multimodal/hasher.py
iter_item_to_bytes classmethod ¶
iter_item_to_bytes(
key: str, obj: object
) -> Iterable[bytes | memoryview]
Source code in vllm/multimodal/hasher.py
serialize_item classmethod ¶
serialize_item(obj: object) -> Iterable[bytes | memoryview]
Source code in vllm/multimodal/hasher.py
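A usage sketch, assuming hash_kwargs accepts arbitrary keyword arguments and returns a digest string (its signature is not shown above); the items are presumably flattened to bytes via iter_item_to_bytes and serialize_item before hashing:
import torch

from vllm.multimodal import MultiModalHasher

image = torch.zeros(3, 224, 224)  # stand-in for real image data

# Assumed usage: the keyword arguments are serialized to bytes
# (via iter_item_to_bytes/serialize_item) and hashed into a stable digest.
digest = MultiModalHasher.hash_kwargs(model_id="my-model", image=image)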
MultiModalKwargsItems ¶
Bases: UserDict[str, Sequence[_I]]
A dictionary of processed multi-modal inputs by modality.
For example, given a processor that processes images into pixel_values and image_grid_thw, and audio into input_audio_features, a prompt with 2 images and 1 audio clip will be processed into a MultiModalKwargsItems with the following structure:
MultiModalKwargsItems(
{
"image": [
# For the first image
MultiModalKwargsItem({"pixel_values": ..., "image_grid_thw": ...}),
        # For the second image
MultiModalKwargsItem({"pixel_values": ..., "image_grid_thw": ...}),
],
"audio": [
# For the first audio
MultiModalKwargsItem({"input_audio_features": ...}),
],
}
)
Unlike HF processing, which returns all items in a single dictionary with batched keyword arguments, we split up the items because some of them may already be cached. Also, items from multiple requests may be batched together to improve throughput, using the logic defined by the BaseMultiModalField for each keyword argument.
Source code in vllm/multimodal/inputs.py
__getitem__ ¶
Source code in vllm/multimodal/inputs.py
from_hf_inputs staticmethod ¶
from_hf_inputs(
hf_inputs: BatchFeature,
config_by_key: Mapping[str, MultiModalFieldConfig],
)
Source code in vllm/multimodal/inputs.py
get_data ¶
get_data(
*, device: Device = None, pin_memory: bool = False
) -> BatchedTensorInputs
Construct a dictionary of keyword arguments to pass to the model.
Source code in vllm/multimodal/inputs.py
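A usage sketch, where mm_items is assumed to be a populated MultiModalKwargsItems from a prior processing step:
# Batch all per-item tensors into model keyword arguments, optionally
# moving them to a device with pinned host memory (mm_items is assumed
# to be a populated MultiModalKwargsItems, as in the structure above).
batched = mm_items.get_data(device="cuda:0", pin_memory=True)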
require_data ¶
require_data() -> MultiModalKwargsItems[
MultiModalKwargsItem
]
Source code in vllm/multimodal/inputs.py
MultiModalRegistry ¶
A registry that dispatches data processing according to the model.
Source code in vllm/multimodal/registry.py
_create_processing_ctx ¶
_create_processing_ctx(
model_config: ModelConfig,
observability_config: ObservabilityConfig | None = None,
tokenizer: TokenizerLike | None = None,
) -> InputProcessingContext
Source code in vllm/multimodal/registry.py
_create_processing_info ¶
_create_processing_info(
model_config: ModelConfig,
observability_config: ObservabilityConfig | None = None,
*,
tokenizer: TokenizerLike | None = None,
) -> BaseProcessingInfo
Source code in vllm/multimodal/registry.py
_extract_mm_options ¶
_extract_mm_options(
model_config: ModelConfig,
) -> Mapping[str, BaseDummyOptions] | None
Extract multimodal dummy options from model config.
Returns None if no configurable options are found, otherwise returns a mapping of modality names to their dummy options.
Source code in vllm/multimodal/registry.py
_get_cache_type ¶
_get_cache_type(
vllm_config: VllmConfig,
) -> Literal[None, "processor_only", "lru", "shm"]
Source code in vllm/multimodal/registry.py
_get_model_cls ¶
_get_model_cls(
model_config: ModelConfig,
) -> SupportsMultiModal
Source code in vllm/multimodal/registry.py
create_processor ¶
create_processor(
model_config: ModelConfig,
observability_config: ObservabilityConfig | None = None,
*,
tokenizer: TokenizerLike | None = None,
cache: BaseMultiModalProcessorCache | None = None,
) -> BaseMultiModalProcessor[BaseProcessingInfo]
Create a multi-modal processor for a specific model and tokenizer.
Source code in vllm/multimodal/registry.py
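A usage sketch; model_config is assumed to be a ModelConfig already built for a multi-modal model:
from vllm.multimodal import MULTIMODAL_REGISTRY

# Dispatches to the processor registered for the model architecture.
# model_config is assumed to be an existing ModelConfig instance.
processor = MULTIMODAL_REGISTRY.create_processor(model_config)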
engine_receiver_cache_from_config ¶
engine_receiver_cache_from_config(
vllm_config: VllmConfig,
) -> BaseMultiModalReceiverCache | None
Return a BaseMultiModalReceiverCache for the engine process.
Source code in vllm/multimodal/registry.py
get_dummy_mm_inputs ¶
get_dummy_mm_inputs(
model_config: ModelConfig,
mm_counts: Mapping[str, int],
*,
cache: BaseMultiModalProcessorCache | None = None,
processor: BaseMultiModalProcessor | None = None,
) -> MultiModalInputs
Create dummy data for profiling the memory usage of a model.
The model is identified by model_config.
Source code in vllm/multimodal/registry.py
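A usage sketch, reusing the model_config assumption from above; the counts are illustrative:
from vllm.multimodal import MULTIMODAL_REGISTRY

# Dummy inputs for memory profiling: at most 2 images and 1 audio item
# per prompt (counts are illustrative).
dummy = MULTIMODAL_REGISTRY.get_dummy_mm_inputs(
    model_config,
    {"image": 2, "audio": 1},
)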
processor_cache_from_config ¶
processor_cache_from_config(
vllm_config: VllmConfig,
) -> BaseMultiModalProcessorCache | None
Return a BaseMultiModalProcessorCache, if enabled.
Source code in vllm/multimodal/registry.py
processor_only_cache_from_config ¶
processor_only_cache_from_config(
vllm_config: VllmConfig,
) -> MultiModalProcessorOnlyCache | None
Return a MultiModalProcessorOnlyCache, if enabled.
Source code in vllm/multimodal/registry.py
register_processor ¶
register_processor(
processor: MultiModalProcessorFactory[_I],
*,
info: ProcessingInfoFactory[_I],
dummy_inputs: DummyInputsBuilderFactory[_I],
)
Register a multi-modal processor to a model class. The processor is constructed lazily, hence a factory method should be passed.
When the model receives multi-modal data, the provided function is invoked to transform the data into a dictionary of model inputs.
Source code in vllm/multimodal/registry.py
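A decorator-style sketch; the My* classes are hypothetical factories matching the MultiModalProcessorFactory, ProcessingInfoFactory, and DummyInputsBuilderFactory types:
from torch import nn

from vllm.multimodal import MULTIMODAL_REGISTRY

# MyMultiModalProcessor, MyProcessingInfo, and MyDummyInputsBuilder are
# hypothetical; only their factory types are prescribed by the API.
@MULTIMODAL_REGISTRY.register_processor(
    MyMultiModalProcessor,
    info=MyProcessingInfo,
    dummy_inputs=MyDummyInputsBuilder,
)
class MyModelForConditionalGeneration(nn.Module):
    ...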
supports_multimodal_inputs ¶
supports_multimodal_inputs(
model_config: ModelConfig,
) -> bool
Checks whether the model supports multimodal inputs. Returns True if the model is multimodal and supports at least one modality with a non-zero limit; otherwise returns False, in which case the model effectively runs in text-only mode.
Source code in vllm/multimodal/registry.py
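A usage sketch, again assuming an existing model_config:
from vllm.multimodal import MULTIMODAL_REGISTRY

# Falls back to text-only handling when the model exposes no modalities.
if MULTIMODAL_REGISTRY.supports_multimodal_inputs(model_config):
    ...  # set up multi-modal processing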
worker_receiver_cache_from_config ¶
worker_receiver_cache_from_config(
vllm_config: VllmConfig, shared_worker_lock: Lock
) -> BaseMultiModalReceiverCache | None
Return a BaseMultiModalReceiverCache for the worker process.