vllm.model_executor.layers.quantization.utils.nvfp4_utils ¶
NvFp4LinearBackend ¶
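Not documented on this page; from the surrounding functions, this appears to be the set of NVFP4 GEMM backends that select_nvfp4_linear_backend returns and that apply_nvfp4_linear dispatches on, with the prepare_weights_for_nvfp4_* helpers below corresponding to individual backends (CUTLASS, FBGEMM, FlashInfer-TRTLLM).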
apply_nvfp4_linear ¶
apply_nvfp4_linear(
backend: NvFp4LinearBackend,
layer: Module,
x: Tensor,
bias: Tensor | None = None,
) -> Tensor
Apply NVFP4 linear transformation using the specified backend.
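Taken together with select_nvfp4_linear_backend and convert_to_nvfp4_linear_kernel_format below, the intended call sequence appears to be: pick a backend once, repack the layer's weights for it, then apply. A minimal sketch under that assumption, where layer stands in for a module whose NVFP4 weights are already loaded:

import torch
from torch import nn

from vllm.model_executor.layers.quantization.utils.nvfp4_utils import (
    apply_nvfp4_linear,
    convert_to_nvfp4_linear_kernel_format,
    select_nvfp4_linear_backend,
)

def nvfp4_forward(
    layer: nn.Module,
    hidden_states: torch.Tensor,
    bias: torch.Tensor | None = None,
) -> torch.Tensor:
    # Select the best backend for this platform, repack the layer's
    # weights into that backend's kernel format (in vLLM this happens
    # once at weight-load time, not per forward), then run the GEMM.
    backend = select_nvfp4_linear_backend()
    convert_to_nvfp4_linear_kernel_format(backend, layer)
    return apply_nvfp4_linear(backend, layer, hidden_states, bias=bias)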
convert_to_nvfp4_linear_kernel_format ¶
convert_to_nvfp4_linear_kernel_format(
backend: NvFp4LinearBackend, layer: Module
) -> None
Convert layer to NVFP4 linear kernel format.
cutlass_fp4_supported ¶
cutlass_fp4_supported() -> bool
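There is no docstring here; from the name and return type this is a capability probe. A small sketch of how such a probe might gate kernel choice (the dispatch shown is illustrative, not vLLM's actual logic):

import torch

from vllm.model_executor.layers.quantization.utils.nvfp4_utils import (
    cutlass_fp4_supported,
)

def pick_fp4_path() -> str:
    # Prefer the CUTLASS FP4 GEMM when the current GPU/build supports it;
    # otherwise fall back to another backend (names illustrative).
    if torch.cuda.is_available() and cutlass_fp4_supported():
        return "cutlass"
    return "fallback"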
pad_nvfp4_activation_for_cutlass ¶
Pad packed FP4 activations to match the K-dimension padding applied to the weights. The padding amount is in bytes (the packed tensor's dimension), not FP4 elements, since two FP4 values are packed per byte.
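Because two FP4 values share a byte, 32 FP4 elements along K occupy 16 bytes, so the pad amount must be computed on the byte axis. A generic sketch of such padding (pad_packed_fp4_k and its target_k_bytes parameter are illustrative, not this module's actual signature):

import torch
import torch.nn.functional as F

def pad_packed_fp4_k(x_packed: torch.Tensor, target_k_bytes: int) -> torch.Tensor:
    # x_packed is a uint8 tensor [..., K_bytes] holding two FP4 values per
    # byte. Zero bytes decode to FP4 zeros, which leave the GEMM unchanged.
    pad = target_k_bytes - x_packed.shape[-1]
    if pad <= 0:
        return x_packed
    return F.pad(x_packed, (0, pad), value=0)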
pad_nvfp4_weight_for_cutlass ¶
Pad packed NVFP4 weights so that both N (rows) and K (columns) satisfy the alignment constraints of the CUTLASS / FlashInfer FP4 kernels, which require both matrix dimensions to be divisible by 32 for aligned memory access and efficient tensor core operations.
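The divisible-by-32 constraint makes the padded sizes simple round-ups; note that K is counted in FP4 elements, so on the packed byte axis the multiple is 16. A sketch of the arithmetic (the helper is illustrative, not this module's implementation):

import torch
import torch.nn.functional as F

def round_up(x: int, multiple: int) -> int:
    return (x + multiple - 1) // multiple * multiple

def pad_packed_fp4_weight(w_packed: torch.Tensor) -> torch.Tensor:
    # w_packed is a uint8 tensor [N, K_bytes], two FP4 elements per byte,
    # so 32 FP4 elements correspond to 16 bytes on the K axis.
    n, k_bytes = w_packed.shape
    n_pad = round_up(n, 32) - n
    k_pad = round_up(k_bytes, 16) - k_bytes
    # F.pad order: (K-left, K-right, N-top, N-bottom), zero bytes.
    return F.pad(w_packed, (0, k_pad, 0, n_pad), value=0)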
prepare_weights_for_nvfp4_cutlass ¶
prepare_weights_for_nvfp4_cutlass(
weight: Tensor, weight_scale: Tensor
) -> tuple[Tensor, Tensor, int]
Prepare weights and scales for CUTLASS/FlashInfer-CUTLASS FP4 GEMM. This involves padding the weights for alignment (K and N divisible by 32).
prepare_weights_for_nvfp4_fbgemm ¶
Prepare weights and scales for FBGEMM FP4 GEMM.
prepare_weights_for_nvfp4_flashinfer_trtllm ¶
prepare_weights_for_nvfp4_flashinfer_trtllm(
weight: Tensor, weight_scale: Tensor
) -> tuple[Tensor, Tensor]
Prepare weights and scales for FlashInfer TRTLLM FP4 GEMM.
select_nvfp4_linear_backend ¶
select_nvfp4_linear_backend() -> NvFp4LinearBackend
Select the best available NVFP4 GEMM backend based on environment configuration and platform capabilities.
slice_nvfp4_output ¶
Slice the output tensor to remove the padding in the N dimension if the weight was padded.
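A minimal illustration of the trim this performs, assuming the amount of valid output is known from the weight-preparation step (trim_padded_n is a hypothetical stand-in; slice_nvfp4_output's real signature is not shown on this page):

import torch

def trim_padded_n(out: torch.Tensor, orig_n: int) -> torch.Tensor:
    # The FP4 GEMM ran against a weight whose N was rounded up to a
    # multiple of 32, so the output's last dimension carries padding;
    # keep only the first orig_n columns.
    return out[..., :orig_n]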
swizzle_blockscale ¶
Pad and block-interleave the FP4 block-scales so that they match the data layout expected by the CUTLASS / FlashInfer kernels.
Parameters¶
scale: torch.Tensor
The FP4 block-scales to swizzle.
Returns¶
torch.Tensor
The swizzled tensor with the same logical shape as scale.
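As an illustration of "pad and block-interleave", here is a sketch using the (128, 4) scale-tile convention of TensorRT-LLM-style FP4 kernels; the tile shape and permutation are assumptions based on that convention, not a verbatim copy of this function:

import torch

def swizzle_blockscale_sketch(scale: torch.Tensor) -> torch.Tensor:
    # Assumed layout: rows padded to a multiple of 128, columns to a
    # multiple of 4, then each 128x4 tile stored as a contiguous
    # (32, 4, 4) block.
    assert scale.element_size() == 1, "block scales are expected to be 1 byte"
    m, k = scale.shape
    m_pad = (m + 127) // 128 * 128
    k_pad = (k + 3) // 4 * 4
    padded = torch.zeros((m_pad, k_pad), dtype=scale.dtype, device=scale.device)
    padded[:m, :k] = scale
    tiles = padded.view(m_pad // 128, 4, 32, k_pad // 4, 4)
    swizzled = tiles.permute(0, 3, 2, 1, 4).contiguous()
    return swizzled.view(m_pad, k_pad)  # same logical shape as the padded input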