vllm.model_executor.layers.fused_moe.xpu_fused_moe ¶
XPUExperts ¶
Bases: FusedMoEPermuteExpertsUnpermute
Source code in vllm/model_executor/layers/fused_moe/xpu_fused_moe.py, lines 22-120
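Conceptually, this class implements the expert-computation stage of vLLM's modular fused-MoE pipeline: given routed tokens, it runs each token through its top-k experts and accumulates the weighted results. The dense PyTorch sketch below illustrates that computation under common assumptions (w1 holding fused gate/up projections of shape [E, 2N, K], w2 holding down projections of shape [E, K, N], and SiLU gating); the actual XPU kernel is fused, writes in place, and handles quantization scales, so this is a reference for semantics only.

```python
# Dense reference for the computation an experts stage performs.
# Weight layout (w1: [E, 2N, K] gate/up, w2: [E, K, N] down) and SiLU
# gating are assumptions; this is not vLLM's XPU implementation.
import torch
import torch.nn.functional as F

def reference_experts(hidden_states, w1, w2, topk_weights, topk_ids):
    M, K = hidden_states.shape
    topk = topk_ids.shape[1]
    out = torch.zeros(M, K, dtype=hidden_states.dtype)
    for t in range(M):
        for j in range(topk):
            e = int(topk_ids[t, j])
            gate, up = (w1[e] @ hidden_states[t]).chunk(2)  # [N] each
            h = F.silu(gate) * up                           # SwiGLU-style MLP
            out[t] += topk_weights[t, j] * (w2[e] @ h)      # down-proj, [K]
    return out

# Tiny smoke test with random data.
M, K, N, E, topk = 4, 8, 16, 3, 2
x = torch.randn(M, K)
w1, w2 = torch.randn(E, 2 * N, K), torch.randn(E, K, N)
ids = torch.randint(0, E, (M, topk))
wts = torch.softmax(torch.randn(M, topk), dim=-1)
print(reference_experts(x, w1, w2, wts, ids).shape)  # torch.Size([4, 8])
```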
_supports_activation staticmethod ¶
_supports_parallel_config staticmethod ¶
_supports_parallel_config(
moe_parallel_config: FusedMoEParallelConfig,
) -> bool
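A hedged usage sketch: a caller can use this predicate to decide whether the XPU path handles a given parallel layout before selecting it. The pick_experts function below is hypothetical; only the _supports_parallel_config call itself follows the documented signature.

```python
# Hedged sketch: gate kernel selection on parallel-config support.
# pick_experts() is hypothetical; only the _supports_parallel_config
# call follows the documented signature.
from vllm.model_executor.layers.fused_moe.xpu_fused_moe import XPUExperts

def pick_experts(moe_parallel_config):
    if XPUExperts._supports_parallel_config(moe_parallel_config):
        return XPUExperts  # XPU path can serve this parallel layout
    raise NotImplementedError("fall back to another experts implementation")
```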
_supports_quant_scheme staticmethod ¶
Source code in vllm/model_executor/layers/fused_moe/xpu_fused_moe.py
activation_format staticmethod ¶
activation_format() -> FusedMoEActivationFormat
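A minimal illustration of querying the advertised activation format, which the modular pipeline uses to match this experts stage with a compatible prepare/finalize stage; the exact enum member returned is not shown on this page.

```python
from vllm.model_executor.layers.fused_moe.xpu_fused_moe import XPUExperts

fmt = XPUExperts.activation_format()
print(fmt)  # a FusedMoEActivationFormat member; exact value not shown here
```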
apply ¶
apply(
output: Tensor,
hidden_states: Tensor,
w1: Tensor,
w2: Tensor,
topk_weights: Tensor,
topk_ids: Tensor,
activation: str,
global_num_experts: int,
expert_map: Tensor | None,
a1q_scale: Tensor | None,
a2_scale: Tensor | None,
workspace13: Tensor,
workspace2: Tensor,
expert_tokens_meta: ExpertTokensMetadata | None,
apply_router_weight_on_input: bool,
)
Source code in vllm/model_executor/layers/fused_moe/xpu_fused_moe.py
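A hedged call sketch using only the parameters documented above. Note that apply() writes its result into the preallocated output tensor rather than returning a new one; workspace13 and workspace2 are scratch buffers sized by workspace_shapes() (documented below, with an allocation sketch after its signature). The "silu" string and the None placeholders are assumed example values, not documented defaults.

```python
# Hedged call sketch for apply(), using only the documented parameters.
# apply() fills the preallocated `output` tensor in place. "silu" and the
# None placeholders are assumed example values, not documented defaults.
def call_apply(experts, output, hidden_states, w1, w2, topk_weights,
               topk_ids, workspace13, workspace2, global_num_experts):
    experts.apply(
        output, hidden_states, w1, w2, topk_weights, topk_ids,
        activation="silu",               # assumed example activation name
        global_num_experts=global_num_experts,
        expert_map=None,                 # no expert-parallel remapping
        a1q_scale=None, a2_scale=None,   # unquantized activations
        workspace13=workspace13, workspace2=workspace2,
        expert_tokens_meta=None,
        apply_router_weight_on_input=False)
    return output
```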
finalize_weight_and_reduce_impl ¶
finalize_weight_and_reduce_impl() -> TopKWeightAndReduce
workspace_shapes ¶
workspace_shapes(
M: int,
N: int,
K: int,
topk: int,
global_num_experts: int,
local_num_experts: int,
expert_tokens_meta: ExpertTokensMetadata | None,
activation: str,
) -> tuple[
tuple[int, ...], tuple[int, ...], tuple[int, ...]
]
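A hedged sketch of sizing buffers from the three returned shape tuples. Interpreting them as the shapes of workspace13, workspace2, and the final output is an assumption drawn from apply()'s parameter names.

```python
# Hedged sketch: sizing scratch buffers from workspace_shapes(). Reading
# the three tuples as (workspace13, workspace2, output) shapes is an
# assumption drawn from apply()'s parameter names.
import torch

def allocate_workspaces(experts, M, N, K, topk, global_num_experts,
                        local_num_experts, device, dtype,
                        activation="silu"):
    ws13, ws2, out = experts.workspace_shapes(
        M, N, K, topk, global_num_experts, local_num_experts,
        expert_tokens_meta=None, activation=activation)
    return (torch.empty(ws13, device=device, dtype=dtype),
            torch.empty(ws2, device=device, dtype=dtype),
            torch.empty(out, device=device, dtype=dtype))
```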