VEXPANDPD - Expand Packed Double

VEXPANDPD xmm1{k1}{z}, xmm2/m128    (V5+VL)
__m128d _mm_mask_expand_pd(__m128d s, __mmask8 k, __m128d a)
__m128d _mm_maskz_expand_pd(__mmask8 k, __m128d a)
__m128d _mm_mask_expandloadu_pd(__m128d s, __mmask8 k, void* p)
__m128d _mm_maskz_expandloadu_pd(__mmask8 k, void* p)

For each set bit of (2), the corresponding element of (1) is filled from (3), whose elements are taken one by one starting from the lowest.
If the corresponding bit of (2) is not set, the element of (1) is:
  zeroed if {z} is specified (when the _maskz_ intrinsic is used)
  left unchanged if {z} is not specified (copied from s when the _mask_ intrinsic is used)
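For example, a minimal sketch of the zeroing (maskz) form; the mask and input values here are chosen purely for illustration, and the code assumes a compiler with AVX-512F and AVX-512VL enabled (e.g. -mavx512f -mavx512vl on GCC/Clang):

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128d a = _mm_set_pd(2.0, 1.0);   /* a = {1.0, 2.0}; element 0 is the lowest */
    /* Mask 0b10: only element 1 of the result is written, so it receives
       the lowest source element (1.0); element 0 is zeroed by the {z} form. */
    __m128d r = _mm_maskz_expand_pd(0x2, a);
    double out[2];
    _mm_storeu_pd(out, r);
    printf("%g %g\n", out[0], out[1]);  /* prints: 0 1 */
    return 0;
}
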
VEXPANDPD ymm1{k1}{z}, ymm2/m256    (V5+VL)
__m256d _mm256_mask_expand_pd(__m256d s, __mmask8 k, __m256d a)
__m256d _mm256_maskz_expand_pd(__mmask8 k, __m256d a)
__m256d _mm256_mask_expandloadu_pd(__m256d s, __mmask8 k, void* p)
__m256d _mm256_maskz_expandloadu_pd(__mmask8 k, void* p)

For each set bit of (2), the corresponding element of (1) is filled from (3), whose elements are taken one by one starting from the lowest.
If the corresponding bit of (2) is not set, the element of (1) is:
  zeroed if {z} is specified (when the _maskz_ intrinsic is used)
  left unchanged if {z} is not specified (copied from s when the _mask_ intrinsic is used)
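Below is a sketch of the merging (mask) form at 256 bits; the mask and values are again illustrative only, with the same AVX-512F+VL build assumptions as above:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256d s = _mm256_set1_pd(-1.0);               /* merge source */
    __m256d a = _mm256_set_pd(4.0, 3.0, 2.0, 1.0);  /* a = {1.0, 2.0, 3.0, 4.0} */
    /* Mask 0b0101: result elements 0 and 2 take the two lowest source
       elements (1.0 and 2.0); elements 1 and 3 are copied from s. */
    __m256d r = _mm256_mask_expand_pd(s, 0x5, a);
    double out[4];
    _mm256_storeu_pd(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* 1 -1 2 -1 */
    return 0;
}
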
VEXPANDPD zmm1{k1}{z}, zmm2/m512    (V5)
__m512d _mm512_mask_expand_pd(__m512d s, __mmask8 k, __m512d a)
__m512d _mm512_maskz_expand_pd(__mmask8 k, __m512d a)
__m512d _mm512_mask_expandloadu_pd(__m512d s, __mmask8 k, void* p)
__m512d _mm512_maskz_expandloadu_pd(__mmask8 k, void* p)

For each set bit of (2), the corresponding element of (1) is filled from (3), whose elements are taken one by one starting from the lowest.
If the corresponding bit of (2) is not set, the element of (1) is:
  zeroed if {z} is specified (when the _maskz_ intrinsic is used)
  left unchanged if {z} is not specified (copied from s when the _mask_ intrinsic is used)
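As a final sketch, the memory form: VEXPANDPD reads only as many contiguous elements as there are set mask bits, which is what makes it useful for unpacking sparse data. The mask and data below are illustrative:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* Only three doubles are read, because the mask has three set bits;
       memory for elements whose mask bit is 0 is not accessed (the
       instruction supports memory fault suppression). */
    double mem[3] = { 10.0, 20.0, 30.0 };
    /* Mask 0x92 = 0b10010010: result elements 1, 4 and 7 receive
       mem[0], mem[1] and mem[2]; the remaining elements are zeroed. */
    __m512d r = _mm512_maskz_expandloadu_pd(0x92, mem);
    double out[8];
    _mm512_storeu_pd(out, r);
    for (int i = 0; i < 8; i++) printf("%g ", out[i]);
    printf("\n");  /* prints: 0 10 0 0 20 0 0 30 */
    return 0;
}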
