Basic example of using the CUTLASS Python interface#

This notebook walks through a basic example of using the CUTLASS Python interface to declare, compile, and run GEMMs.

We first import various packages needed for the example and construct the input and output tensors that will be used in our example.

[1]:
import numpy as np
import random

import cutlass

# This controls whether the C++ GEMM declaration will be printed at each step. Set to `False` to
# omit this information.
print_module = True

m = 128
n = m
k = m

dtype = np.float16
type_A = np.float16
type_B = np.float16
type_C = np.float16
type_D = np.float16

np.random.seed(1234)
random.seed(1234)
scope_min = -4
scope_max = 4
tensor_A = np.ceil(np.random.uniform(low=scope_min, high=scope_max, size=(m, k)).astype(type_A))
tensor_B = np.ceil(np.random.uniform(low=scope_min, high=scope_max, size=(k, n)).astype(type_B))
tensor_C = np.ceil(np.random.uniform(low=scope_min, high=scope_max, size=(m, n)).astype(type_C))

alpha = np.float16(1.)
beta = np.float16(0.)

tensor_D = np.zeros(tensor_C.shape).astype(type_D)

Declaring and running a GEMM#

To get started, one only needs to describe the operands declared above (either by their data types and layouts, or by the tensors themselves) in the call to cutlass.op.Gemm. This sets up a default GEMM operation for the device on which you are running.

Assuming that we are running on SM80, this defaults to a GEMM that leverages FP16 Tensor Core operations.

Calling plan.run() will generate the CUTLASS C++ kernel in question, compile it, and run it on the tensors we previously passed in. By setting print_module to True, the emitted C++ code is printed.

[2]:
# We specify `element_accumulator` here so as to match the kernel run by NumPy below. However,
# specifying `element_accumulator` is not required if it is the same as `element`
plan = cutlass.Gemm(element=dtype, layout=cutlass.LayoutType.RowMajor, element_accumulator=np.float32)
plan.run(tensor_A, tensor_B, tensor_C, tensor_D, print_module=print_module)

// Gemm operator cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_256x128_64x3_tt_align8
using cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_256x128_64x3_tt_align8_base =
  typename cutlass::gemm::kernel::DefaultGemmUniversal<
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 8,
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 8,
    cutlass::half_t, cutlass::layout::RowMajor,
    float,
    cutlass::arch::OpClassTensorOp,
    cutlass::arch::Sm80,
    cutlass::gemm::GemmShape<256, 128, 64>,
    cutlass::gemm::GemmShape<64, 64, 64>,
    cutlass::gemm::GemmShape<16, 8, 16>,
    cutlass::epilogue::thread::LinearCombination<cutlass::half_t, 8, float, float>,
    cutlass::gemm::threadblock::GemmIdentityThreadblockSwizzle<1>,
    3,
    cutlass::arch::OpMultiplyAdd
>::GemmKernel;

// Define named type
struct cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_256x128_64x3_tt_align8_type :
  public cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_256x128_64x3_tt_align8_base { };

[2]:
<cutlass.backend.gemm_operation.GemmArguments2x at 0x7f79cc556070>

There are many other ways to construct a plan from cutlass.op.Gemm (e.g., by specifying the types and layouts of each operand, or by providing representative tensors as inputs). For more details on these, see the documentation in the cutlass.op.Gemm constructor.
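
For example, two such constructions might look like the following. This is a sketch only (not executed in this notebook); the keyword argument names follow the cutlass.op.Gemm documentation and should be checked against your installed version.

# Option 1: specify the type and layout of each operand explicitly
# (keyword names here are assumptions based on the cutlass.op.Gemm docstring)
plan_explicit = cutlass.op.Gemm(
    element_A=np.float16, element_B=np.float16,
    element_C=np.float16, element_D=np.float16,
    layout_A=cutlass.LayoutType.RowMajor,
    layout_B=cutlass.LayoutType.RowMajor,
    layout_C=cutlass.LayoutType.RowMajor,
    element_accumulator=np.float32)

# Option 2: let the interface infer types and layouts from representative tensors
plan_from_tensors = cutlass.op.Gemm(
    A=tensor_A, B=tensor_B, C=tensor_C, D=tensor_D,
    element_accumulator=np.float32)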

We then compare the output to running the GEMM using NumPy.

[3]:
tensor_D_numpy = (alpha * (tensor_A @ tensor_B)) + (beta * tensor_C)
np.testing.assert_array_equal(tensor_D, tensor_D_numpy)

Note that one could use the same kernel just declared for tensors provided by other frameworks beyond NumPy, such as PyTorch or CuPy.
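
For instance, a minimal sketch of doing so with PyTorch (assuming a CUDA-enabled PyTorch installation; not executed in this notebook) might look like:

import torch

# GPU-resident half-precision tensors analogous to the NumPy ones above
A = torch.empty((m, k), dtype=torch.float16, device='cuda').uniform_(scope_min, scope_max).ceil()
B = torch.empty((k, n), dtype=torch.float16, device='cuda').uniform_(scope_min, scope_max).ceil()
C = torch.empty((m, n), dtype=torch.float16, device='cuda').uniform_(scope_min, scope_max).ceil()
D = torch.zeros_like(C)

# The plan declared above can consume these tensors directly
plan.run(A, B, C, D)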

Changing operation modes#

By default, the CUTLASS Python interface will try to use Tensor Core operations whenever possible. If the configuration provided to cutlass.op.Gemm is not supported on Tensor Cores, the interface will fall back to using a SIMT kernel.

The operation mode currently in use can be queried via the plan.opclass property; in this case, it is Tensor Core operations.

[4]:
print(plan.opclass)
OpcodeClass.TensorOp

Suppose that we don’t want to use Tensor Cores for this GEMM. One can change to using CUTLASS’s SIMT GEMMs by setting the plan’s opclass field.

As is shown in the printed output, the emitted kernel uses template parameters that fit CUTLASS’s SIMT GEMMs.

Also notice that, this time around, we provided different tensor parameters to plan.run(). One is free to provide different parameters to plan.run() than were passed in at the initial call to cutlass.op.Gemm, provided that the passed-in tensors have the same data types and layouts as those passed in on initialization.

[5]:
tensor_D_simt = np.zeros(tensor_C.shape).astype(type_D)
plan.opclass = cutlass.OpcodeClass.Simt
plan.run(tensor_A, tensor_B, tensor_C, tensor_D_simt, alpha, beta, print_module=print_module)

// Gemm operator cutlass_sm80_simt_f16_sgemm_f16_1x1x1_128x128_8x2_tt_align1
using cutlass_sm80_simt_f16_sgemm_f16_1x1x1_128x128_8x2_tt_align1_base =
  typename cutlass::gemm::kernel::DefaultGemmUniversal<
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 1,
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 1,
    cutlass::half_t, cutlass::layout::RowMajor,
    float,
    cutlass::arch::OpClassSimt,
    cutlass::arch::Sm80,
    cutlass::gemm::GemmShape<128, 128, 8>,
    cutlass::gemm::GemmShape<32, 64, 8>,
    cutlass::gemm::GemmShape<1, 1, 1>,
    cutlass::epilogue::thread::LinearCombination<cutlass::half_t, 1, float, float>,
    cutlass::gemm::threadblock::GemmIdentityThreadblockSwizzle<1>,
    2,
    cutlass::arch::OpMultiplyAdd
>::GemmKernel;

// Define named type
struct cutlass_sm80_simt_f16_sgemm_f16_1x1x1_128x128_8x2_tt_align1_type :
  public cutlass_sm80_simt_f16_sgemm_f16_1x1x1_128x128_8x2_tt_align1_base { };

[5]:
<cutlass.backend.gemm_operation.GemmArguments2x at 0x7f7b3075abe0>

If we compare the outputs of the Tensor Core and SIMT GEMMs we just ran, we see that they are equal.

[6]:
np.testing.assert_array_equal(tensor_D, tensor_D_simt)

Running cached kernels#

You may have noticed that the plan.run() calls for the previous two kernels took some time to execute. This is because the emitted kernels had not yet been compiled.

CUTLASS caches compiled binaries so that recompilation isn’t necessary every time a kernel is run. For example, if we change modes back to using Tensor Cores and call plan.run() again (with a different set of tensor parameters), you’ll find the call to return much faster.

[7]:
m = 2400
n = 3232
k = 4096

tensor_A = np.ceil(np.random.uniform(low=scope_min, high=scope_max, size=(m, k)).astype(type_A))
tensor_B = np.ceil(np.random.uniform(low=scope_min, high=scope_max, size=(k, n)).astype(type_B))
tensor_C = np.ceil(np.random.uniform(low=scope_min, high=scope_max, size=(m, n)).astype(type_C))
tensor_D = np.zeros(tensor_C.shape).astype(type_D)

alpha = np.float16(1.)
beta = np.float16(2.)

plan.opclass = cutlass.OpcodeClass.TensorOp
plan.run(tensor_A, tensor_B, tensor_C, tensor_D, alpha, beta, print_module=print_module)

// Gemm operator cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_256x128_64x3_tt_align8
using cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_256x128_64x3_tt_align8_base =
  typename cutlass::gemm::kernel::DefaultGemmUniversal<
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 8,
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 8,
    cutlass::half_t, cutlass::layout::RowMajor,
    float,
    cutlass::arch::OpClassTensorOp,
    cutlass::arch::Sm80,
    cutlass::gemm::GemmShape<256, 128, 64>,
    cutlass::gemm::GemmShape<64, 64, 64>,
    cutlass::gemm::GemmShape<16, 8, 16>,
    cutlass::epilogue::thread::LinearCombination<cutlass::half_t, 8, float, float>,
    cutlass::gemm::threadblock::GemmIdentityThreadblockSwizzle<1>,
    3,
    cutlass::arch::OpMultiplyAdd
>::GemmKernel;

// Define named type
struct cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_256x128_64x3_tt_align8_type :
  public cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_256x128_64x3_tt_align8_base { };

[7]:
<cutlass.backend.gemm_operation.GemmArguments2x at 0x7f7b30fb9880>
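
To observe the effect of the cache directly, one could time a repeated call. This is a rough sketch; the measured time varies by system and includes host-side launch overhead.

import time

start = time.time()
plan.run(tensor_A, tensor_B, tensor_C, tensor_D, alpha, beta)
print('Re-running the cached kernel took {:.3f} seconds'.format(time.time() - start))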

Running non-default GEMMs#

The previous examples showed how simple it is to get started running a default GEMM kernel in CUTLASS. But what if you want a bit more control over the parameters to the GEMM?

Under the hood, the CUTLASS Python interface enumerates the GEMM configurations available for this kernel using the same parameterizations as the CUTLASS profiler. The code below shows how one can access the tile descriptions for these kernels (e.g., cluster, threadblock, and warp shapes).

[8]:
tiles = plan.tile_descriptions()
print('{} tile descriptions returned'.format(len(tiles)))
num_print = 10
print('First {} tile descriptions are:'.format(num_print))
for td in tiles[:num_print]:
    print(td)
132 tile descriptions returned
First 10 tile descriptions are:

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [256, 128, 64]
  WarpCount: [4, 2, 1]
  Stages: 3
  Kernel schedule: ScheduleAuto
}

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [128, 256, 64]
  WarpCount: [2, 4, 1]
  Stages: 3
  Kernel schedule: ScheduleAuto
}

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [256, 128, 64]
  WarpCount: [4, 2, 1]
  Stages: 3
  Kernel schedule: ScheduleAuto
}

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [128, 256, 64]
  WarpCount: [2, 4, 1]
  Stages: 3
  Kernel schedule: ScheduleAuto
}

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [256, 128, 32]
  WarpCount: [4, 2, 1]
  Stages: 3
  Kernel schedule: ScheduleAuto
}

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [128, 256, 32]
  WarpCount: [2, 4, 1]
  Stages: 3
  Kernel schedule: ScheduleAuto
}

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [256, 64, 64]
  WarpCount: [4, 1, 1]
  Stages: 4
  Kernel schedule: ScheduleAuto
}

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [64, 256, 64]
  WarpCount: [1, 4, 1]
  Stages: 4
  Kernel schedule: ScheduleAuto
}

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [128, 128, 64]
  WarpCount: [2, 2, 1]
  Stages: 4
  Kernel schedule: ScheduleAuto
}

{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [256, 64, 64]
  WarpCount: [4, 1, 1]
  Stages: 3
  Kernel schedule: ScheduleAuto
}

Next, we’ll pick one of these configurations at random and compile and run it.

[9]:
idx = random.randint(0, len(tiles)-1)
td = tiles[idx]
print('Tile description {} is: {}'.format(idx, td))
plan.compile(td)
plan.run(tensor_A, tensor_B, tensor_C, tensor_D, alpha, beta, print_module=print_module)
Tile description 112 is:
{
  ClusterShape: [1, 1, 1]
  ThreadblockShape: [128, 128, 32]
  WarpCount: [2, 2, 1]
  Stages: 4
  Kernel schedule: ScheduleAuto
}

// Gemm operator cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_128x128_32x4_tt_align8
using cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_128x128_32x4_tt_align8_base =
  typename cutlass::gemm::kernel::DefaultGemmUniversal<
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 8,
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 8,
    cutlass::half_t, cutlass::layout::RowMajor,
    float,
    cutlass::arch::OpClassTensorOp,
    cutlass::arch::Sm80,
    cutlass::gemm::GemmShape<128, 128, 32>,
    cutlass::gemm::GemmShape<64, 64, 32>,
    cutlass::gemm::GemmShape<16, 8, 16>,
    cutlass::epilogue::thread::LinearCombination<cutlass::half_t, 8, float, float>,
    cutlass::gemm::threadblock::GemmIdentityThreadblockSwizzle<1>,
    4,
    cutlass::arch::OpMultiplyAdd
>::GemmKernel;

// Define named type
struct cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_128x128_32x4_tt_align8_type :
  public cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_128x128_32x4_tt_align8_base { };

[9]:
<cutlass.backend.gemm_operation.GemmArguments2x at 0x7f79cc58de20>
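
Rather than picking at random, one could also filter the returned descriptions for a particular configuration. The sketch below assumes each tile description exposes threadblock_shape and stages attributes mirroring the fields printed above; check these names against your installed version.

# Hypothetical filter: select descriptions with a 128x128x32 threadblock shape and 4 stages
matching = [t for t in tiles
            if list(t.threadblock_shape) == [128, 128, 32] and t.stages == 4]
if matching:
    plan.compile(matching[0])
    plan.run(tensor_A, tensor_B, tensor_C, tensor_D, alpha, beta)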

One can also change the swizzling function used by the kernel. For example, one can modify the kernel to use CUTLASS's Stream-K feature via:

[10]:
# Stream K is only supported pre-SM90 (at least when this example was written)
if plan.cc != 90:
    plan.swizzling_functor = cutlass.swizzle.ThreadblockSwizzleStreamK
    plan.run(tensor_A, tensor_B, tensor_C, tensor_D, alpha, beta, print_module=print_module)

// Gemm operator cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_128x128_32x4_tt_align8
using cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_128x128_32x4_tt_align8_base =
  typename cutlass::gemm::kernel::DefaultGemmUniversal<
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 8,
    cutlass::half_t, cutlass::layout::RowMajor, cutlass::ComplexTransform::kNone, 8,
    cutlass::half_t, cutlass::layout::RowMajor,
    float,
    cutlass::arch::OpClassTensorOp,
    cutlass::arch::Sm80,
    cutlass::gemm::GemmShape<128, 128, 32>,
    cutlass::gemm::GemmShape<64, 64, 32>,
    cutlass::gemm::GemmShape<16, 8, 16>,
    cutlass::epilogue::thread::LinearCombination<cutlass::half_t, 8, float, float>,
    cutlass::gemm::threadblock::ThreadblockSwizzleStreamK,
    4,
    cutlass::arch::OpMultiplyAdd
>::GemmKernel;

// Define named type
struct cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_128x128_32x4_tt_align8_type :
  public cutlass_sm80_tensorop_f16_s16x8x16gemm_f16_1x1x1_128x128_32x4_tt_align8_base { };

Handling errors#

The CUTLASS Python interface attempts to catch runtime and compilation errors in Python so as to provide more understandable error messages.

Here’s an example in which we try to use too many stages for a given GEMM kernel. Normally, this would result in a runtime error due to the GPU having insufficient shared memory to launch the kernel with 8 stages. The CUTLASS Python interface is able to detect this issue before compiling the kernel, and reports it back to the user.

[11]:
td = tiles[0]
td.stages = 8  # request more stages than the GPU's shared memory can accommodate
try:
    plan.compile(td)
except Exception as e:
    print(e)