.. docs_sphinx/chapters/overview.rst
Tensor Operation
----------------

This chapter introduces an additional layer of abstraction to :doc:`code_generation` by describing higher-level tensor operations.
We therefore examine how to generate the correct kernel based on a provided tensor configuration object, i.e. the abstraction.
This object describes the parameters required to generate and execute a kernel, such as the size and type of the dimensions, the execution type, and the strides of the involved tensors.
Furthermore, we perform optimization passes such as primitive and shared identification, dimension splitting, dimension fusion and dimension reordering.
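To make the abstraction concrete, the configuration object can be pictured roughly as follows. This is a minimal C++ sketch under stated assumptions: the names ``dim_t``, ``exec_t``, ``TensorConfig`` and ``estimated_flops`` are illustrative, not the project's actual types.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch of a tensor configuration object; all names are
// illustrative assumptions, not the project's real API.
enum class dim_t { c, m, n, k };   // dimension type: shared (c), M, N or K
enum class exec_t { seq, prim };   // looped sequentially or mapped to a primitive

struct TensorConfig {
  std::vector<dim_t>   dim_types;   // role of each loop dimension
  std::vector<exec_t>  exec_types;  // execution type of each dimension
  std::vector<int64_t> dim_sizes;   // extent of each dimension
  std::vector<int64_t> strides_in0; // strides of the first input tensor
  std::vector<int64_t> strides_in1; // strides of the second input tensor
  std::vector<int64_t> strides_out; // strides of the output tensor
};

// Rough cost estimate a generator might derive from such a config:
// two flops (multiply + add) per point of the iteration space.
int64_t estimated_flops(const TensorConfig& cfg) {
  int64_t n = 2;
  for (int64_t s : cfg.dim_sizes) n *= s;
  return n;
}
```

For example, a small GEMM with M=2, N=3, K=4 yields an iteration space of 24 points and thus 48 estimated flops.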

Einsum Tree
-----------

In this chapter, we introduce an additional layer of abstraction by defining a tree representation of multiple chained contractions on a set of two or more input tensors.
We therefore process a string representation of nested tensor operations alongside a list of the dimension sizes of the tensors used.
We then generate a tree representation from these input values, where each non-leaf node represents a single tensor operation. These operations are lowered to kernels, as described in the :doc:`tensor_operations` chapter.
Furthermore, we optimize this tree representation through the optimization passes Swap, Reorder and Permutation Insert, each applied to a node of the tree.
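The lowering step described above can be sketched as a post-order traversal of the tree: each non-leaf node becomes one tensor-operation kernel whose inputs are already available once its children have been processed. The node layout and the dimension-string naming below are assumptions for illustration, not the project's data structures.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Minimal sketch of an einsum-tree node; layout is an illustrative assumption.
struct Node {
  std::string dims;                  // output dimensions of this node, e.g. "ac"
  std::unique_ptr<Node> left, right; // children; both null for a leaf tensor
};

std::unique_ptr<Node> leaf(std::string d) {
  auto n = std::make_unique<Node>();
  n->dims = std::move(d);
  return n;
}

std::unique_ptr<Node> contract(std::unique_ptr<Node> l, std::unique_ptr<Node> r,
                               std::string out) {
  auto n = std::make_unique<Node>();
  n->left = std::move(l);
  n->right = std::move(r);
  n->dims = std::move(out);
  return n;
}

// Post-order traversal: children first, then the parent, so every non-leaf
// node is lowered to one kernel after its operands exist.
void lower(const Node* n, std::vector<std::string>& kernels) {
  if (!n->left) return; // leaf: an input tensor, nothing to lower
  lower(n->left.get(), kernels);
  lower(n->right.get(), kernels);
  kernels.push_back(n->left->dims + "," + n->right->dims + "->" + n->dims);
}
```

For the chained contraction ``((ab, bc -> ac), cd -> ad)``, the traversal emits the inner kernel ``ab,bc->ac`` before the outer kernel ``ac,cd->ad``.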

Individual Phase
----------------

In the final chapter, :doc:`report_individual`, we present a plan for the further development of the project.
We created a draft for converting the project into a CMake library with a convenient tensor interface.
We then provide a step-by-step description of how we converted our project into a CMake library.
We also present our library interface, which defines a high-level tensor structure and operations such as unary, GEMM, contraction and Einsum expressions.
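As a rough picture of what such a high-level interface could look like, the sketch below defines a tiny tensor structure and a reference GEMM. The ``Tensor`` struct and the ``gemm`` signature are hedged assumptions for illustration and are not the library's actual API; a real implementation would dispatch to a generated kernel selected from the tensor configuration rather than use a naive loop nest.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical high-level tensor structure; names are illustrative only.
struct Tensor {
  int64_t rows, cols;
  std::vector<float> data; // row-major storage
  float& at(int64_t i, int64_t j) { return data[i * cols + j]; }
  float at(int64_t i, int64_t j) const { return data[i * cols + j]; }
};

// Naive reference GEMM, C = A * B, standing in for a generated kernel.
Tensor gemm(const Tensor& a, const Tensor& b) {
  Tensor c{a.rows, b.cols, std::vector<float>(a.rows * b.cols, 0.0f)};
  for (int64_t i = 0; i < a.rows; ++i)
    for (int64_t k = 0; k < a.cols; ++k)
      for (int64_t j = 0; j < b.cols; ++j)
        c.at(i, j) += a.at(i, k) * b.at(k, j);
  return c;
}
```

Unary operations, contractions and Einsum expressions would follow the same pattern, taking tensors plus a dimension description and returning a result tensor.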