Commit 160da40

doc: added links to chapters
Parent: 158ec0c

docs_sphinx/chapters/overview.rst

Lines changed: 3 additions & 3 deletions
@@ -66,7 +66,7 @@ Finally, we measure the performance of our generated kernels across different si
 Tensor Operation
 ----------------
 
-This chapter introduces an additional layer of abstraction to code generation by describing higher-level tensor operations.
+This chapter introduces an additional layer of abstraction to :doc:`code_generation` by describing higher-level tensor operations.
 We therefore examine how to generate the correct kernel based on a provided tensor configuration object, i.e. the abstraction.
 This object describes which operations on parameters, such as the size and type of dimensions, the execution type and the strides of the involved tensors, are required to generate and execute a kernel.
 Furthermore, we also perform optimization passes such as primitive and shared identification, dimension splitting, dimension fusion and dimension reordering.
@@ -77,13 +77,13 @@ Einsum Tree
 
 In this chapter, we introduce an additional layer of abstraction by defining a tree representation of multiple chained contractions on a set of two or more input tensors.
 We therefore process a string representation of nested tensor operations alongside a list of the dimension sizes of the tensors used.
-We then generate a tree representation from these input values, where each non-leaf node represents a single tensor operation. These operations are lowered to kernels, as described in the 'tensor_operations' chapter.
+We then generate a tree representation from these input values, where each non-leaf node represents a single tensor operation. These operations are lowered to kernels, as described in the :doc:`tensor_operations` chapter.
 Furthermore, we optimize this tree representation by performing optimization passes: Swap, Reorder and Permutation Insert on a node of the tree.
 
 Individual Phase
 ----------------
 
-In the final chapter, we developed a plan on how to further develop the project.
+In the final chapter, :doc:`report_individual`, we developed a plan on how to further develop the project.
 We created a draft to convert the project into a CMake library with a convenient tensor interface.
 We then provide a step-by-step description of how we converted our project into a CMake library.
 We also present our library interface, which defines a high-level tensor structure and operations such as unary, GEMM, contraction and Einsum expressions.
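
The Tensor Operation paragraph above names the parameters a tensor configuration object carries: the size and type of each dimension, the execution type and the strides of the involved tensors. As a purely illustrative sketch, with invented names rather than the project's actual types, such a configuration could look as follows.

// Hypothetical sketch (invented names, not the project's real API): a tensor
// configuration object carrying dimension types and sizes, execution types,
// strides and the data type, from which a kernel would be generated.
#include <cstdint>
#include <vector>

enum class dim_t   { c, m, n, k };         // kind of a dimension
enum class exec_t  { seq, shared, prim };  // how a dimension is executed
enum class dtype_t { fp32, fp64 };         // element type of the tensors

struct TensorConfig {
  std::vector<dim_t>   dim_types;    // type of every dimension
  std::vector<exec_t>  exec_types;   // execution type of every dimension
  std::vector<int64_t> dim_sizes;    // size of every dimension
  std::vector<int64_t> strides_in0;  // strides of the first input tensor
  std::vector<int64_t> strides_in1;  // strides of the second input tensor
  std::vector<int64_t> strides_out;  // strides of the output tensor
  dtype_t              dtype = dtype_t::fp32;
};

int main() {
  // A 64x64x64 GEMM-like configuration: one M, one N and one K dimension,
  // all tensors stored column-major.
  TensorConfig cfg;
  cfg.dim_types   = {dim_t::m, dim_t::n, dim_t::k};
  cfg.exec_types  = {exec_t::prim, exec_t::prim, exec_t::prim};
  cfg.dim_sizes   = {64, 64, 64};
  cfg.strides_in0 = {1, 0, 64};  // A is M x K
  cfg.strides_in1 = {0, 64, 1};  // B is K x N
  cfg.strides_out = {1, 64, 0};  // C is M x N
  // A backend would inspect cfg, run its optimization passes (splitting,
  // fusing and reordering dimensions) and emit a matching kernel.
  return 0;
}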
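
For the Einsum Tree paragraph, a minimal sketch of the described structure, assuming an invented node layout and a hypothetical dimension-id expression format such as "[0,1],[1,2]->[0,2]", could be:

// Hypothetical sketch (invented names, not the project's parser): an einsum
// tree whose leaves are input tensors and whose non-leaf nodes are single
// tensor operations that are later lowered to kernels.
#include <cstdint>
#include <map>
#include <memory>
#include <vector>

struct EinsumNode {
  std::vector<int64_t> dim_ids;                       // dimension ids of this node's tensor
  std::vector<std::unique_ptr<EinsumNode>> children;  // empty for input (leaf) tensors
};

int main() {
  // Sizes of the dimension ids used in the expression "[0,1],[1,2]->[0,2]".
  std::map<int64_t, int64_t> dim_sizes = {{0, 32}, {1, 64}, {2, 128}};

  // Build the tree for that expression by hand: two leaves and one contraction.
  auto left  = std::make_unique<EinsumNode>();
  left->dim_ids  = {0, 1};
  auto right = std::make_unique<EinsumNode>();
  right->dim_ids = {1, 2};

  EinsumNode root;
  root.dim_ids = {0, 2};  // contraction over dimension id 1
  root.children.push_back(std::move(left));
  root.children.push_back(std::move(right));

  // Optimization passes (Swap, Reorder, Permutation Insert) would rewrite this
  // tree before every non-leaf node is lowered to a kernel.
  (void)dim_sizes;
  return 0;
}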
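
The Individual Phase paragraph mentions a library interface that defines a high-level tensor structure and unary, GEMM, contraction and Einsum operations. A declaration-only sketch, with all names and signatures assumed for illustration rather than taken from the library, might look like this.

// Hypothetical sketch (all names and signatures invented): a high-level
// tensor structure with unary, GEMM, contraction and einsum entry points
// layered on top of the generated kernels.
#include <cstdint>
#include <string>
#include <vector>

struct Tensor {
  std::vector<int64_t> sizes;    // size of each dimension
  std::vector<int64_t> strides;  // stride of each dimension
  float*               data;     // raw buffer (fp32 for simplicity)
};

// Possible operation entry points; bodies omitted, only the shape of the
// interface is sketched.
void unary(Tensor const& in, Tensor& out);                    // e.g. copy or permute
void gemm(Tensor const& a, Tensor const& b, Tensor& c);       // plain matrix product
void contraction(Tensor const& a, Tensor const& b, Tensor& c,
                 std::string const& spec);                    // one binary contraction
void einsum(std::string const& expression,
            std::vector<Tensor> const& inputs, Tensor& out);  // chained contractions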

0 commit comments
