Tensor Instructions

Unlike JOLT instructions, which operate on scalar values, JOLT Atlas instructions operate on tensors. Most tensor operations act entry-wise, like ADD, while others, such as MAX, do not. In particular, each instruction specifies:

  • The program counter (PC) address of this instruction in the bytecode.

  • The operation code (opcode) that defines the instruction’s function.

  • Three input tensor operands, each specified as the index of a node in the computation graph. These tensor operands are analogous to registers in RISC-V: both indicate the source location of an operand.

    • The third input tensor operand is used only by special opcodes such as Select.

  • The destination tensor index, i.e. the node index in the computation graph where the result will be stored. It is analogous to rd in RISC-V, indicating the write destination of the operation result.

  • An immediate value, if applicable to the instruction.

  • The number of virtual instructions remaining in a virtual sequence (see Section 6.2 of the Jolt paper).

  • The dimensions of the output tensor. (Note: currently limited to rank-2 tensors; support for higher ranks is a planned improvement.)

  • The number of active elements in the output tensor.
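The fields above can be collected into a single record per instruction. The following Rust sketch is illustrative only; the field and type names are hypothetical, not the actual JOLT Atlas definitions:

```rust
// Hypothetical sketch of an Atlas tensor instruction; names are
// illustrative, not the actual JOLT Atlas types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Opcode {
    Add,
    MatMul,
    Relu,
    Select, // uses the third input operand
    // …
}

#[derive(Debug, Clone)]
struct TensorInstruction {
    address: usize,            // PC address of this instruction in the bytecode
    opcode: Opcode,            // operation code defining the instruction's function
    ts1: Option<usize>,        // first input operand: computation-graph node index
    ts2: Option<usize>,        // second input operand
    ts3: Option<usize>,        // third input operand, used by opcodes such as Select
    td: Option<usize>,         // destination tensor index (analogous to rd in RISC-V)
    imm: Option<i64>,          // immediate value, if applicable
    virtual_sequence_remaining: Option<usize>, // virtual instructions left (Jolt §6.2)
    output_dims: [usize; 2],   // output tensor dimensions (rank-2 for now)
    active_output_elements: usize, // number of active elements in the output tensor
}

// Example: an entry-wise ADD of graph nodes 3 and 4, written to node 5.
fn example_add() -> TensorInstruction {
    TensorInstruction {
        address: 0,
        opcode: Opcode::Add,
        ts1: Some(3),
        ts2: Some(4),
        ts3: None,
        td: Some(5),
        imm: None,
        virtual_sequence_remaining: None,
        output_dims: [2, 2],
        active_output_elements: 4,
    }
}

fn main() {
    println!("{:?}", example_add());
}
```

For a dense rank-2 output, the number of active elements equals the product of the output dimensions; padding to a power-of-two size would make it smaller than the padded length.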

List of instructions

| Op | Expression / Description |
| --- | --- |
| Input | $X$ (input) |
| MatMul | $A \times B$ |
| Relu | $\max(0, x)$ |
| Sigmoid | $1 / (1 + e^{-x})$ |
| Add | $A + B$ |
| EinSum | $C_{ij} = \sum_k A_{ik} B_{kj}$ |
| Const | $c$ (constant) |
| RmAxis | $\mathrm{squeeze}(X)$ |
| Reshape | $\mathrm{reshape}(X, \mathrm{new\_shape})$ |
| Conv | $(X * K)(i, j) = \sum_{m,n} X(i + m, j + n)\, K(m, n)$ |
| MaxPool | $Y(i, j) = \max_{(m,n) \in \mathrm{window}} X(i + m, j + n)$ |
| Gather | $Y(i) = X(g(i))$ |
| Softmax | $y_i = e^{x_i} / \sum_j e^{x_j}$ |
| Reduce | $y = \sum_i x_i$ or $y = (1/n) \sum_i x_i$ |
| AddAxis | $Y = \mathrm{expand\_dims}(X)$ |
| Cast | $Y = \mathrm{cast}(X, \mathrm{type})$ |
| TypedBinOp | $C = A \circ B$ |
| ElementWiseOp | $Y = f(X)$ |
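To make the entry-wise distinction concrete, here is a minimal Rust sketch of the reference semantics for two ops from the table, on plain nested `Vec`s of integers (illustrative only; the actual prover operates over field elements, not `i64`):

```rust
// EinSum / MatMul reference semantics: C_ij = sum_k A_ik * B_kj.
// Each output element mixes a whole row of A with a whole column of B,
// so this op is NOT entry-wise.
fn matmul(a: &[Vec<i64>], b: &[Vec<i64>]) -> Vec<Vec<i64>> {
    let (n, k, m) = (a.len(), b.len(), b[0].len());
    let mut c = vec![vec![0i64; m]; n];
    for i in 0..n {
        for j in 0..m {
            for p in 0..k {
                c[i][j] += a[i][p] * b[p][j];
            }
        }
    }
    c
}

// Relu IS entry-wise: each output element max(0, x) depends on exactly
// one input element.
fn relu(x: &[Vec<i64>]) -> Vec<Vec<i64>> {
    x.iter()
        .map(|row| row.iter().map(|&v| v.max(0)).collect())
        .collect()
}

fn main() {
    let a = vec![vec![1, -2], vec![3, 4]];
    let b = vec![vec![5, 6], vec![7, 8]];
    let c = matmul(&a, &b);
    println!("{:?}", c);        // [[-9, -10], [43, 50]]
    println!("{:?}", relu(&c)); // [[0, 0], [43, 50]]
}
```

Entry-wise ops like Relu can be handled with the usual one-lookup-per-element strategy, while ops like MatMul reduce over an axis and need a different treatment.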
