pub struct FabricTensor { /* private fields */ }
N-dimensional tensor backed by a fabric memory lease.
Tensor data is stored as a contiguous row-major f32 array. Each
constructor acquires a MemLease to validate
that the fabric has sufficient capacity. The Shape metadata tracks
dimensions and precomputed strides for multi-dimensional indexing.
§Storage layout
Data is laid out in row-major (C) order: the last dimension varies fastest
in memory. For a [2, 3] matrix [[a, b, c], [d, e, f]], the flat
representation is [a, b, c, d, e, f] with strides [3, 1].
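The layout rule above can be sketched in self-contained Rust (an illustration of the row-major convention, not the crate's internal code; `row_major_strides` and `flat_index` are hypothetical helper names):

```rust
/// Compute row-major (C-order) strides: stride[i] is the product of all
/// dimensions to the right of i, so the last dimension has stride 1.
fn row_major_strides(shape: &[usize]) -> Vec<usize> {
    let mut strides = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * shape[i + 1];
    }
    strides
}

/// Flat offset of a multi-dimensional index: dot product of indices and strides.
fn flat_index(indices: &[usize], strides: &[usize]) -> usize {
    indices.iter().zip(strides).map(|(i, s)| i * s).sum()
}

fn main() {
    let strides = row_major_strides(&[2, 3]);
    assert_eq!(strides, vec![3, 1]);
    // Element [1, 2] of a [2, 3] matrix sits at flat offset 1*3 + 2*1 = 5,
    // i.e. `f` in the [a, b, c, d, e, f] example above.
    assert_eq!(flat_index(&[1, 2], &strides), 5);
}
```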
§Examples
use grafos_tensor::FabricTensor;
// Create a 2x3 matrix and access elements
let t = FabricTensor::from_slice(&[2, 3], &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).unwrap();
assert_eq!(t.get(&[1, 2]).unwrap(), 6.0);
assert_eq!(t.shape(), &[2, 3]);
assert_eq!(t.strides(), &[3, 1]);
Implementations§
impl FabricTensor
pub fn lease(&self) -> &MemLease
Access the underlying fabric memory lease held by this tensor.
The current implementation stores element data in a local Vec<f32>,
but holds a MemLease as the capacity/liveness contract for the
tensor’s lifetime.
pub fn from_mem_lease(shape: &[usize], lease: MemLease) -> Self
Wrap an existing memory lease as a tensor.
The tensor takes ownership of lease and uses it as its backing
memory contract. data is initialized to zeros.
pub fn zeros(shape: &[usize]) -> Result<Self>
Create a tensor filled with zeros.
Acquires a MemLease large enough for
numel * 4 bytes and initializes all elements to 0.0.
§Errors
Returns FabricError::CapacityExceeded if the fabric cannot provide
sufficient memory, or FabricError::Disconnected if the FBMU
handshake fails.
§Examples
use grafos_tensor::FabricTensor;
let t = FabricTensor::zeros(&[3, 4]).unwrap();
assert_eq!(t.shape(), &[3, 4]);
assert_eq!(t.numel(), 12);
assert_eq!(t.get(&[0, 0]).unwrap(), 0.0);
pub fn from_slice(shape: &[usize], data: &[f32]) -> Result<Self>
Create a tensor from a slice of data.
The length of data must exactly equal the product of shape
dimensions. Data is copied into the tensor in row-major order.
§Errors
Returns FabricError::CapacityExceeded if data.len() does not
match the shape’s element count, or if the fabric cannot provide
sufficient memory.
§Examples
use grafos_tensor::FabricTensor;
let t = FabricTensor::from_slice(&[2, 2], &[1.0, 2.0, 3.0, 4.0]).unwrap();
assert_eq!(t.get(&[0, 1]).unwrap(), 2.0);
assert_eq!(t.get(&[1, 0]).unwrap(), 3.0);
pub fn to_gpu(&self) -> Result<FabricTensor>
Move/copy this tensor to GPU placement.
On builds without the gpu feature, returns FabricError::Unsupported.
pub fn to_cpu(&self) -> Result<FabricTensor>
Move/copy this tensor to CPU placement.
pub fn get(&self, indices: &[usize]) -> Result<f32>
Get the element at the given multi-dimensional indices.
The length of indices must equal ndim(), and each
index must be within its dimension’s range.
§Errors
Returns FabricError::CapacityExceeded if any index is out of
bounds or if the wrong number of indices is provided.
§Examples
use grafos_tensor::FabricTensor;
let t = FabricTensor::from_slice(&[2, 3], &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).unwrap();
assert_eq!(t.get(&[0, 2]).unwrap(), 3.0);
assert_eq!(t.get(&[1, 1]).unwrap(), 5.0);
assert!(t.get(&[2, 0]).is_err()); // out of bounds
pub fn set(&mut self, indices: &[usize], value: f32) -> Result<()>
Set the element at the given multi-dimensional indices.
§Errors
Returns FabricError::CapacityExceeded if any index is out of
bounds or if the wrong number of indices is provided.
§Examples
use grafos_tensor::FabricTensor;
let mut t = FabricTensor::zeros(&[2, 2]).unwrap();
t.set(&[1, 0], 42.0).unwrap();
assert_eq!(t.get(&[1, 0]).unwrap(), 42.0);
pub fn ndim(&self) -> usize
Number of dimensions (0 for scalar, 1 for vector, 2 for matrix, etc.).
pub fn matmul(&self, other: &FabricTensor) -> Result<FabricTensor>
Matrix multiplication (2-D only).
Computes C = self @ other using a triple-loop algorithm. Both tensors
must be 2-D. If self has shape [M, K] and other has shape
[K, N], the result has shape [M, N].
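The triple-loop algorithm mentioned above can be sketched over flat row-major slices (a standalone illustration under the documented layout, not the crate's actual kernel; `matmul_naive` is a hypothetical name):

```rust
/// Naive [M, K] x [K, N] matrix multiply over flat row-major f32 slices.
fn matmul_naive(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut c = vec![0.0f32; m * n];
    for i in 0..m {
        for j in 0..n {
            let mut acc = 0.0f32;
            for p in 0..k {
                // Row-major indexing: a[i, p] and b[p, j].
                acc += a[i * k + p] * b[p * n + j];
            }
            c[i * n + j] = acc;
        }
    }
    c
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]; // shape [2, 3]
    let b = [7.0, 8.0, 9.0, 10.0, 11.0, 12.0]; // shape [3, 2]
    let c = matmul_naive(&a, &b, 2, 3, 2);
    // C[0,0] = 1*7 + 2*9 + 3*11 = 58
    assert_eq!(c, vec![58.0, 64.0, 139.0, 154.0]);
}
```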
§Errors
Returns FabricError::CapacityExceeded if either tensor is not 2-D,
if the inner dimensions don’t match, or if the fabric cannot provide
memory for the result.
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[2, 3], &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).unwrap();
let b = FabricTensor::from_slice(&[3, 2], &[7.0, 8.0, 9.0, 10.0, 11.0, 12.0]).unwrap();
let c = a.matmul(&b).unwrap();
assert_eq!(c.shape(), &[2, 2]);
// C[0,0] = 1*7 + 2*9 + 3*11 = 58
assert_eq!(c.get(&[0, 0]).unwrap(), 58.0);
pub fn add(&self, other: &FabricTensor) -> Result<FabricTensor>
Elementwise addition.
Both tensors must have the same shape. Returns a new tensor where each element is the sum of the corresponding elements.
Also available via the + operator: (&a + &b).unwrap().
§Errors
Returns FabricError::CapacityExceeded if shapes don’t match.
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[2, 2], &[1.0, 2.0, 3.0, 4.0]).unwrap();
let b = FabricTensor::from_slice(&[2, 2], &[10.0, 20.0, 30.0, 40.0]).unwrap();
let c = a.add(&b).unwrap();
assert_eq!(c.as_slice(), &[11.0, 22.0, 33.0, 44.0]);
pub fn mul(&self, other: &FabricTensor) -> Result<FabricTensor>
Elementwise multiplication (Hadamard product).
Both tensors must have the same shape. Returns a new tensor where each element is the product of the corresponding elements.
Also available via the * operator: (&a * &b).unwrap().
§Errors
Returns FabricError::CapacityExceeded if shapes don’t match.
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[3], &[2.0, 3.0, 4.0]).unwrap();
let b = FabricTensor::from_slice(&[3], &[5.0, 6.0, 7.0]).unwrap();
let c = a.mul(&b).unwrap();
assert_eq!(c.as_slice(), &[10.0, 18.0, 28.0]);
pub fn scale(&self, scalar: f32) -> Result<FabricTensor>
Scalar multiplication.
Multiplies every element by scalar. Works on tensors of any shape.
Also available via the * operator with f32: (&a * 2.0f32).unwrap().
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[2, 2], &[1.0, 2.0, 3.0, 4.0]).unwrap();
let b = a.scale(3.0).unwrap();
assert_eq!(b.as_slice(), &[3.0, 6.0, 9.0, 12.0]);
pub fn relu(&self) -> Result<FabricTensor>
ReLU activation: max(0, x) elementwise.
Returns a new tensor where negative values are clamped to zero. Commonly used as an activation function in neural network layers.
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[4], &[-2.0, -0.5, 0.0, 3.0]).unwrap();
let b = a.relu().unwrap();
assert_eq!(b.as_slice(), &[0.0, 0.0, 0.0, 3.0]);
pub fn softmax(&self, axis: usize) -> Result<FabricTensor>
Softmax along the specified axis.
For each slice along axis, computes exp(x - max) / sum(exp(x - max)).
The max-subtraction provides numerical stability against overflow.
For a [M, N] matrix: softmax(1) normalizes each row (sums to 1),
softmax(0) normalizes each column.
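The max-subtraction trick can be sketched for a single slice in self-contained Rust (an illustration of the stability argument, not the crate's implementation; `softmax_row` is a hypothetical name):

```rust
/// Numerically stable softmax over one slice: exp(x - max) / sum(exp(x - max)).
/// Subtracting the max makes every exponent <= 0, so exp() cannot overflow.
fn softmax_row(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    // Without max-subtraction, exp(1000.0_f32) overflows to infinity;
    // with it, the result is a clean uniform distribution.
    let probs = softmax_row(&[1000.0, 1000.0]);
    assert!((probs[0] - 0.5).abs() < 1e-6);

    // Output always sums to 1 along the slice.
    let probs = softmax_row(&[1.0, 2.0, 3.0, 4.0]);
    let sum: f32 = probs.iter().sum();
    assert!((sum - 1.0).abs() < 1e-6);
}
```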
§Errors
Returns FabricError::CapacityExceeded if axis >= ndim().
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[1, 4], &[1.0, 2.0, 3.0, 4.0]).unwrap();
let probs = a.softmax(1).unwrap();
let sum: f32 = probs.as_slice().iter().sum();
assert!((sum - 1.0).abs() < 1e-6);
pub fn transpose(&self) -> Result<FabricTensor>
Transpose: swap the last two dimensions.
For a 2-D [M, N] matrix, produces [N, M]. For higher-rank tensors
(e.g. [B, M, N]), produces [B, N, M] by batching over leading
dimensions.
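The batched index swap can be sketched over flat row-major data (a standalone illustration of the [B, M, N] -> [B, N, M] mapping, not the crate's code; `transpose_last_two` is a hypothetical name):

```rust
/// Swap the last two dims of a row-major tensor [B, M, N] -> [B, N, M],
/// looping over the leading (batch) dimension.
fn transpose_last_two(data: &[f32], b: usize, m: usize, n: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; b * m * n];
    for batch in 0..b {
        for i in 0..m {
            for j in 0..n {
                // out[batch, j, i] = in[batch, i, j]
                out[batch * n * m + j * m + i] = data[batch * m * n + i * n + j];
            }
        }
    }
    out
}

fn main() {
    // A single batch of a [2, 3] matrix becomes [3, 2]:
    // [[1,2,3],[4,5,6]] -> [[1,4],[2,5],[3,6]].
    let a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    let t = transpose_last_two(&a, 1, 2, 3);
    assert_eq!(t, vec![1.0, 4.0, 2.0, 5.0, 3.0, 6.0]);
}
```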
§Errors
Returns FabricError::CapacityExceeded if the tensor has fewer than
2 dimensions.
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[2, 3], &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).unwrap();
let b = a.transpose().unwrap();
assert_eq!(b.shape(), &[3, 2]);
assert_eq!(b.get(&[0, 1]).unwrap(), 4.0); // was a[1, 0]
pub fn reshape(&self, new_shape: &[usize]) -> Result<FabricTensor>
Reshape to a new shape.
The total number of elements must remain the same. Data is copied to a new tensor backed by a new lease; the underlying flat storage order does not change.
§Errors
Returns FabricError::CapacityExceeded if the new shape’s element
count differs from the current tensor’s.
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[2, 3], &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).unwrap();
let b = a.reshape(&[3, 2]).unwrap();
assert_eq!(b.shape(), &[3, 2]);
assert_eq!(b.as_slice(), a.as_slice()); // same data order
pub fn subtract(&self, other: &FabricTensor) -> Result<FabricTensor>
Elementwise subtraction: self - other.
Both tensors must have the same shape.
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[3], &[5.0, 3.0, 1.0]).unwrap();
let b = FabricTensor::from_slice(&[3], &[1.0, 2.0, 3.0]).unwrap();
let c = a.subtract(&b).unwrap();
assert_eq!(c.as_slice(), &[4.0, 1.0, -2.0]);
pub fn sum_axis(&self, axis: usize) -> Result<FabricTensor>
Sum along the specified axis, reducing that dimension.
For a [M, N] matrix: sum_axis(0) produces [N] (column sums),
sum_axis(1) produces [M] (row sums).
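The 2-D case of this reduction can be sketched over a flat row-major slice (an illustration of the axis convention, not the crate's implementation; `sum_axis_2d` is a hypothetical name):

```rust
/// Sum a row-major [M, N] matrix along an axis: axis 0 collapses rows
/// (column sums, shape [N]); axis 1 collapses columns (row sums, shape [M]).
fn sum_axis_2d(data: &[f32], m: usize, n: usize, axis: usize) -> Vec<f32> {
    let out_len = if axis == 0 { n } else { m };
    let mut out = vec![0.0f32; out_len];
    for i in 0..m {
        for j in 0..n {
            // The surviving index is the one NOT being reduced.
            let k = if axis == 0 { j } else { i };
            out[k] += data[i * n + j];
        }
    }
    out
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]; // shape [2, 3]
    assert_eq!(sum_axis_2d(&a, 2, 3, 0), vec![5.0, 7.0, 9.0]); // column sums
    assert_eq!(sum_axis_2d(&a, 2, 3, 1), vec![6.0, 15.0]); // row sums
}
```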
§Errors
Returns FabricError::CapacityExceeded if axis >= ndim().
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[2, 3], &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).unwrap();
let row_sums = a.sum_axis(1).unwrap();
assert_eq!(row_sums.shape(), &[2]);
assert_eq!(row_sums.as_slice(), &[6.0, 15.0]);
let col_sums = a.sum_axis(0).unwrap();
assert_eq!(col_sums.shape(), &[3]);
assert_eq!(col_sums.as_slice(), &[5.0, 7.0, 9.0]);
pub fn sigmoid(&self) -> Result<FabricTensor>
Elementwise sigmoid: 1 / (1 + exp(-x)).
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[3], &[0.0, 100.0, -100.0]).unwrap();
let b = a.sigmoid().unwrap();
assert!((b.as_slice()[0] - 0.5).abs() < 1e-6);
assert!((b.as_slice()[1] - 1.0).abs() < 1e-4);
assert!(b.as_slice()[2] < 1e-4);
pub fn ln(&self) -> Result<FabricTensor>
Elementwise natural logarithm.
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[3], &[1.0, core::f32::consts::E, 10.0]).unwrap();
let b = a.ln().unwrap();
assert!((b.as_slice()[0] - 0.0).abs() < 1e-6);
assert!((b.as_slice()[1] - 1.0).abs() < 1e-6);
pub fn clip(&self, min: f32, max: f32) -> Result<FabricTensor>
Elementwise clamp to [min, max].
§Examples
use grafos_tensor::FabricTensor;
let a = FabricTensor::from_slice(&[4], &[-1.0, 0.5, 1.5, 3.0]).unwrap();
let b = a.clip(0.0, 1.0).unwrap();
assert_eq!(b.as_slice(), &[0.0, 0.5, 1.0, 1.0]);
pub fn fft(&self) -> Result<FabricTensor>
Forward FFT of a 1-D real-valued tensor.
Input must be a 1-D tensor with a power-of-2 number of elements.
Returns a 1-D tensor of length 2*N containing interleaved [re, im]
pairs for each frequency bin.
On GPU-placed tensors (with gpu feature), dispatches through the GPU
submit path before computing the CPU result.
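The interleaved [re, im] output layout can be illustrated with a naive O(N²) DFT in self-contained Rust (this shows only the documented layout, not the radix-2 algorithm a real FFT would use; `dft_interleaved` is a hypothetical name):

```rust
use std::f32::consts::PI;

/// Naive DFT of a real signal, returning interleaved [re, im] pairs of
/// length 2*N -- the same layout fft() documents for its output.
fn dft_interleaved(signal: &[f32]) -> Vec<f32> {
    let n = signal.len();
    let mut out = Vec::with_capacity(2 * n);
    for k in 0..n {
        let (mut re, mut im) = (0.0f32, 0.0f32);
        for (t, &x) in signal.iter().enumerate() {
            let angle = -2.0 * PI * (k * t) as f32 / n as f32;
            re += x * angle.cos();
            im += x * angle.sin();
        }
        out.push(re); // bin k real part at index 2*k
        out.push(im); // bin k imaginary part at index 2*k + 1
    }
    out
}

fn main() {
    // A constant (DC) signal puts all energy in bin 0: spectrum[0] = sum = 4,
    // and every other bin is (numerically) zero.
    let spec = dft_interleaved(&[1.0, 1.0, 1.0, 1.0]);
    assert_eq!(spec.len(), 8); // 2*N interleaved values
    assert!((spec[0] - 4.0).abs() < 1e-5);
    assert!(spec[2].abs() < 1e-5 && spec[3].abs() < 1e-5);
}
```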
pub fn ifft(&self) -> Result<FabricTensor>
Inverse FFT returning a 1-D real-valued tensor.
Input must be a 1-D tensor of length 2*N containing interleaved
[re, im] pairs. N must be a power of 2. Returns a 1-D tensor
of N real samples.
On GPU-placed tensors (with gpu feature), dispatches through the GPU
submit path before computing the CPU result.
Trait Implementations§
impl<'a> Add for &'a FabricTensor
Elementwise addition via the + operator.
Usage: (&a + &b).unwrap()
Delegates to FabricTensor::add. Both tensors must have the same shape.
type Output = Result<FabricTensor, FabricError>
The resulting type after applying the + operator.
impl Mul<f32> for &FabricTensor
Scalar multiplication via the * operator.
Usage: (&a * 2.0f32).unwrap()
Delegates to FabricTensor::scale.
impl<'a> Mul for &'a FabricTensor
Elementwise multiplication via the * operator.
Usage: (&a * &b).unwrap()
Delegates to FabricTensor::mul. Both tensors must have the same shape.
type Output = Result<FabricTensor, FabricError>
The resulting type after applying the * operator.
impl<'a> Sub for &'a FabricTensor
Elementwise subtraction via the - operator.
Usage: (&a - &b).unwrap()
Delegates to FabricTensor::subtract. Both tensors must have the same shape.
type Output = Result<FabricTensor, FabricError>
The resulting type after applying the - operator.