Chapter 10 — Tensor Basics

Tensors are the core data structure of modern AI/ML frameworks. They generalize scalars, vectors, and matrices to an arbitrary number of dimensions, enabling computation on the high-dimensional data that deep learning depends on.

10.1 What is a Tensor?

A tensor is a multi-dimensional array of numbers. Tensors generalize:

  • Scalars → 0D tensors (single number, e.g., 5)
  • Vectors → 1D tensors (e.g., [1, 2, 3])
  • Matrices → 2D tensors (e.g., [[1,2],[3,4]])
  • Higher-dimensional arrays → 3D, 4D, and higher tensors (used for images, videos, and batches)

AI/ML Context: Deep learning frameworks like TensorFlow and PyTorch operate primarily on tensors. Every input, weight, gradient, or feature map is a tensor.

10.2 Tensor Shapes & Dimensions

- Rank: The number of dimensions (axes) of a tensor. A scalar has rank 0, a vector rank 1, a matrix rank 2, and so on. (Here "rank" means the number of axes, not the matrix rank from linear algebra.)
- Shape: The size along each dimension. Example: a 3x4 matrix → shape = (3,4).

Example:

import torch

# scalar
x = torch.tensor(5)        # rank 0
# vector
v = torch.tensor([1,2,3])  # rank 1, shape (3,)
# matrix
M = torch.tensor([[1,2],[3,4]])  # rank 2, shape (2,2)
# 3D tensor
T = torch.randn(2,3,4)     # rank 3, shape (2,3,4)
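
Continuing the snippet above, PyTorch exposes the rank and shape directly through the .ndim and .shape attributes:

print(x.ndim, x.shape)  # 0 torch.Size([])
print(v.ndim, v.shape)  # 1 torch.Size([3])
print(M.ndim, M.shape)  # 2 torch.Size([2, 2])
print(T.ndim, T.shape)  # 3 torch.Size([2, 3, 4])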

10.3 Basic Tensor Operations

Most tensor operations generalize familiar vector and matrix operations:

  • Addition & subtraction → element-wise
  • Scalar multiplication → multiply each element
  • Matrix multiplication / dot product → higher-dimensional equivalents using matmul
  • Reshape, transpose, slicing → rearrange data, usually as views rather than copies (see the sketch after the code below)

# Tensor operations
A = torch.tensor([[1,2],[3,4]])
B = torch.tensor([[2,0],[1,3]])

# addition
C = A + B

# scalar multiplication
D = 3 * A

# matrix multiplication
E = torch.matmul(A, B)

print(C)
print(D)
print(E)
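
The view-style operations from the list above, as a minimal sketch (all standard PyTorch calls):

# reshape, transpose, and slicing typically return views of the same data
F = torch.arange(6)   # tensor([0, 1, 2, 3, 4, 5])
G = F.reshape(2, 3)   # shape (2, 3)
H = G.t()             # transpose, shape (3, 2)
row = G[0]            # first row, shape (3,)
col = G[:, 1]         # second column, shape (2,)

print(G)
print(H)
print(row, col)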

10.4 Why Tensors are Important in AI/ML

Tensors are critical because they allow:

  • Batch processing → process multiple inputs simultaneously
  • GPU acceleration → highly parallelizable computations
  • Representation of multi-dimensional data → images (H×W×C), videos (T×H×W×C), and text embeddings
  • Automatic differentiation → compute gradients efficiently for training neural networks (a small example follows this list)
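
As a small illustration of automatic differentiation (standard PyTorch autograd, shown here on a toy function rather than a full network):

# autograd: compute dy/dx for y = sum(x**2)
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()    # fills x.grad with dy/dx = 2x
print(x.grad)   # tensor([2., 4., 6.])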

10.5 Quick PyTorch Example (Practical)

import torch

# create a 4D tensor representing a batch of 2 RGB images (3x3 pixels each)
images = torch.randn(2, 3, 3, 3)  # shape (batch, channels, height, width)

# sum across channels
channel_sum = images.sum(dim=1)

# flatten images for a linear layer
flattened = images.view(images.size(0), -1)

print("Original shape:", images.shape)
print("After sum over channels:", channel_sum.shape)
print("Flattened for model input:", flattened.shape)

10.6 Geometric & Intuitive Understanding

- 0D → single point
- 1D → line of points
- 2D → grid (image)
- 3D → multiple images or volumetric data
- 4D+ → batches of sequences or video frames

10.7 AI/ML Use Cases (Why Tensors Matter)

  • Deep Learning: Inputs, weights, and activations in neural networks are all tensors.
  • Computer Vision: Images and feature maps are 3D or 4D tensors.
  • NLP: Sequences of word embeddings form 2D/3D tensors.
  • Reinforcement Learning: States, actions, and rewards can be represented as tensors for batch training.
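
For instance, a toy NLP batch built with nn.Embedding, PyTorch's standard lookup table (the vocabulary and embedding sizes here are made up for illustration):

import torch
import torch.nn as nn

# hypothetical vocabulary of 100 tokens, 8-dimensional embeddings
embed = nn.Embedding(num_embeddings=100, embedding_dim=8)

# batch of 4 sentences, each 5 token IDs long
token_ids = torch.randint(0, 100, (4, 5))
vectors = embed(token_ids)

print(vectors.shape)  # torch.Size([4, 5, 8]) -> (batch, sequence, embedding)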

10.8 Exercises

  1. Create a 4D tensor representing a batch of 5 grayscale images of size 28x28.
  2. Perform element-wise multiplication of two tensors of shape (3,3,3).
  3. Reshape a tensor of shape (2,3,4) into (3,2,4) and verify that the elements, in flattened order, are unchanged.
  4. Sum along different dimensions and interpret the results.

Answers / Hints
  1. Use torch.randn(5,1,28,28) for grayscale batch.
  2. Use * operator for element-wise multiplication.
  3. Use tensor.view(new_shape) or tensor.reshape(new_shape).
  4. Use tensor.sum(dim=?) for different axes.
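
A sketch of full solutions, assuming the shapes stated in the exercises:

import torch

# 1. batch of 5 grayscale 28x28 images: (batch, channels, height, width)
imgs = torch.randn(5, 1, 28, 28)

# 2. element-wise multiplication of two (3,3,3) tensors
a, b = torch.randn(3, 3, 3), torch.randn(3, 3, 3)
c = a * b

# 3. reshape (2,3,4) -> (3,2,4); the flattened elements are unchanged
t = torch.randn(2, 3, 4)
r = t.reshape(3, 2, 4)
print(torch.equal(t.flatten(), r.flatten()))  # True

# 4. summing over a dimension removes that axis from the shape
print(t.sum(dim=0).shape)  # torch.Size([3, 4])
print(t.sum(dim=2).shape)  # torch.Size([2, 3])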

10.9 Practice Projects / Mini Tasks

  • Load the MNIST dataset and convert images into 4D tensors for training a CNN.
  • Create a batch of random sequences and compute the mean along the time dimension (a sketch follows this list).
  • Implement a mini fully-connected neural network using PyTorch and verify tensor shapes at each layer.
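
A minimal sketch for the second task; the (batch, time, features) layout is an assumption, since the task does not fix one:

import torch

# batch of 8 random sequences, 10 time steps, 16 features each
seqs = torch.randn(8, 10, 16)

# mean along the time dimension (dim=1) collapses the 10 steps
seq_means = seqs.mean(dim=1)
print(seq_means.shape)  # torch.Size([8, 16])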

10.10 Further Reading & Videos

  • PyTorch documentation — torch.Tensor
  • TensorFlow documentation — tf.Tensor
  • Deep Learning Book — Chapters on data representation and tensor operations
  • 3Blue1Brown — visual intuition for high-dimensional arrays

Next chapter: Tensor Operations & Broadcasting — explore advanced operations, reshaping, and broadcasting rules in deep learning frameworks.
