
7. What is a Tensor?

A tensor is a multi-dimensional array: a generalization of scalars, vectors, and matrices to an arbitrary number of dimensions. Tensors are the fundamental data structure in deep learning and are used extensively in frameworks like PyTorch and TensorFlow.

Types of Tensors

  • Scalar (0D tensor): A single number.
    Example: 7
  • Vector (1D tensor): An array of numbers.
    Example: [1, 2, 3]
  • Matrix (2D tensor): A 2-dimensional array of numbers.
    Example:
    [[1, 2, 3], [4, 5, 6]]
  • Higher-Dimensional Tensors (3D and above): Arrays with three or more dimensions.
    Example: A 3D tensor might look like a stack of matrices, represented as
    [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
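Each of the examples above maps directly to a PyTorch tensor. A minimal sketch (values are illustrative) that builds one tensor of each rank and checks its number of dimensions:

```python
import torch

# Scalar (0D), vector (1D), matrix (2D), and 3D tensor
scalar = torch.tensor(7)
vector = torch.tensor([1, 2, 3])
matrix = torch.tensor([[1, 2, 3], [4, 5, 6]])
tensor3d = torch.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])

print(scalar.ndim)    # 0
print(vector.ndim)    # 1
print(matrix.ndim)    # 2
print(tensor3d.ndim)  # 3
```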

Tensors in PyTorch

In PyTorch, tensors are central to almost all operations. They are similar to NumPy arrays but can also run on GPUs, which greatly accelerates computation.

Example of Creating and Manipulating Tensors in PyTorch:

 
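Here is a minimal sketch using standard PyTorch constructors; the specific values and shapes are illustrative:

```python
import torch

# Create a tensor directly from a Python list
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

# Common constructors: all-zeros and all-ones tensors of a given shape
zeros = torch.zeros(2, 3)
ones = torch.ones(2, 3)

# Elementwise manipulation
y = x + 10   # add 10 to every element
z = x * x    # elementwise square

# Reshape: 2x2 -> 4x1 (same data, new view of the shape)
reshaped = x.reshape(4, 1)

print(y)
print(reshaped.shape)  # torch.Size([4, 1])
```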

Key Properties of Tensors

  • Shape: Describes the dimensions of the tensor (e.g., a 2x3 matrix has a shape of (2, 3)).
  • Data Type: The type of data contained in the tensor (e.g., float32, int64).
  • Device: Indicates whether the tensor is stored on a CPU or GPU.
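These three properties can be inspected directly on any tensor. A short sketch (the example tensor is arbitrary; Python integers default to `int64` in PyTorch):

```python
import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(t.shape)   # torch.Size([2, 3])
print(t.dtype)   # torch.int64
print(t.device)  # cpu

# The data type can also be set explicitly at creation time
f = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32)
print(f.dtype)   # torch.float32
```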

Operations on Tensors

Tensors support a variety of operations, including arithmetic operations, linear algebra operations, and more. Here are a few examples:

 
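A minimal sketch of common operations: elementwise arithmetic, matrix multiplication, and reductions (the input values are illustrative):

```python
import torch

a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.tensor([[5.0, 6.0], [7.0, 8.0]])

# Elementwise arithmetic
s = a + b
p = a * b

# Matrix multiplication (linear algebra); @ is shorthand for torch.matmul
m = a @ b

# Reductions over all elements
total = a.sum()
mean = a.mean()

print(m)             # [[19., 22.], [43., 50.]]
print(total.item())  # 10.0
```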

Why Tensors?

Tensors are designed to store and process the data required by deep learning models efficiently. They support large-scale computation by leveraging hardware accelerators such as GPUs, which is crucial for training complex neural networks.
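Moving a tensor to an accelerator is a one-line operation. A small sketch that falls back to the CPU when no GPU is available:

```python
import torch

t = torch.randn(3, 3)

# Use the GPU only when one is actually available
device = "cuda" if torch.cuda.is_available() else "cpu"
t = t.to(device)

print(t.device)  # cuda:0 on a GPU machine, cpu otherwise
```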

In summary, tensors are a core data structure in machine learning frameworks like PyTorch, enabling efficient computation and manipulation of data for training models.