Introduction to Tensors in Neural Networks

Posted On Aug. 24, 2020

In deep learning, we store our data in the form of tensors.

A tensor is a container for data - almost always numerical data. So, it's a container for numbers. In NumPy, a tensor is represented as a multidimensional array in which we store our input data.

In general, all current machine-learning systems use tensors as their basic data structure.

For example, matrices are 2D tensors. Tensors can also be defined as a generalization of matrices to an arbitrary number of dimensions.

Different Types of Tensors -

Scalars (0D Tensors)

A tensor that contains only one number is called a scalar (or scalar-tensor, or 0-dimensional tensor, or 0D tensor).

In NumPy, a float32 or float64 number is a scalar-tensor (or scalar array). You can display the number of axes of a NumPy tensor via the ndim attribute; a scalar-tensor has 0 axes (ndim == 0).

The number of axes of a tensor is also called its rank.

>>> import numpy as np
>>> x = np.array(15)
>>> x
array(15)
>>> x.ndim
0

Vectors (1D tensors)

An array of numbers is called a vector, or 1D tensor. A 1D tensor is said to have exactly one axis. 

>>> x = np.array([15, 2, 7, 4])
>>> x
array([15, 2, 7, 4])
>>> x.ndim
1

This vector has four entries and so is called a 4-dimensional vector.

Note: Don’t confuse a 4D vector with a 4D tensor! A 4D vector has only one axis and has four dimensions along its axis, whereas a 4D tensor has four axes (and may have any number of dimensions along each axis).
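To make the distinction concrete, here is a small sketch comparing the two (the shape (2, 2, 2, 2) for the 4D tensor is an arbitrary choice for illustration):

```python
import numpy as np

# A 4-dimensional vector: one axis, with four entries along that axis
v = np.array([15, 2, 7, 4])
print(v.ndim)   # 1
print(v.shape)  # (4,)

# A 4D tensor: four axes (here, two dimensions along each axis)
t = np.zeros((2, 2, 2, 2))
print(t.ndim)   # 4
print(t.shape)  # (2, 2, 2, 2)
```

So "4D" in "4D vector" counts entries along a single axis, while "4D" in "4D tensor" counts the axes themselves.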

Matrices (2D tensors)

An array of vectors is a matrix or a 2D tensor. A matrix has two axes (often referred to as rows and columns). You can visually interpret a matrix as a rectangular grid of numbers.

>>> x = np.array([[5, 78, 2, 34, 0],
                  [6, 79, 3, 35, 1],
                  [7, 80, 4, 36, 2]])
>>> x.ndim
2

The entries from the first axis are called the rows, and the entries from the second axis are called the columns.
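To see the rows/columns terminology in action, you can pull out the first row and the first column of the matrix above with basic NumPy indexing:

```python
import numpy as np

x = np.array([[5, 78, 2, 34, 0],
              [6, 79, 3, 35, 1],
              [7, 80, 4, 36, 2]])

print(x[0])     # first row:    [ 5 78  2 34  0]
print(x[:, 0])  # first column: [5 6 7]
```

Indexing along the first axis (x[0]) selects a row; slicing along the second axis (x[:, 0]) selects a column.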
