.. note::
    :class: sphx-glr-download-link-note

    Click :ref:`here <sphx_glr_download_beginner_basics_tensor_tutorial.py>` to download the full example code
.. rst-class:: sphx-glr-example-title

.. _sphx_glr_beginner_basics_tensor_tutorial.py:


`Learn the Basics <intro.html>`_ ||
`Quickstart <quickstart_tutorial.html>`_ || 
**Tensors** || 
`Datasets & DataLoaders <data_tutorial.html>`_ ||
`Transforms <transforms_tutorial.html>`_ ||
`Build Model <buildmodel_tutorial.html>`_ ||
`Autograd <autograd_tutorial.html>`_ ||
`Optimization <optimization_tutorial.html>`_ ||
`Save & Load Model <saveloadrun_tutorial.html>`_

Tensors 
==========================

Tensors are specialized data structures that are very similar to arrays and matrices.
In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.

Tensors are similar to `NumPy’s <https://numpy.org/>`_ ndarrays, except that tensors can run on GPUs or other hardware accelerators. In fact, tensors and
NumPy arrays can often share the same underlying memory, eliminating the need to copy data (see :ref:`bridge-to-np-label`). Tensors 
are also optimized for automatic differentiation (we'll see more about that later in the `Autograd <autograd_tutorial.html>`__ 
section). If you’re familiar with ndarrays, you’ll be right at home with the Tensor API. If not, follow along!


.. code-block:: default


    import torch
    import numpy as np








Initializing a Tensor
~~~~~~~~~~~~~~~~~~~~~

Tensors can be initialized in various ways. Take a look at the following examples:

**Directly from data**

Tensors can be created directly from data. The data type is automatically inferred.


.. code-block:: default


    data = [[1, 2],[3, 4]]
    x_data = torch.tensor(data)
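
Since ``data`` holds Python integers, the inferred datatype here is
``torch.int64``; a quick check (not part of the original example):

.. code-block:: default


    print(x_data.dtype)  # torch.int64, inferred from the Python ints in data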







**From a NumPy array**

Tensors can be created from NumPy arrays (and vice versa - see :ref:`bridge-to-np-label`).


.. code-block:: default

    np_array = np.array(data)
    x_np = torch.from_numpy(np_array)








**From another tensor:**

The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.


.. code-block:: default


    x_ones = torch.ones_like(x_data) # retains the properties of x_data
    print(f"Ones Tensor: \n {x_ones} \n")

    x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
    print(f"Random Tensor: \n {x_rand} \n")






.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Ones Tensor: 
     tensor([[1, 1],
            [1, 1]]) 

    Random Tensor: 
     tensor([[0.4223, 0.1719],
            [0.3184, 0.2631]])


**With random or constant values:**

``shape`` is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.


.. code-block:: default


    shape = (2, 3,)
    rand_tensor = torch.rand(shape)
    ones_tensor = torch.ones(shape)
    zeros_tensor = torch.zeros(shape)

    print(f"Random Tensor: \n {rand_tensor} \n")
    print(f"Ones Tensor: \n {ones_tensor} \n")
    print(f"Zeros Tensor: \n {zeros_tensor}")







.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Random Tensor: 
     tensor([[0.1602, 0.6000, 0.4126],
            [0.5558, 0.0912, 0.3004]]) 

    Ones Tensor: 
     tensor([[1., 1., 1.],
            [1., 1., 1.]]) 

    Zeros Tensor: 
     tensor([[0., 0., 0.],
            [0., 0., 0.]])


--------------


Attributes of a Tensor
~~~~~~~~~~~~~~~~~~~~~~

Tensor attributes describe a tensor's shape, datatype, and the device on which it is stored.


.. code-block:: default


    tensor = torch.rand(3, 4)

    print(f"Shape of tensor: {tensor.shape}")
    print(f"Datatype of tensor: {tensor.dtype}")
    print(f"Device tensor is stored on: {tensor.device}")






.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Shape of tensor: torch.Size([3, 4])
    Datatype of tensor: torch.float32
    Device tensor is stored on: cpu


--------------


Operations on Tensors
~~~~~~~~~~~~~~~~~~~~~

Over 100 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing, 
indexing, slicing), sampling and more are
comprehensively described `here <https://pytorch.org/docs/stable/torch.html>`__.

Each of these operations can be run on the GPU (typically at higher speeds than on a
CPU). If you’re using Colab, allocate a GPU by going to Runtime > Change runtime type > GPU.

By default, tensors are created on the CPU. We need to explicitly move tensors to the GPU using
the ``.to`` method (after checking for GPU availability). Keep in mind that copying large tensors
across devices can be expensive in terms of time and memory!


.. code-block:: default


    # We move our tensor to the GPU if available
    if torch.cuda.is_available():
      tensor = tensor.to('cuda')
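
Tensors can also be created directly on the target device by passing a
``device`` argument, which avoids a separate CPU-to-GPU copy (a small sketch,
assuming a CUDA GPU is available):

.. code-block:: default


    if torch.cuda.is_available():
        gpu_tensor = torch.ones(2, 2, device='cuda')  # allocated on the GPU from the start
        print(gpu_tensor.device)                      # cuda:0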








Try out some of the operations from the list.
If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use.


**Standard numpy-like indexing and slicing:**


.. code-block:: default


    tensor = torch.ones(4, 4)
    print('First row: ', tensor[0])
    print('First column: ', tensor[:, 0])
    print('Last column:', tensor[..., -1])
    tensor[:,1] = 0
    print(tensor)





.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    First row:  tensor([1., 1., 1., 1.])
    First column:  tensor([1., 1., 1., 1.])
    Last column: tensor([1., 1., 1., 1.])
    tensor([[1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.]])
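
Boolean mask indexing also works as it does in NumPy (an extra illustration,
not part of the original example):

.. code-block:: default


    mask = tensor > 0
    print(tensor[mask])  # 1-D tensor of the twelve non-zero elements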


**Joining tensors** You can use ``torch.cat`` to concatenate a sequence of tensors along a given dimension.
See also `torch.stack <https://pytorch.org/docs/stable/generated/torch.stack.html>`__,
another tensor joining op that is subtly different from ``torch.cat``.


.. code-block:: default

    t1 = torch.cat([tensor, tensor, tensor], dim=1)
    print(t1)






.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
            [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
            [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
            [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.]])
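
For comparison, ``torch.stack`` joins tensors along a *new* dimension rather
than an existing one (a short sketch):

.. code-block:: default


    t2 = torch.stack([tensor, tensor, tensor], dim=1)
    print(t2.shape)  # torch.Size([4, 3, 4]) -- a new dimension of size 3 was inserted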


**Arithmetic operations**


.. code-block:: default


    # This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value
    y1 = tensor @ tensor.T
    y2 = tensor.matmul(tensor.T)

    y3 = torch.rand_like(tensor)
    torch.matmul(tensor, tensor.T, out=y3)


    # This computes the element-wise product. z1, z2, z3 will have the same value
    z1 = tensor * tensor
    z2 = tensor.mul(tensor)

    z3 = torch.rand_like(tensor)
    torch.mul(tensor, tensor, out=z3)
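
    # A quick sanity check (not part of the original example): all variants agree
    print(torch.allclose(y1, y2) and torch.allclose(y1, y3))  # True
    print(torch.allclose(z1, z2) and torch.allclose(z1, z3))  # True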








**Single-element tensors** If you have a one-element tensor, for example by aggregating all
values of a tensor into one value, you can convert it to a Python
numerical value using ``item()``:


.. code-block:: default


    agg = tensor.sum()
    agg_item = agg.item()  
    print(agg_item, type(agg_item))






.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    12.0 <class 'float'>
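
``item()`` only works for tensors with exactly one element; calling it on a
larger tensor raises an error (a minimal sketch):

.. code-block:: default


    try:
        torch.ones(2).item()
    except RuntimeError as e:
        print(e)  # e.g. "a Tensor with 2 elements cannot be converted to Scalar"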


**In-place operations**
Operations that store the result into the operand are called in-place. They are denoted by a ``_`` suffix.
For example: ``x.copy_(y)`` and ``x.t_()`` will change ``x``.


.. code-block:: default


    print(tensor, "\n")
    tensor.add_(5)
    print(tensor)





.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.],
            [1., 0., 1., 1.]]) 

    tensor([[6., 5., 6., 6.],
            [6., 5., 6., 6.],
            [6., 5., 6., 6.],
            [6., 5., 6., 6.]])


.. note::
     In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss
     of history. Hence, their use is discouraged.
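
For instance, an in-place update on a leaf tensor that requires gradients fails
immediately (a minimal sketch):

.. code-block:: default


    a = torch.ones(3, requires_grad=True)
    try:
        a.add_(1)
    except RuntimeError as e:
        print(e)  # e.g. "a leaf Variable that requires grad is being used in an in-place operation."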

--------------


.. _bridge-to-np-label:

Bridge with NumPy
~~~~~~~~~~~~~~~~~
Tensors on the CPU and NumPy arrays can share their underlying memory
locations, and changing one will change the other. (Only CPU tensors support
this bridge; call ``.cpu()`` on a GPU tensor first.)

Tensor to NumPy array
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


.. code-block:: default

    t = torch.ones(5)
    print(f"t: {t}")
    n = t.numpy()
    print(f"n: {n}")





.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    t: tensor([1., 1., 1., 1., 1.])
    n: [1. 1. 1. 1. 1.]


A change in the tensor is reflected in the NumPy array.


.. code-block:: default


    t.add_(1)
    print(f"t: {t}")
    print(f"n: {n}")






.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    t: tensor([2., 2., 2., 2., 2.])
    n: [2. 2. 2. 2. 2.]


NumPy array to Tensor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


.. code-block:: default

    n = np.ones(5)
    t = torch.from_numpy(n)







Changes in the NumPy array are reflected in the tensor. (Note that
``torch.from_numpy`` preserves NumPy's default ``float64`` dtype, which is why
the tensor prints with ``dtype=torch.float64``.)


.. code-block:: default

    np.add(n, 1, out=n)
    print(f"t: {t}")
    print(f"n: {n}")




.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    t: tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
    n: [2. 2. 2. 2. 2.]



.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes  6.125 seconds)


.. _sphx_glr_download_beginner_basics_tensor_tutorial.py:


.. only:: html

 .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example



  .. container:: sphx-glr-download

     :download:`Download Python source code: tensor_tutorial.py <tensor_tutorial.py>`



  .. container:: sphx-glr-download

     :download:`Download Jupyter notebook: tensor_tutorial.ipynb <tensor_tutorial.ipynb>`


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.readthedocs.io>`_