.. rst-class:: sphx-glr-example-title

.. _sphx_glr_beginner_blitz_tensor_tutorial.py:


What is PyTorch?
================

It’s a Python-based scientific computing package targeted at two audiences:

-  A replacement for NumPy to use the power of GPUs
-  A deep learning research platform that provides maximum flexibility
   and speed

Getting Started
---------------

Tensors
^^^^^^^

Tensors are similar to NumPy’s ndarrays, with the addition being that
Tensors can also be used on a GPU to accelerate computing.


.. code-block:: python

    from __future__ import print_function
    import torch


Construct a 5x3 matrix, uninitialized:


.. code-block:: python

    x = torch.empty(5, 3)
    print(x)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[          0.0000,          -0.0000,     -251817.4219],
            [-8592030720.0000,           0.0000,           0.0000],
            [         -0.0000,           0.0000,          -0.0000],
            [          0.0000,           0.0000,           0.0000],
            [          0.0000,          -0.0000,     -246207.2344]])


Construct a randomly initialized matrix:


.. code-block:: python

    x = torch.rand(5, 3)
    print(x)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[0.9704, 0.3705, 0.8422],
            [0.8487, 0.7496, 0.8310],
            [0.0902, 0.0292, 0.7219],
            [0.9508, 0.8433, 0.9557],
            [0.7385, 0.3457, 0.5150]])


Construct a matrix filled with zeros and of dtype ``long``:


.. code-block:: python

    x = torch.zeros(5, 3, dtype=torch.long)
    print(x)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[0, 0, 0],
            [0, 0, 0],
            [0, 0, 0],
            [0, 0, 0],
            [0, 0, 0]])


Construct a tensor directly from data:


.. code-block:: python

    x = torch.tensor([5.5, 3])
    print(x)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([5.5000, 3.0000])


or create a tensor based on an existing tensor. These methods
will reuse properties of the input tensor, e.g. dtype, unless
new values are provided by the user.


.. code-block:: python

    x = x.new_ones(5, 3, dtype=torch.double)      # new_* methods take in sizes
    print(x)

    x = torch.randn_like(x, dtype=torch.float)    # override dtype!
    print(x)                                      # result has the same size


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[1., 1., 1.],
            [1., 1., 1.],
            [1., 1., 1.],
            [1., 1., 1.],
            [1., 1., 1.]], dtype=torch.float64)
    tensor([[-2.3413, -1.1431,  0.1331],
            [-1.9711, -0.4251, -0.5430],
            [ 0.1863,  0.1646,  0.9951],
            [ 0.5716,  0.0824, -0.7097],
            [-0.5579,  0.6171,  0.8105]])


Get its size:


.. code-block:: python

    print(x.size())


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    torch.Size([5, 3])


.. note::
    ``torch.Size`` is in fact a tuple, so it supports all tuple operations.
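Because it behaves like a tuple, a size can be unpacked, indexed, and passed
to ``len()``. A minimal sketch (the variable names below are only
illustrative and not part of the original example):


.. code-block:: python

    rows, cols = x.size()    # tuple-style unpacking
    print(rows, cols)        # 5 3
    print(x.size()[0])       # indexing works as well
    print(len(x.size()))     # number of dimensions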
Operations
^^^^^^^^^^
There are multiple syntaxes for operations. In the following
example, we will take a look at the addition operation.

Addition: syntax 1


.. code-block:: python

    y = torch.rand(5, 3)
    print(x + y)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[-1.4486, -0.1658,  0.4736],
            [-1.8568,  0.4701, -0.0302],
            [ 0.3034,  1.0266,  1.8048],
            [ 1.1426,  0.8082, -0.1959],
            [-0.4536,  0.6279,  1.6068]])


Addition: syntax 2


.. code-block:: python

    print(torch.add(x, y))


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[-1.4486, -0.1658,  0.4736],
            [-1.8568,  0.4701, -0.0302],
            [ 0.3034,  1.0266,  1.8048],
            [ 1.1426,  0.8082, -0.1959],
            [-0.4536,  0.6279,  1.6068]])


Addition: providing an output tensor as argument


.. code-block:: python

    result = torch.empty(5, 3)
    torch.add(x, y, out=result)
    print(result)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[-1.4486, -0.1658,  0.4736],
            [-1.8568,  0.4701, -0.0302],
            [ 0.3034,  1.0266,  1.8048],
            [ 1.1426,  0.8082, -0.1959],
            [-0.4536,  0.6279,  1.6068]])


Addition: in-place


.. code-block:: python

    # adds x to y
    y.add_(x)
    print(y)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[-1.4486, -0.1658,  0.4736],
            [-1.8568,  0.4701, -0.0302],
            [ 0.3034,  1.0266,  1.8048],
            [ 1.1426,  0.8082, -0.1959],
            [-0.4536,  0.6279,  1.6068]])


.. note::
    Any operation that mutates a tensor in-place is post-fixed with an ``_``.
    For example: ``x.copy_(y)`` and ``x.t_()`` will change ``x``.

You can use standard NumPy-like indexing with all bells and whistles!


.. code-block:: python

    print(x[:, 1])


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([-1.1431, -0.4251,  0.1646,  0.0824,  0.6171])


Resizing: If you want to resize/reshape a tensor, you can use ``torch.view``:


.. code-block:: python

    x = torch.randn(4, 4)
    y = x.view(16)
    z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
    print(x.size(), y.size(), z.size())


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])


If you have a one-element tensor, use ``.item()`` to get the value as a
Python number


.. code-block:: python

    x = torch.randn(1)
    print(x)
    print(x.item())


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([1.6580])
    1.6579647064208984


**Read later:**

  100+ Tensor operations, including transposing, indexing, slicing,
  mathematical operations, linear algebra, random numbers, etc.,
  are described `here `_.

NumPy Bridge
------------

Converting a Torch Tensor to a NumPy array and vice versa is a breeze.

The Torch Tensor and NumPy array will share their underlying memory
locations, and changing one will change the other.

Converting a Torch Tensor to a NumPy Array
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


.. code-block:: python

    a = torch.ones(5)
    print(a)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([1., 1., 1., 1., 1.])


.. code-block:: python

    b = a.numpy()
    print(b)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [1. 1. 1. 1. 1.]


See how the NumPy array changed in value.


.. code-block:: python

    a.add_(1)
    print(a)
    print(b)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([2., 2., 2., 2., 2.])
    [2. 2. 2. 2. 2.]


Converting NumPy Array to Torch Tensor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See how changing the np array changed the Torch Tensor automatically


.. code-block:: python

    import numpy as np
    a = np.ones(5)
    b = torch.from_numpy(a)
    np.add(a, 1, out=a)
    print(a)
    print(b)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [2. 2. 2. 2. 2.]
    tensor([2., 2., 2., 2., 2.], dtype=torch.float64)


All the Tensors on the CPU except a CharTensor support converting to
NumPy and back.

CUDA Tensors
------------

Tensors can be moved onto any device using the ``.to`` method.


.. code-block:: python

    # let us run this cell only if CUDA is available
    # We will use ``torch.device`` objects to move tensors in and out of GPU
    if torch.cuda.is_available():
        device = torch.device("cuda")          # a CUDA device object
        y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
        x = x.to(device)                       # or just use strings ``.to("cuda")``
        z = x + y
        print(z)
        print(z.to("cpu", torch.double))       # ``.to`` can also change dtype together!
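When a script should run both with and without a GPU, a common pattern is to
pick the device once up front and create tensors directly on it. The snippet
below is a minimal sketch of that idea, not part of the original example:


.. code-block:: python

    import torch

    # Use CUDA when it is available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    x = torch.randn(5, 3, device=device)  # created directly on the chosen device
    y = torch.ones_like(x)                # lives on the same device as ``x``
    print((x + y).to("cpu"))              # move the result back for printing

Writing the rest of a script against such a ``device`` variable keeps the code
unchanged no matter where the tensors actually live.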