PyTorch Basics Part 2


This is the second part of the PyTorch basics series. If you have not looked at the first part yet, I suggest checking it out: PyTorch Basics Part 1.

In the first story we looked at how to create a conda environment, install PyTorch and JupyterLab, and finally at various ways of creating a tensor. In this part we will look at operations on tensors; indexing, slicing and joining tensors; and CUDA tensors with a GPU.

Operations on Tensors

As the title suggests, we can perform mathematical operations like +, -, *, /. Not just with the operators: we can also use the corresponding functions add(), sub() (or subtract()), mul() (or multiply()) and div() (or divide()). The example below subtracts a tensor y from a tensor x using all three methods.
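Here is a small sketch of what that looks like (the tensor values are just illustrative, and describe below is a minimal stand-in for the helper defined in part 1):

import torch as t

# minimal stand-in for the describe() helper from part 1:
# it prints the type, shape and values of a tensor
def describe(x):
    print("Type: {}".format(x.type()))
    print("Shape: {}".format(tuple(x.shape)))
    print("Values:\n{}\n".format(x))

x = t.tensor([[5., 6.], [7., 8.]])
y = t.tensor([[1., 2.], [3., 4.]])

describe(x - y)        # operator
describe(x.sub(y))     # tensor method, same as x.subtract(y)
describe(t.sub(x, y))  # torch-level function, same as t.subtract(x, y)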

As you can see, the result of all three methods is the same. describe is the user-defined helper method I discussed in part 1, and I imported torch as t.

All these mathematical operations require two inputs: two tensors, or the same tensor used twice. But there are a few methods that operate on a single tensor.

torch.arange is a method that generates a 1-D tensor (a row tensor) starting from 0 up to, but not including, the length you specify as a parameter.
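For example (the length here is picked arbitrarily):

import torch as t

x = t.arange(5)   # values 0, 1, 2, 3, 4 -> a 1-D tensor of length 5
print(x)          # tensor([0, 1, 2, 3, 4])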

view is a method to reshape a 1-D tensor into a 2-D one. For this to work, the length of the 1-D tensor must equal the product of the rows and columns of the 2-D tensor, otherwise it throws an error.
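A small sketch of that, reusing arange to build the 1-D tensor:

import torch as t

x = t.arange(6)       # 1-D tensor with 6 elements
print(x.view(2, 3))   # 2 rows x 3 columns = 6 elements, so this works
# x.view(2, 4) would raise a RuntimeError, since 2 x 4 = 8 != 6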


torch.sum is a method that adds up all the values along a particular dimension.
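For example, on a 2×3 tensor:

import torch as t

x = t.arange(6).view(2, 3)   # [[0, 1, 2], [3, 4, 5]]
print(t.sum(x, dim=0))       # sum down each column -> tensor([3, 5, 7])
print(t.sum(x, dim=1))       # sum across each row  -> tensor([3, 12])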

torch.transpose is another method. As the name suggests, it transposes the tensor. Along with the tensor we have to pass the two dimensions we want to swap. For a 2-D tensor they are 0 and 1 (any dimension in the range [-2, 1] is valid) and their order does not matter; for a 3-D tensor the result depends on which pair of dimensions you pick. In this story we will just look at 2-D; for 3-D I will write another story about understanding dimensions in PyTorch.
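A quick 2-D sketch:

import torch as t

x = t.arange(6).view(2, 3)    # shape (2, 3)
print(t.transpose(x, 0, 1))   # shape (3, 2)
print(t.transpose(x, 1, 0))   # same result: the order of the two dims does not matter
print(x.transpose(-2, -1))    # negative dims count from the end, also shape (3, 2)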

There are other useful methods in the torch library; we will look at them in the last part, Miscellaneous.

Indexing, Slicing and Joining

Indexing is a simple concept: we can access elements in a tensor by their positions, and those positions are called indexes.
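For example, on a small 3×4 tensor:

import torch as t

x = t.arange(12).view(3, 4)   # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
print(x[1, 2])                # element at row 1, column 2 -> tensor(6)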

The above is for getting a single element. But we can also access particular rows/columns, e.g. row 0 and row 2. Important: the indexes need to be Long tensors.
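A sketch of this, assuming torch.index_select is the method used (it takes the dimension and a LongTensor of indexes):

import torch as t

x = t.arange(12).view(3, 4)
ind = t.tensor([0, 2], dtype=t.long)        # indexes must be a LongTensor
print(t.index_select(x, dim=0, index=ind))  # rows 0 and 2
print(t.index_select(x, dim=1, index=ind))  # columns 0 and 2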

ind has the values 0 and 2, and dim=0 means rows, so this extracts rows 0 and 2. If dim were 1, it would extract column 0 and column 2.

Instead of getting a whole row or column, if you want a few specific elements like (0,0), (2,1), (1,1) and (0,2), we can also get them like this:
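Roughly like this, continuing with the same 3×4 tensor:

import torch as t

x = t.arange(12).view(3, 4)
rows = t.tensor([0, 2, 1, 0])
cols = t.tensor([0, 1, 1, 2])
print(x[rows, cols])   # elements (0,0), (2,1), (1,1), (0,2) -> tensor([0, 9, 5, 2])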

All the row indexes go in one tensor and the column indexes in another tensor.

Slicing is extracting a sub-tensor from the main tensor by specifying the range of row indexes and column indexes you want in the sub-tensor. E.g.: I have a tensor of 4×4 elements and I want the sub-tensor from rows 0 to 1 and columns 0 to 2.
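A sketch of that slice:

import torch as t

x = t.arange(16).view(4, 4)   # 4x4 tensor with values 0..15
print(x[:2, :3])              # rows 0 to 1, columns 0 to 2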

And if I want the elements in rows 1 to 2 with column 2, then:
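Which, on the same 4×4 tensor, would look like:

import torch as t

x = t.arange(16).view(4, 4)
print(x[1:3, 2])   # rows 1 to 2, column 2 -> tensor([6, 10])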

Joining requires two tensors, which we can join/concatenate along rows or columns; torch.cat() is the method used.
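For example, with two 2×3 tensors:

import torch as t

a = t.arange(6).view(2, 3)
b = t.arange(6, 12).view(2, 3)
print(t.cat([a, b], dim=0))   # shape (4, 3): stacked as extra rows
print(t.cat([a, b], dim=1))   # shape (2, 6): stacked as extra columns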

dim=0 means the number of rows increases (concatenate as rows), dim=1 means the number of columns increases (concatenate as columns).

CUDA Tensors

So far, all the operations we have done run on the CPU. For more sophisticated tasks, where we use concepts like LSTMs, GRUs etc., we might need the help of a GPU to make the computation run faster. PyTorch provides CUDA tensor objects which give the code access to the GPU.

We can test GPU availability with torch.cuda.is_available(), and if a GPU is available we can create a CUDA device using torch.device() and run our code on that device.

Let's try a simple tensor creation on the GPU and on the CPU and see what result it shows.
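A minimal sketch, which falls back to the CPU when no GPU is available:

import torch as t

device = t.device("cuda" if t.cuda.is_available() else "cpu")
x = t.rand(3, 3, device=device)
print(x)          # on a GPU machine the printout includes device='cuda:0'
print(x.device)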

In GPU mode it shows the device as cuda, specifying that it ran on the GPU. In this example we don't see much difference, but when we reach the topic of using neural nets in PyTorch we will see the difference in run time when using a GPU.

The next story is on miscellaneous methods in PyTorch. Miscellaneous, but useful.

Thanks for reading till the end.

