

[3.2.6] Dive into Deep Learning : exercise answers


3.2. Object-Oriented Design for Implementation — Dive into Deep Learning 1.0.3 documentation (d2l.ai)

[1]

There are a bunch of classes and methods in the torch.py file, so I will go through only the classes that are shown in 3.2.

 

Before we get into the library, I found something interesting: the way d2l lets users plug in their own implementations of methods for additional purposes. For example, we will often see code like the following:

from d2l import torch as d2l

class A:
    def func(self, var1, var2):
        # Default implementation defined inside the class.
        # In d2l itself such methods are often just `raise NotImplementedError`
        # placeholders that the user is expected to replace.
        print(var1 + var2)

a = A()
a.func(2, 3) # result : 5

# Register a replacement method on A from outside the class definition
@d2l.add_to_class(A)
def func(self, var1, var2):
    print(var1 * var2)

a.func(2, 3) # result : 6

 

So a class can ship with a default implementation (or just a raise NotImplementedError placeholder), and as long as we leave it alone, that original definition is what gets called. Once we register our own version with add_to_class, the newly written method is the one that gets called instead.

 

add_to_class(Class)

This is used to add methods to the given Class after it has been defined, without editing the class body directly.

class HyperParameters

save_hyperparameters(self, ignore=[])

Saves the calling function's arguments as class attributes, except those listed in ignore. Thus there is no need to write self.arg = arg repeatedly.
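As a quick illustration, something like this should work (the class name Example and its parameters are just made up for the demo):

from d2l import torch as d2l

class Example(d2l.HyperParameters):  # hypothetical class name
    def __init__(self, lr, batch_size, verbose=True):
        # Saves lr and batch_size as self.lr and self.batch_size; 'verbose' is skipped
        self.save_hyperparameters(ignore=['verbose'])

e = Example(lr=0.1, batch_size=32)
print(e.lr, e.batch_size)      # 0.1 32
print(hasattr(e, 'verbose'))   # False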

 

 

class ProgressBoard(d2l.HyperParameters)

draw(self, x, y, label, every_n=1)

Plots y against x. label distinguishes the different curves being drawn, and each plotted point is the mean of the last every_n values. Thus the bigger every_n is, the coarser the graph will be.
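The book itself demonstrates this with sine and cosine curves; a minimal version of that usage looks roughly like this:

import numpy as np
from d2l import torch as d2l

board = d2l.ProgressBoard('x')               # 'x' is the x-axis label
for x in np.arange(0, 10, 0.1):
    # Each curve is identified by its label; points are averaged over every_n values
    board.draw(x, np.sin(x), 'sin', every_n=2)
    board.draw(x, np.cos(x), 'cos', every_n=10)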

 

class Module(nn.Module, d2l.HyperParameters)

loss(self, y_hat, y)

Must be implemented by the user, since the base class version only contains raise NotImplementedError. y_hat is the predicted result and y is the true value given by the data. Since there are many possible loss functions, the user implements this method according to their needs.

 

forward(self, X)

First checks whether the neural network (self.net) is defined and, if so, returns the result of running X through it, i.e. the forward pass.
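To make loss and forward concrete, here is a minimal sketch of a subclass; the name MyRegressor, the nn.LazyLinear layer and the MSE loss are just my own choices for illustration:

import torch
from torch import nn
from d2l import torch as d2l

class MyRegressor(d2l.Module):        # hypothetical subclass
    def __init__(self, lr=0.01):
        super().__init__()
        self.save_hyperparameters()
        self.net = nn.LazyLinear(1)   # forward() will find and use self.net

    def loss(self, y_hat, y):
        # User-supplied loss; here, plain mean squared error
        return nn.functional.mse_loss(y_hat, y)

model = MyRegressor()
X = torch.randn(4, 2)
print(model(X).shape)                 # calls forward(X) -> self.net(X); torch.Size([4, 1])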

 

plot(self, key, value, train)

Plots the given value under the name key using the board's draw method; train indicates whether the point belongs to the training or the validation curve.

 

training_step(self, batch)

Calculates the loss for one batch, plots it, and returns it.

 

validation_step(self, batch)

Similar to training_step but doesn't return anything; it just plots the validation loss.

 

One thing to note is that the code contains self(*batch[:-1]), which may look odd at first. This is actually a call to the forward method: if we have an instance a, we can run the forward pass just by writing a(X), thanks to the built-in __call__ mechanism of nn.Module.
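For reference, the bodies of the two step methods are roughly the following (paraphrased from how I read torch.py):

def training_step(self, batch):
    # self(*batch[:-1]) runs the forward pass on the features; batch[-1] is the label
    l = self.loss(self(*batch[:-1]), batch[-1])
    self.plot('loss', l, train=True)
    return l

def validation_step(self, batch):
    l = self.loss(self(*batch[:-1]), batch[-1])
    self.plot('loss', l, train=False)   # nothing is returned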

 

configure_optimizers(self)

Returns the optimization method. The default is torch.optim.SGD, so if the user doesn't override it, the optimizer will be SGD (minibatch stochastic gradient descent).
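If you want something other than SGD, you can override it with add_to_class; Adam here is just an example choice, and it assumes the model defines self.lr:

import torch
from d2l import torch as d2l

@d2l.add_to_class(d2l.Module)
def configure_optimizers(self):
    # Replace the SGD default with Adam
    return torch.optim.Adam(self.parameters(), lr=self.lr)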

 

class DataModule(d2l.HyperParameters)

get_dataloader(self, train)

Must be implemented by the user. It usually returns a torch.utils.data.DataLoader. The train argument determines whether the returned loader serves training data (train=True) or validation data (train=False).
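A small, self-contained sketch of such a subclass (the name ToyData, the synthetic data and the 80/20 split are my own choices) could look like this:

import torch
from torch.utils.data import TensorDataset, DataLoader
from d2l import torch as d2l

class ToyData(d2l.DataModule):                      # hypothetical subclass
    def __init__(self, n=1000, batch_size=32):
        super().__init__()
        self.save_hyperparameters()
        self.X = torch.randn(n, 2)
        self.y = self.X.sum(dim=1, keepdim=True)

    def get_dataloader(self, train):
        # First 80% of the samples for training, the rest for validation
        cut = int(0.8 * self.n)
        idx = slice(0, cut) if train else slice(cut, None)
        dataset = TensorDataset(self.X[idx], self.y[idx])
        return DataLoader(dataset, self.batch_size, shuffle=train)

data = ToyData()
X, y = next(iter(data.train_dataloader()))
print(X.shape, y.shape)                             # torch.Size([32, 2]) torch.Size([32, 1])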

 

train_dataloader(self), val_dataloader(self)

Described above.
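For completeness, they are just thin wrappers, roughly:

def train_dataloader(self):
    return self.get_dataloader(train=True)

def val_dataloader(self):
    return self.get_dataloader(train=False)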

 

class Trainer(d2l.HyperParameters)

 

prepare_data(self, data)

Fetches the training and validation dataloaders from the data object and records how many batches each of them contains.

 

prepare_model(self, model)

Assigns the model's trainer attribute to the current Trainer instance, and also stores the model on the trainer (self.model = model).
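Both helpers are short; as I read them in torch.py, they do roughly this:

def prepare_data(self, data):
    self.train_dataloader = data.train_dataloader()
    self.val_dataloader = data.val_dataloader()
    self.num_train_batches = len(self.train_dataloader)
    self.num_val_batches = (len(self.val_dataloader)
                            if self.val_dataloader is not None else 0)

def prepare_model(self, model):
    model.trainer = self                     # give the model a handle back to the trainer
    model.board.xlim = [0, self.max_epochs]  # so the plots share the same x-range
    self.model = model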

 

fit(self, model, data)

Prepares the data and model to be used, sets up the optimizer returned by the model's configure_optimizers, and then loops over the epochs.
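Putting the pieces together, fit itself is roughly the following (note that fit_epoch is still a stub at this point in the book and only gets a real body in later sections):

def fit(self, model, data):
    self.prepare_data(data)
    self.prepare_model(model)
    self.optim = model.configure_optimizers()
    self.epoch = 0
    self.train_batch_idx = 0
    self.val_batch_idx = 0
    for self.epoch in range(self.max_epochs):
        self.fit_epoch()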

 


[2]

save_hyperparameters automatically saves the parameters passed in. If we erase it, the parameters a, b, c won't be added as attributes of B.
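A small sketch shows the difference (the class B below is my reconstruction of the book's example, without the ignore argument):

from d2l import torch as d2l

class B(d2l.HyperParameters):
    def __init__(self, a, b, c):
        self.save_hyperparameters()   # comment this line out and the attributes disappear
        print('self.a =', self.a, 'self.b =', self.b, 'self.c =', self.c)

b = B(a=1, b=2, c=3)                  # self.a = 1 self.b = 2 self.c = 3
# Without save_hyperparameters(), accessing self.a would raise AttributeError.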
