Deep Neural Network & Hyperparameter Optimization

Keywords: deep learning, hyperparameter optimization, stock market prediction.

Many artificial intelligence techniques have been employed to forecast stock market prices. According to the literature, the most commonly used is the Artificial Neural Network (ANN). For example, Guresen et al. [1] used a Dynamic Artificial Neural Network (DANN) and a Multi-Layer Perceptron (MLP) model to predict the NASDAQ Stock Index. Ha et al. [2] combined a reinforcement learning algorithm with a cointegration-based pairs trading strategy to address the portfolio selection problem. Vanstone et al. [3] designed an MLP-based trading system to detect trading signals for the Australian stock market.


Deep neural network

More recently, the deep neural network (DNN) has emerged as an improvement over conventional neural networks in a variety of applications such as speech recognition, computer vision, and natural language processing, and it has shown notable results. DNNs have also been used to predict stock prices from both textual news and numerical data, and they have become a promising method for modelling complex stock movements: they capture non-linear trends and reduce noise without assuming any predetermined underlying structure. At the same time, the user is left with the difficult task of selecting a number of hyperparameters, which have a direct impact on the performance of the resulting models.
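To make this concrete, here is a minimal sketch of a feed-forward DNN that maps a window of past daily returns to a next-day return prediction. It is not the architecture from any of the cited papers; the synthetic data, window length, layer sizes, and the use of Keras are all illustrative assumptions.

```python
# Minimal sketch: a feed-forward DNN regressor for next-day returns.
# The data below is random noise standing in for real market features.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))  # hypothetical inputs: 30 past daily returns
y = rng.normal(size=(1000, 1))   # hypothetical target: next-day return

model = keras.Sequential([
    keras.layers.Input(shape=(30,)),
    keras.layers.Dense(64, activation="relu"),  # non-linear hidden layers let the
    keras.layers.Dense(64, activation="relu"),  # network capture non-linear trends
    keras.layers.Dense(1),                      # linear output for regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```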


Hyperparameter optimization (HPO)

For deep learning models, hyperparameters are the parameters that control the training process and must therefore be defined before training begins. Previous studies have shown that a few of them have a larger impact on the results than the others, such as the following (the code sketch after this list shows where each one enters a typical model):

- Number of neurons (nodes) per hidden layer: deep learning models usually have multiple hidden layers (more than three), and each hidden layer has several nodes.

- Activation function of a hidden layer: this function runs on the nodes of the network and maps the inputs of each node to its corresponding output. Each node has an input, an output, a weight, and a processing unit. Tanh, ReLU, and LeakyReLU are the most commonly used activation functions.

- Dropout rate (a number between 0 and 1): the dropout technique randomly drops some nodes during training, so that a different network architecture is effectively trained on each cycle. It is also known as a regularisation method, as it prevents the model from overfitting.

- Optimizers: used to adjust the parameters of the model (the weights, for instance) in order to minimize the loss function, which measures the model's error.

- Batch size: the number of training examples processed in one iteration, i.e. before each update of the model's weights (a full pass over the training set is an epoch).

- Learning rate: scales the size of the updates applied to the parameters of the network at each optimization step while moving toward a minimum of the loss function.
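The sketch below illustrates where each of these hyperparameters enters a training pipeline: a Keras model builder parameterised by the number of layers, units per layer, activation, dropout rate, and learning rate, followed by a naive random search that also samples the batch size. The search ranges, synthetic data, and the choice of random search are assumptions for illustration, not the procedure used in the referenced paper.

```python
# Sketch: a model builder exposing the hyperparameters listed above,
# plus a naive random search over illustrative ranges.
import random
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))  # synthetic stand-in features
y = rng.normal(size=(1000, 1))   # synthetic stand-in targets

def build_model(n_layers, n_units, activation, dropout_rate, learning_rate):
    layers = [keras.layers.Input(shape=(30,))]
    for _ in range(n_layers):
        layers.append(keras.layers.Dense(n_units, activation=activation))
        layers.append(keras.layers.Dropout(dropout_rate))  # regularisation
    layers.append(keras.layers.Dense(1))
    model = keras.Sequential(layers)
    # The optimizer minimises the loss; the learning rate scales each update.
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="mse")
    return model

search_space = {                     # illustrative ranges only
    "n_layers": [3, 4, 5],
    "n_units": [32, 64, 128],
    "activation": ["tanh", "relu"],  # LeakyReLU is usually added as its own layer
    "dropout_rate": [0.0, 0.2, 0.5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32, 64],
}

best_loss, best_cfg = float("inf"), None
for _ in range(10):                  # try 10 random configurations
    cfg = {k: random.choice(v) for k, v in search_space.items()}
    model = build_model(cfg["n_layers"], cfg["n_units"], cfg["activation"],
                        cfg["dropout_rate"], cfg["learning_rate"])
    history = model.fit(X, y, validation_split=0.2, epochs=5,
                        batch_size=cfg["batch_size"], verbose=0)
    val_loss = history.history["val_loss"][-1]
    if val_loss < best_loss:
        best_loss, best_cfg = val_loss, cfg
print(best_loss, best_cfg)
```

In practice, when each training run is expensive, more sample-efficient strategies such as Bayesian optimization or Hyperband are usually preferred over pure random search.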


Conclusion

In recent years, hyperparameter optimization has become an increasingly important issue in the field of machine learning for the development of more accurate forecasting models. As the number of hyperparameters grows, selecting their values becomes increasingly important and complex. Sang Lee (see Related work below) explored the potential of HPO for modelling stock returns with a DNN and found that the model using technical indicators and dropout regularisation significantly outperformed three other models.


Related work

Hyperparameter Optimization for Forecasting Stock Returns, Sang Lee - link

