DOI 10.14457/TU.the.2020.80
Title Predictive machine learning models for packing media preparation in canned pineapple production process
Creator Kwanluck Thiwa-anont
Contributor Jirachai Buddhakulsomsiri, Advisor
Publisher Thammasat University
Publication Year 2020
Keyword Artificial neural network, Support vector machine, Deep belief network, Machine learning, Response surface methodology, Grid search, Hyperparameter fine-tuning
Abstract This study involves implementing machine learning (ML) algorithms to predict a quality characteristic of a food product, canned pineapple. In this production process, packing medium (PM) preparation is an important step. To reduce production time and cost, the PM needs to be prepared in advance. Currently, laboratory technicians must measure the sugar content of the raw material (pineapple). The total soluble solids (TSS) of the raw material (RM), i.e., the ratio of sugar content, is measured in degrees Brix. To improve the PM preparation process, this study develops models that predict the degree of Brix for an industrial user, the largest producer and distributor in the canned fruit industry in Thailand. With accurate predictions, the time and cost of PM preparation can be significantly reduced. For model development, the industrial user has collected data about the sources of incoming RMs so that predictions can be made before the RMs arrive.

Three prediction models are constructed using ML algorithms: artificial neural network (ANN), support vector machine (SVM), and deep belief network (DBN). To use these algorithms effectively, one important task is to fine-tune their hyperparameters; this fine-tuning process is an open and challenging research problem. One of the most widely used fine-tuning methods is grid search (GS) because of its simplicity. With GS, the range (i.e., lower and upper bounds) and step size of each hyperparameter are defined; all possible combinations of hyperparameter values are then generated and tested to determine the best setting. The number of experimental runs is the product of the numbers of levels of all hyperparameters, so GS can be highly inefficient in terms of computational resources and time.
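For concreteness, the following is a minimal sketch of how GS enumerates a hyperparameter grid; the grid values, the stand-in objective, and all names are illustrative assumptions, not the thesis's actual settings or code.

    import itertools

    # Hypothetical hyperparameter grid for an ANN (illustrative only).
    grid = {
        "hidden_nodes": list(range(5, 55, 5)),             # 10 levels
        "learning_rate": [0.001, 0.005, 0.01, 0.05, 0.1],  # 5 levels
        "epochs": [100, 200, 300],                         # 3 levels
    }

    def validation_mae(params):
        # Synthetic stand-in for training the model and scoring MAE on
        # the validation fold, so the example runs end to end.
        return (abs(params["hidden_nodes"] - 30) / 100
                + abs(params["learning_rate"] - 0.01) * 10
                + abs(params["epochs"] - 200) / 1000)

    # GS enumerates every combination, so the number of runs is the
    # product of the level counts: 10 * 5 * 3 = 150 in this toy grid.
    best_params, best_mae = None, float("inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        mae = validation_mae(params)
        if mae < best_mae:
            best_params, best_mae = params, mae

    print(best_params, round(best_mae, 4))

Because the run count multiplies across hyperparameters, adding one more hyperparameter or refining a step size inflates the grid multiplicatively, which is the inefficiency the study targets.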
The objective of this study is therefore to propose an alternative method based on the design of experiments (DOE) principle, called response surface methodology (RSM), for hyperparameter fine-tuning (a minimal illustration of the idea appears after the findings below). The goal is to reduce the number of experimental runs, and therefore the computational time, needed to find suitable hyperparameter settings for each ML algorithm, while maintaining the algorithms' performance.

A computational study is performed to compare GS and RSM for hyperparameter tuning of the three ML algorithms. A 10-fold cross-validation is performed to ensure that the results are robust to the randomness of partitioning the data into training, validation, and test sets. The key performance measure is the mean absolute error (MAE) on the validation dataset. With GS, the hyperparameter setting with the minimum validation MAE is selected, whereas with RSM the settings are obtained by analyzing the response surface model of the validation MAE. Confirmation runs are then performed to verify each ML algorithm's performance at the selected settings. In addition, the reliability of the settings obtained from GS and RSM is measured as the proportion of cases in which the 95% prediction interval of the validation MAE from the confirmation runs contains the original validation MAE.

Comparison results from statistical analysis indicate the following findings.
(1) For ANN: the hyperparameters from GS and RSM give statistically the same prediction performance in nine out of 10 data folds; in the remaining fold, GS performs statistically better than RSM by 0.17 MAE (over a response range of 7 to 18 degrees Brix). However, RSM requires far fewer experimental runs: 976 versus 44,100 for GS, a 97.79% savings.
(2) For DBN: GS gives hyperparameter settings that are statistically better than RSM in three out of 10 data folds, performing better by 0.078 MAE on average in those folds. In contrast, RSM needs 1,408 experimental runs versus 7,290 for GS, a savings of 80.69%.
(3) For SVM: the average validation MAE from GS is 0.040 degrees Brix lower than that from RSM. GS and RSM require 44,880 and 984 runs, respectively, a savings of 97.81%.
(4) GS gives hyperparameter settings that are 80% reliable for both ANN and DBN, whereas the settings from RSM are 90% and 100% reliable for ANN and DBN, respectively.
In conclusion, GS and RSM give hyperparameter settings with essentially the same prediction performance, while RSM offers large savings in the number of experimental runs and, consequently, in execution time.
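The following sketch illustrates the RSM mechanism referenced above: run a small designed experiment, fit a second-order model to the observed validation MAE, and solve for the stationary point of the fitted surface. The design size, the synthetic response, and the coded units are illustrative assumptions, not the thesis's actual experiments.

    import itertools
    import numpy as np

    # For two factors, a face-centered central composite design plus a
    # center run coincides with this 3x3 grid of coded levels.
    design = np.array(list(itertools.product([-1.0, 0.0, 1.0], repeat=2)))

    def validation_mae(x1, x2):
        # Synthetic stand-in for the validation MAE observed at a design point.
        return 1.5 + 0.3 * (x1 - 0.4) ** 2 + 0.2 * (x2 + 0.3) ** 2

    y = np.array([validation_mae(x1, x2) for x1, x2 in design])

    # Fit a second-order response surface:
    # y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    X = np.column_stack([
        np.ones(len(design)),
        design[:, 0], design[:, 1],
        design[:, 0] ** 2, design[:, 1] ** 2,
        design[:, 0] * design[:, 1],
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Stationary point of the fitted surface solves grad(y) = 0:
    # [[2*b11, b12], [b12, 2*b22]] @ x = -[b1, b2]
    B = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
    x_star = np.linalg.solve(B, -beta[1:3])
    print("fitted optimum (coded units):", x_star)  # ~ [0.4, -0.3]

Here nine design runs suffice to locate the optimum of the fitted surface instead of enumerating a fine grid, which is the source of the run savings reported in the findings.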
Digital File Digital File #1
