Defect Identification Method for Poplar Veneer Based on Progressive Growing Generative Adversarial Network and MASK R-CNN Model
Kai Hu,a Baojin Wang,a,* Yi Shen,b Jieru Guan,a and Yi Cai a
As veneer is the main production unit of plywood, its surface defects seriously affect the quality and grade of plywood. Therefore, a new method for identifying wood defects based on the progressive growing generative adversarial network (PGGAN) and the MASK R-CNN model is presented. Poplar veneer was mainly studied in this paper, and its dead knots, live knots, and insect holes were identified and classified. The PGGAN model was used to expand the dataset of wood defect images. A key idea was the use of transfer learning on the basis of MASK R-CNN with a classifier layer. Lastly, the trained model was used to identify and classify the veneer defects, and it was compared with the back-propagation (BP) neural network, self-organizing map (SOM) neural network, and convolutional neural network (CNN). Experimental results showed that, under the same conditions, the algorithm proposed in this paper based on PGGAN and MASK R-CNN, with the model obtained through the transfer learning strategy, accurately identified the defects of live knots, dead knots, and insect holes. The accuracy of identification was 99.05%, 97.05%, and 99.10%, respectively.
Keywords: Veneer defects; PGGAN; MASK R-CNN; Identification; Transfer learning
Contact information: a: Faculty of Material Science and Engineering, Nanjing Forestry University, Nanjing 210037, China; b: Zhenjiang Zhongfuma Machinery Co., Ltd., Zhenjiang 212127, China;
* Corresponding author: wangbaojincn@139.com.cn
INTRODUCTION
Veneer defect detection and identification plays an important role in the production process of plywood. The traditional method of detecting defects on the veneer surface is manual inspection, which has high production cost and low efficiency; the demand for automated production is increasingly urgent (Yang et al. 2006). With the development of artificial intelligence technology, deep learning has achieved positive results in image-based classification and target recognition tasks in recent years (An et al. 2017). However, deep neural networks require large amounts of data, and the cost of collecting wood images by machine is high (Viguier et al. 2017). Therefore, the dataset available to a classification system is usually too small to train a deep network. In addition, a lot of manual work is required to label all the collected images, so deep learning is rarely used in the wood industry (Chang et al. 2018).

For the detection and classification of wood defects, scholars have put forward a variety of methods. Gu et al. (2009) proposed a tree-structured support vector machine (SVM) to classify four types of wood defects using board images. First, the knot image was divided into three different regions, and then the average pseudo-color feature of each region was obtained by applying ordered statistical filtering. An SVM classifier trained with 800 wood knot images achieved good classification results; the performance evaluation showed that the average classification rate over 400 sub-images was 96.5%, with an error frequency of 2.25% (Gu et al. 2009). Mahram et al. (2012) combined the gray-level co-occurrence matrix, local binary patterns, and statistical moments to extract defect features, used principal component analysis (PCA) and linear discriminant analysis (LDA) to reduce the dimensionality of the feature vectors, and then applied SVM and k-nearest neighbor (KNN) classifiers, achieving satisfactory results in wood defect classification. Niskanen and Silven (2003) used a self-organizing map (SOM) neural network to cluster the defects of sawn wood. For the feature vectors, lumber color histograms were supplemented with local binary pattern (LBP) features, whose rotation invariance and gray-scale invariance make local texture feature extraction more robust (Niskanen and Silven 2003). Castellani and Rowlands (2009) proposed a classification method combining a genetic algorithm and a neural network for wood veneer classification; the method is effective in identifying a single defect on a veneer surface, but it has difficulty identifying two or more kinds of defects. He et al. (2020) proposed a wood defect identification method based on an improved deep convolutional neural network (DCNN) and tested it on red pine and camphor wood; the overall accuracy reached 99.13%. Urbonas et al. (2019) used Faster R-CNN to identify poplar veneer defects, mainly with the ResNet152 network model, and obtained a best average accuracy of 80.6%. He et al. (2019) proposed a hybrid fully convolutional neural network to identify wood defect types and locations, achieving an overall classification accuracy of 99.14% and a pixel accuracy of 91.3%.
Previous studies focused on image processing, in which the accuracy of defect identification was not high and the generalization ability was poor. In this paper, deep learning was applied to identify wood defects. First, the wood defects and the wood defect dataset used in the experiment are introduced; then the defect images were generated by a progressive growing generative adversarial network (PGGAN) to expand the dataset. The 'Methods' section explains the deep learning principle of MASK R-CNN based on transfer learning. Experimental results and the performance evaluation of the veneer defect recognition experiment based on PGGAN and MASK R-CNN are presented in the discussion section. Finally, the paper summarizes the research and outlines future work.
EXPERIMENTAL
Materials
Dataset preparation
Defect images were collected using industrial cameras in a Hongrui plywood factory (Xuzhou, China); the collected images were manually annotated for the experiment (Cetiner et al. 2014). However, due to the particularity of the wood industry, the collection and labeling of defects is a heavy burden on financial and human resources, and it results in an uneven distribution of defect samples and poor diversity, which affect the identification accuracy of subsequent neural network models (Samiappan et al. 2011). Therefore, to improve the diversity of defect images and balance the sample distribution, it is necessary to expand the defect sample database (Yang et al. 2016). Traditional sample expansion methods include rotation, mirroring, translation, random cropping, and affine transformation (Zhang et al. 2015). However, these methods cannot expand the defect details. In this paper, the progressive growing generative adversarial network was adopted to expand the defect details. The underlying generative adversarial network (GAN) is a generative model proposed by Goodfellow et al. (2014). The GAN is structurally inspired by the two-player zero-sum game in game theory (that is, the sum of the interests of the two players is zero, and the gain of one player is the loss of the other), and the system consists of a generator and a discriminator (Collier et al. 2018). The generator captures the potential distribution of real data samples and generates new data samples. The optimization process of the GAN is a maximum-minimum game problem. The optimization objective is to achieve a Nash equilibrium, enabling the generator to estimate the distribution of the data samples and making the discriminator unable to distinguish the real images from the generated images. The goal of the whole network is to make it impossible for the discriminator to judge: for both real and fake samples, the probability output is 0.5. Another purpose is to generate expanded images with features different from the real samples. The optimization objective function can be expressed as:
$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim P_z(z)}[\log(1 - D(G(z)))]$   (1)
As shown in Eq. 1, x is the real sample set, Pdata(x) is the distribution of the real sample set, z is the noise input into the generator G, and Pz(z) is the probability distribution of the noise z. The function consists of two parts: in the first part, real data are input into discriminator D, which tries to drive its output toward 1; in the second part, the noise data produce fake images through generator G. In other words, the discriminator D tries to maximize V(D, G), while the generator G tries to minimize it.
The fake image is fed into discriminator D, which tries to drive its output toward 0, while the generator tries to reduce the difference between the fake image and the real image. In other words, the discriminator distinguishes the real picture x from the fake picture generated by generator G(z), and the generator G(z) produces fake pictures to fool the discriminator D(x) until realistic pictures are obtained (Harer et al. 2018). However, images generated by the traditional GAN cannot achieve high resolution, and the obtained dataset is rather blurry.
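To make the adversarial objective in Eq. 1 concrete, the following minimal PyTorch sketch shows one training step of a generic GAN, assuming toy fully connected networks and dimensions; the PGGAN used in this work has a progressive convolutional architecture, so all names and sizes here are illustrative only.

```python
# Minimal sketch of one GAN training step (Eq. 1); networks and sizes are toy assumptions.
import torch
import torch.nn as nn

latent_dim, img_dim = 128, 64 * 64  # assumed toy dimensions, not the paper's 512 x 512 images
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """real_batch: tensor of shape (N, img_dim) with real samples."""
    batch = real_batch.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator update: push D(x) toward 1 and D(G(z)) toward 0
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool D (non-saturating form commonly used in practice)
    z = torch.randn(batch, latent_dim)
    loss_g = bce(D(G(z)), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```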
Fig. 1. The details of the progressive growing generative adversarial network; the images on the far right are the generated sample images (fake sample)
The dataset in this paper required 512 × 512 high-resolution images. Generative adversarial networks such as DCGAN and WGAN have been unable to meet this requirement. To solve this problem, this paper adopted the PGGAN for image synthesis. The core idea of the algorithm is still to generate images through the confrontation between generator G and discriminator D, but the idea of gradual training from low resolution to high resolution is introduced. In this paper, the training process started from low-resolution (4 × 4) images. Next, layers were gradually added to the network to increase the resolution until it reached 512 × 512, at which point the training results were obtained and the training process exited. The training structure of the neural network is shown in Fig. 1.
As the number of layers increases, the system learns the texture details of real samples when training for high resolution. In the process of resolution conversion, the transition is completed by adding smooth layers to reduce the impact of sudden resolution conversion (Togo et al. 2019).
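As a hedged illustration of the smooth resolution transition described above, the sketch below blends the upsampled output of the previous stage with the output of the newly added higher-resolution block; the blending weight alpha and the tensors are illustrative assumptions rather than values from the paper.

```python
# Sketch of progressive resolution growth with a smooth fade-in between stages.
import torch
import torch.nn.functional as F

resolutions = [4, 8, 16, 32, 64, 128, 256, 512]  # growth schedule described in this work

def fade_in(low_res_img, high_res_img, alpha):
    """Blend the upsampled output of the previous (lower-resolution) stage with the
    output of the newly added block; alpha ramps from 0 to 1 during the transition."""
    upsampled = F.interpolate(low_res_img, scale_factor=2, mode="nearest")
    return alpha * high_res_img + (1.0 - alpha) * upsampled

# Example: blending a 256 x 256 stage output into the new 512 x 512 stage
low = torch.randn(1, 3, 256, 256)
high = torch.randn(1, 3, 512, 512)
blended = fade_in(low, high, alpha=0.3)  # early in the transition, mostly the upsampled image
print(blended.shape)                      # torch.Size([1, 3, 512, 512])
```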
A total of 350 live knot images, 350 dead knot images, and 350 insect hole images were randomly selected from the original data to construct the training data for image generation. In other words, 1050 defect images were allocated as training data. The training step was then set to 5000. After the PGGAN expanded the samples, 100 live knot samples, 100 dead knot samples, and 100 insect hole samples were obtained; examples of the prepared wood defect samples are shown in Fig. 2.
Fig. 2. Original samples and generated samples
Each defect is shown with seven images, of which the three on the left are examples of images captured by the camera, and the four on the right are examples of defects generated by PGGAN. It can be seen that each image generated by PGGAN inherits the features of the real images, so the generated images can be used in the dataset.
Methods
To reduce the effort of making dataset labels and to improve the accuracy of image recognition and classification, this paper adopted the MASK R-CNN algorithm based on transfer learning to identify and classify veneer defects (Yang et al. 2019). MASK R-CNN is an object detection algorithm developed from Faster R-CNN. The purpose of object detection and segmentation is to distinguish different objects in the images and draw a bounding box around each specific object. MASK R-CNN not only can draw a bounding box for the target object, but it can also further mark and classify whether the pixels in the bounding box belong to the object, which can be used to identify the object, mark its boundary, and detect key points (Nguyen et al. 2018). MASK R-CNN is based on Faster R-CNN, with its application extended to the field of image segmentation. The process of MASK R-CNN is similar to that of Faster R-CNN, which uses a Region Proposal Network (RPN) to extract features, classify proposals, and refine bounding boxes (Li et al. 2017). Faster R-CNN adopts RoIPool as the feature extraction method, quantizes each RoI region, and handles RoI features of different scales by means of max pooling (Behr et al. 2019). However, this process leads to the loss of spatial information, which misaligns the RoIs with the features extracted from the original image (He et al. 2019). MASK R-CNN replaces the RoIPool of Faster R-CNN with RoI alignment (RoIAlign), and the mask branch uses the RoIAlign output to mark the object regions in the results (Qin et al. 2017).
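The following short sketch contrasts RoIAlign with RoIPool using torchvision's built-in operators; the feature map shape, box coordinates, and spatial scale are assumed values chosen only for illustration.

```python
# Minimal illustration of RoIAlign, the operator that replaces RoIPool in MASK R-CNN.
import torch
from torchvision.ops import roi_align, roi_pool

feature_map = torch.randn(1, 256, 32, 32)  # assumed features of a 512 x 512 image at stride 16
boxes = torch.tensor([[0.0, 100.0, 120.0, 180.0, 210.0]])  # [batch_idx, x1, y1, x2, y2] in image coords

# RoIAlign samples with bilinear interpolation instead of quantizing box coordinates,
# so the pooled 7 x 7 features stay registered with the original image pixels.
aligned = roi_align(feature_map, boxes, output_size=(7, 7),
                    spatial_scale=1.0 / 16, sampling_ratio=2, aligned=True)
pooled = roi_pool(feature_map, boxes, output_size=(7, 7), spatial_scale=1.0 / 16)
print(aligned.shape, pooled.shape)  # both torch.Size([1, 256, 7, 7])
```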
Because there were not many wood defect images, 80% of the wood defect images were taken as the training set and 20% as the validation set. In MASK R-CNN, the most appropriate model was obtained by minimizing the value of the loss function $L$; the trained model was then applied for predictive analysis on new data. The loss function of MASK R-CNN was defined as follows:

$L = L_{cls} + L_{box} + L_{mask}$   (2)

The definition of $L_{cls}$ (classification loss) is the same as in Faster R-CNN; the bounding-box regression loss $L_{box}$ is defined as:

$L_{box} = \frac{1}{N_{reg}} \sum_i p_i^* \, \mathrm{smooth}_{L_1}(t_i - t_i^*)$   (3)

$\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}$   (4)

where $t_i$ and $t_i^*$ are the predicted and ground-truth bounding-box offsets of anchor $i$, $p_i^*$ equals 1 when anchor $i$ is positive and 0 otherwise, and $N_{reg}$ is the number of anchor locations. The mask loss $L_{mask}$ is the average binary cross-entropy loss:

$L_{mask} = -\frac{1}{m^2} \sum_{1 \le i,j \le m} \left[ y_{ij} \log \hat{y}_{ij}^{k} + (1 - y_{ij}) \log(1 - \hat{y}_{ij}^{k}) \right]$   (5)

where $m$ is the resolution of the output mask, $y_{ij}$ is the label of cell $(i, j)$ in the ground-truth mask, and $\hat{y}_{ij}^{k}$ is the predicted value of the same cell in the mask learned for the ground-truth class $k$.
For Mask R-CNN to complete defect identification and classification, a large amount of image data is needed for feature learning. However, collecting wood images in the wood industry and manually labeling them costs a great deal of manpower and material resources. Therefore, an effective way to adopt Mask R-CNN for the current task was the strategy of transfer learning.
Considering that modern image classification models have millions of parameters, training from scratch requires extensive parameter tuning, as well as a large amount of labeled training data and high computational bandwidth. Transfer learning mitigates these requirements by reusing a network that has already been trained on a related task. In this paper, the ResNet50 architecture was chosen over the AlexNet and VGG architectures because ResNet50 is more compact than AlexNet, which reduces the possibility of overfitting, while requiring less computer processing power than VGG (Krizhevsky et al. 2017).
The MASK R-CNN model based on the ResNet50 network structure was established. Through experiments, it was found that migrating a model pre-trained on the common objects in context (COCO) dataset to the wood defect dataset for further training achieved good accuracy. Only the final fully connected layer of the model needed to be modified so that the classifier outputs three classes, namely live knot, dead knot, and insect hole. A total of 1600 defect images were collected and generated by PGGAN. Among them, 1280 images were used as the training set and 320 images were used as the validation set. Classification accuracy and the confusion matrix were set as the outputs, and the experimental results of MASK R-CNN detection and classification were analyzed. In this paper, PyCharm (JetBrains, version 2017.1 Community Edition, Prague, Czech Republic) was used to compile, train, and test on a computer (Lenovo, Beijing, China) with 16 GB of memory and an Intel Core i7 processor with a Titan XP graphics card.
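A minimal sketch of this transfer-learning setup, assuming a torchvision implementation of MASK R-CNN (the paper does not state which framework was used): the COCO pre-trained ResNet50 model is loaded and only the final prediction heads are replaced so that the classifier outputs the three defect classes plus background.

```python
# Sketch: load a COCO pre-trained Mask R-CNN (ResNet50 backbone) and swap the heads.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 4  # background + live knot, dead knot, insect hole

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

# Replace the box classification/regression head with a 4-class predictor
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head for the same classes
in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)
```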
Fig. 3. Workflow for identifying wood defects
RESULTS AND DISCUSSION
To test the wood defect detection algorithm based on PGGAN and MASK R-CNN, many experiments were carried out in this work. The wood defect images were 512 × 512 pixels. PGGAN was used to expand the wood defect sample library. The expanded images were used as the training set, the parameters of MASK R-CNN pre-trained on the COCO dataset were transferred into the model, and training was then continued on the defect dataset. The parameter settings used for model training are listed in Table 1.
Table 1. Setting of Model Parameters
To avoid exhausting memory, batch training was adopted. The batch size was 16, that is, 16 pictures were drawn from the training set each time for training. The learning rate was set to 0.001, a total of 30 epochs were trained, and each epoch consisted of 100 training steps. In other words, the entire model was trained for 3000 steps.
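A hedged sketch of this training schedule, reusing the model from the previous sketch; the optimizer choice (SGD with momentum) and the data pipeline are assumptions, since the paper does not specify them, and `train_dataset` is a hypothetical dataset yielding (image, target) pairs in torchvision's detection format.

```python
# Sketch of the training loop: batch size 16, lr 0.001, 30 epochs x 100 steps.
import torch
from torch.utils.data import DataLoader

loader = DataLoader(train_dataset, batch_size=16, shuffle=True,
                    collate_fn=lambda batch: tuple(zip(*batch)))
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)  # optimizer assumed

model.train()
for epoch in range(30):                                        # 30 epochs
    for step, (images, targets) in zip(range(100), loader):    # at most 100 steps per epoch
        loss_dict = model(list(images), list(targets))         # per-task losses (cls, box, mask, RPN)
        loss = sum(loss_dict.values())                         # total multi-task loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```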
Accuracy analysis
At the end of the training, a mathematical model was obtained to detect wood defects. The 320 validation set pictures were then tested and verified with this model. The performance of the model was evaluated by means of mAP (mean average precision) (Silven et al. 2003). The mAP of a dataset is the average value of the AP of each type, and the AP of each type is calculated from the area under the precision/recall curve. The specific calculation formula is as follows:

$AP = \frac{1}{N} \sum_{i=1}^{N} P(i) \cdot rel(i)$   (6)

In the formula, $N$ is the number of images in which defects were detected, $P(i)$ is the detection precision for image $i$, and $rel(i)$ indicates whether image $i$ was classified correctly (1 if correct, 0 otherwise). For a comparative experiment, traditional invariant moment features and geometric features were used as the inputs of the BP neural network and the SOM neural network. The CNN was also used to train on and identify the dataset. The different mathematical models were trained and tested on both the unexpanded dataset and the dataset expanded by PGGAN. Ten experiments were performed and the average accuracy was recorded. The results of the experiments are shown in Table 2.
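As a small numeric illustration of Eq. 6, the snippet below computes AP from made-up precision values and correctness flags; the numbers are toy values, not results from this study.

```python
# Toy illustration of the AP formula in Eq. 6.
import numpy as np

precision = np.array([0.99, 0.95, 0.97, 0.90])  # P(i): detection precision for image i (toy values)
correct = np.array([1, 1, 0, 1])                # rel(i): 1 if image i was classified correctly
N = len(precision)                               # number of images with detected defects

ap = (precision * correct).sum() / N
print(f"AP = {ap:.3f}")                          # 0.710 for these toy values
```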
Table 2. Comparison of the Accuracy of Different Methods
By training on 80% of the dataset and testing on the remaining 20%, the model trained from scratch on the unexpanded dataset reached an accuracy of only 92.6%, while the model trained on the dataset expanded by PGGAN reached 94.7%.
The application of transfer learning raised the model accuracy on the unexpanded dataset and the expanded dataset to 96.3% and 98.4%, respectively. Thus, using PGGAN for detailed dataset expansion together with transfer learning increased the accuracy of the model predictions. However, the experimental results of the traditional methods were not satisfactory. The model accuracy of the BP neural network on the unexpanded dataset and the expanded dataset was 90.2% and 93.7%, respectively. The model accuracy of the SOM network on the unexpanded dataset and expanded dataset was 85.3% and 86.1%, respectively. The convolutional neural network also adopted the ResNet50 architecture, and its accuracy on the unexpanded dataset and the expanded dataset was 88.3% and 94.5%, respectively.
Confusion matrix and train loss analysis
Meanwhile, the confusion matrix was used to analyze the experimental results. The abscissa of the confusion matrix is the predicted value of the model for defects, and the ordinate is the real situation of defects (Rojas-Espinoza and Ortiz-Iribarren 2010). Moreover, the accuracy of each kind of prediction can be analyzed according to the confusion matrix, which shows the imbalance of samples. The loss function of MASK R-CNN is composed of three parts. The change of the loss function and its composition in the training process of the model are shown (Fig. 4).
Fig. 4. (a) Confusion matrix of wood defect classification and (b) loss plot during training
It can be seen from the confusion matrix that the validation set contained 106 insect holes, 102 dead knots, and 112 live knots. According to the prediction results of the model, the prediction accuracy of insect hole, dead knot, and live knot was 99.05%, 97.05%, and 99.10%, respectively. In other words, the mAP was 98.4%. It can be seen that the expanded dataset with PGGAN had a great improvement in defect identification based on MASK R-CNN under the strategy of transfer learning compared with the traditional classification method.
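To show how the per-class accuracies follow from the confusion matrix, the sketch below reads them off the matrix diagonal; the matrix entries are illustrative values chosen to be consistent with the reported class counts, not the exact matrix from Fig. 4a.

```python
# Per-class accuracy from a confusion matrix (illustrative entries only).
import numpy as np

# rows = true class, cols = predicted class: [insect hole, dead knot, live knot]
cm = np.array([[105,  1,   0],
               [  2, 99,   1],
               [  0,  1, 111]])

per_class_acc = cm.diagonal() / cm.sum(axis=1)  # fraction of each true class predicted correctly
for name, acc in zip(["insect hole", "dead knot", "live knot"], per_class_acc):
    print(f"{name}: {acc:.2%}")
print(f"mean: {per_class_acc.mean():.2%}")      # close to the reported mAP of 98.4%
```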
As shown in Fig. 4b, after adopting the strategy of transfer learning, the change of the total loss value was divided into three stages: (1) the loss value of the first 500 steps declined rapidly; (2) the loss value from step 500 to step 1500 declined slowly; and (3) the loss value tended to be stable from step 1500 to step 3000. The total loss value of the model stabilized at 0.4003.
Mask generation
The pictures in the validation set were randomly selected and tested. To ensure the feasibility of the validation results, the validation set was guaranteed to contain three kinds of defects. According to the algorithm structure of MASK R-CNN, the categories of defects can be identified, and box selection and mask generation can be performed. The test results are shown in Fig. 5.
In addition, the error recognition examples of this detection method were also analyzed. As shown in Fig. 5, part d, there was an incomplete knot and a crack running through it. These factors affect the feature extraction results of the convolution layer and lead to unsatisfactory recognition results.
The experimental results show that unlike Faster R-CNN, which can only frame and select wood defects, MASK R-CNN provided an additional mask branch. Based on instance segmentation, the overall contour of a detected object can be obtained and labeled. It can be seen that the detection and identification accuracy of MASK R-CNN for wood defects was statistically improved.
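For reference, a hedged sketch of how such mask outputs can be obtained at inference time with the torchvision model from the earlier sketches; the image file name, confidence threshold, and label ordering are illustrative assumptions.

```python
# Sketch: run the trained model on one validation image and collect boxes, labels, scores, masks.
import torch
import torchvision.transforms.functional as TF
from PIL import Image

model.eval()
img = TF.to_tensor(Image.open("veneer_defect_example.png").convert("RGB"))  # hypothetical file

with torch.no_grad():
    pred = model([img])[0]

keep = pred["scores"] > 0.7            # confidence threshold (assumed)
boxes = pred["boxes"][keep]            # [x1, y1, x2, y2] per detected defect
labels = pred["labels"][keep]          # class indices; mapping to defect names is assumed
masks = pred["masks"][keep] > 0.5      # binarize soft masks for display
```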
Fig. 5. Some examples of mask pictures obtained by defect detection showed that an insect hole and a dead knot were detected in (a), with confidences of 97.1% and 99.7%, respectively. The confidence of the detected segment in (b) was 99.2%. The confidence of the detected dead knot in (c) was 99.7%. The confidence of the detected dead knot in (d) was 99.9%.
Limitations
While this research contributes to recognizing the dead knots, live knots, and insect holes in poplar veneer, it has several limitations that need to be acknowledged and addressed. Poplar veneer has many other defects, such as cracks and stains, which influence its appearance and practicality. In addition, the dataset used in this study is not very large, and overfitting may occur. Future research will focus on the following areas: (1) detecting and identifying other common defects of poplar veneer; and (2) continuously expanding the datasets required by the experiments.
CONCLUSIONS
- The MASK R-CNN algorithm model based on ResNet50 was created to identify the defects of poplar veneer. In this experiment, the classification accuracy on the expanded dataset, tested after pre-training, reached 98.4%. Compared with traditional defect detection methods, the classification accuracy of the model established by the MASK R-CNN algorithm combined with the transfer learning strategy was higher.
- The transfer learning strategy was used to improve the accuracy of the detection model while using a small dataset.
- Different from traditional image expansion, the progressive growing generative adversarial network (PGGAN) was adopted in this paper to expand the wood defect dataset. This method can expand the defect details, which improved the diversity of defect images and balanced the sample distribution.
- This technology is expected to be applied to wood processing equipment, especially wood classification equipment.
ACKNOWLEDGMENTS
This study was funded by the “12th Five-Year” National Science and Technology Support Program (2012BAD24B010202), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
REFERENCES CITED
An, N., Welch, S. M., Markelz, R. J. C., Baker, R. L., Palmer, C. M., Ta, J., Maloof, J. N., and Weinig, C. (2017). “Quantifying time-series of leaf morphology using 2D and 3D photogrammetry methods for high-throughput plant phenotyping,” Computers and Electronics in Agriculture 135, 222-232. DOI: 10.1016/j.compag.2017.02.001
Behr, M., McNabb, E., Noseworthy, M., Sidkar, S., and Kumbhare, D. (2019). “Automatic ROI placement in the upper trapezius muscle in B-mode ultrasound images,” Ultrasonic Imaging 41(4), 231-246. DOI: 10.1177/0161734619839980
Castellani, M., and Rowlands, H. (2009). “Evolutionary artificial neural network design and training for wood veneer classification,” Engineering Applications of Artificial Intelligence 22(4-5), 732-741. DOI: 10.1016/j.engappai.2009.01.013
Cetiner, I., Var, A. A., and Cetiner, H. (2014). “Wood surface analysis with image processing techniques,” in: 22nd Signal Processing and Communications Applications Conference (Siu), Trabzon, Turkey, pp. 393-396.
Chang, Z., Cao, J., and Zhang, Y. (2018). “A novel image segmentation approach for wood plate surface defect classification through convex optimization,” Journal of Forestry Research 29(6), 1789-1795. DOI: 10.1007/s11676-017-0572-7
Collier, E., Duffy, K., Ganguly, S., Madanguit, G., Kalia, S., Shreekant, G., Nemani, R., Michaelis, A., Li, S., Ganguly, A., et al. (2018). “Progressively growing generative adversarial networks for high resolution semantic segmentation of satellite images,” in: 18th IEEE International Conference on Data Mining Workshops (ICDMW), Singapore, pp. 763-769. DOI: 10.1109/Icdmw.2018.00115
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). “Generative adversarial nets,” in: Advances in Neural information Processing Systems (NIPS), pp. 2672-2680.
Gu, I. Y. H., Andersson, H., and Vicen, R. (2009). “Automatic classification of wood defects using support vector machines,” Computer Vision and Graphics 5337, 356-367. DOI: 10.1007/978-3-642-02345-3_35
Harer, J. A., Ozdemir, O., Lazovich, T., Reale, C. P., Russell, R. L., Kim, L. Y., and Chin, P. (2018). “Learning to repair software vulnerabilities with generative adversarial networks,” in: Advances in Neural Information Processing Systems (NIPS), Volume 31, Montreal, Canada.
He, T., Liu, Y., Xu, C. Y., Zhou, X. L., Hu, Z. K., and Fan, J. N. (2019). “A fully convolutional neural network for wood defect location and identification,” IEEE Access 7, 123453-123462. DOI: 10.1109/Access.2019.2937461
He, T., Liu, Y., Yu, Y., Zhao, Q., and Hu, Z. K. (2020). “Application of deep convolutional neural network on feature extraction and detection of wood defects,” Measurement 152, 107357. DOI: 10.1016/j.measurement.2019.107357
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2017). “ImageNet classification with deep convolutional neural networks,” Communications of the ACM 60(6), 84-90. DOI: 10.1145/3065386
Li, J., Zhao, J. X., Li, J., and Ma, Y. D. (2017). “Using channel feature with RPN and SVM for pedestrian detection,” in: International Conference on Computer Science and Application Engineering (CSAE), Volume 190, Shanghai, China, pp. 874-881.
Mahram, A., Shayesteh, M. G., and Jafarpour, S. (2012). “Classification of wood surface defects with hybrid usage of statistical and textural features,” in: 35th International Conference on Telecommunications and Signal Processing (TSP), Prague, Czech Republic, pp. 749-752.
Nguyen, D. H., Le, T. H., Tran, T. H., Vu, H., Le, T. L., and Doan, H. G. (2018). “Hand segmentation under different viewpoints by combination of Mask R-CNN with tracking,” in: Proceedings of the 2018 5th Asian Conference on Defense Technology (ACDT), Hanoi, Vietnam, pp. 14-20.
Niskanen, M., and Silven, O. (2003). “Comparison of dimensionality reduction methods for wood surface inspection,” in: Proceedings Volume 5132, Sixth International Conference on Quality Control by Artificial Vision, Gatlinburg, TN, USA, pp. 178-188. DOI: 10.1117/12.514959
Qin, X. R., Zhou, Y. F., He, Z. Q., Wang, Y. T., and Tang, Z. (2017). “A faster R-CNN based method for comic characters face detection,” in: 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), Volume 1, Kyoto, Japan, pp. 1074-1080. DOI: 10.1109/Icdar.2017.178
Rojas-Espinoza, G., and Ortiz-Iribarren, O. (2010). “Identification of knotty core in Pinus radiata logs from computed tomography images using artificial neural network,” Maderas. Ciencia y Tecnología 12(3), 229-239. DOI: 10.4067/S0718-221X2010000300007
Samiappan, S., Prasad, S., and Bruce, L. M. (2011). “Automated hyperspectral imagery analysis via support vector machines based multi-classifier system with non-uniform random feature selection,” IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Vancouver, BC, Canada, pp. 3915-3918. DOI: 10.1109/Igarss.2011.6050087
Silven, O., Niskanen, M., and Kauppinen, H. (2003). “Wood inspection with non-supervised clustering,” Machine Vision and Applications 13(5-6), 275-285. DOI: 10.1007/s00138-002-0084-z
Togo, R., Ogawa, T., and Haseyama, M. (2019). “Synthetic gastritis image generation via loss function-based conditional PGGAN,” IEEE Access 7, 87448-87457. DOI: 10.1109/access.2019.2925863
Urbonas, A., Raudonis, V., Maskeliūnas, R., and Damaševičius, R. (2019). “Automated identification of wood veneer surface defects using faster region-based convolutional neural network with data augmentation and transfer learning,” Applied Sciences 9(22), 4898.
Viguier, J., Marcon, B., Girardon, S., and Denaud, L. (2017). “Effect of forestry management and veneer defects identified by X-ray analysis on mechanical properties of laminated veneer lumber beams made of beech,” BioResources 12(3), 6122-6133. DOI: 10.15376/biores.12.3.6122-6133
Yang, D., Jackson, M. R., and Parkin, R. M. (2006). “Inspection of wood surface waviness defects using the light sectioning method,” Proceedings of the Institution of Mechanical Engineers Part I-Journal of Systems and Control Engineering 220(7), 617-626. DOI: 10.1243/09596518jsce175
Yang, G. L., Zhang, Y. D., Yang, J. Q., Ji, G. L., Dong, Z. C., Wang, S. H., Feng, C., and Wang, Q. (2016). “Automated classification of brain images using wavelet-energy and biogeography-based optimization,” Multimedia Tools and Applications 75(23), 15601-15617. DOI: 10.1007/s11042-015-2649-7
Yang, Z., Yuan, Y., Zhang, M., Zhao, X., Zhang, Y., and Tian, B. (2019). “Safety distance identification for crane drivers based on Mask R-CNN,” Sensors (Basel) 19(12), Article Number 2789. DOI: 10.3390/s19122789
Zhang, Y. Z., Xu, C., Li, C., Yu, H. L., and Cao, J. (2015). “Wood defect detection method with PCA feature fusion and compressed sensing,” Journal of Forestry Research 26(3), 745-751. DOI: 10.1007/s11676-015-0066-4
Article submitted: October 28, 2019; Peer review completed: February 29, 2020; Revised version received: March 7, 2020; Accepted: March 8, 2020; Published: March 16, 2020.
DOI: 10.15376/biores.15.2.3041-3052