
Virtual Display of Wooden Furniture Cultural Relics Based on Laser and CT Scanning Technology

Guiling Zhao,a Xi He,a Jiaqing Cai,a Zhongji Deng,b,* and Dan Liu a,*

The 3D reconstruction and virtual display of wooden furniture cultural relics were investigated using laser scanning and CT scanning techniques. The suitability of different 3D reconstruction techniques and virtual display approaches was considered. The experiments demonstrated that digital models obtained from both laser scanning and CT scanning can be integrated seamlessly into virtual environments created with 3DMAX for exhibition purposes. Additionally, post-processing software, such as Premiere Pro (PR) or After Effects (AE), can be utilized to synthesize virtual display videos. The resulting images exhibit self-adaptation capabilities, with clear and undistorted 3D models and texture images. Moreover, both types of scanned models are suitable for printing 3D micro-scale models, although CT-based models tend to achieve higher printing accuracy than those generated by laser scanning. However, the precision of a 3D-printed model is contingent upon factors such as the precision of the digital model, the type of printer, and the printing material.

DOI: 10.15376/biores.19.3.4502-4516

Keywords: Laser scan; CT scan; Wooden furniture relics; Three-dimensional reconstruction; Virtual exhibition; 3D Printing

Contact information: a: Northeast Agricultural University, Harbin, P.R. China, 150040; b: Northeast Forestry University, Harbin, P.R. China, 150040; * Corresponding authors: 635353445@qq.com; 1442460795@qq.com


INTRODUCTION

Wooden artifacts constitute a significant portion of many museum collections. However, due to the unique physical properties of wood, issues such as cracking, decay, wear, and insect damage are common (Zhao et al. 2018, 2021). Therefore, these valuable cultural relics often may not be touched, and only replicas or limited displays are permitted. Finding multi-faceted and more realistic methods to convey information about these artifacts is not only essential for advancing scientific research and promoting cultural heritage, but it also holds important application and innovation value for Virtual Reality (VR) technology itself. Additionally, it can greatly enhance digital displays within museum settings.

Recently, several researchers have studied laser scanning and CT scanning. Three-dimensional (3D) reconstruction based on laser scanning has mainly concentrated on cultural relic studies. For instance, Tu et al. (2019) developed a digital 3D reconstruction system for cultural relics based on laser scanning, overcoming issues such as missing internal cavity information and texture distortion during the reconstruction process. Moreover, Huo and Yu (2020) designed a stereoscopic vision system based on 2D image modeling technology. Xi et al. (2020) developed an approach based on laser scanning combined with the normal vectors of space triangles to construct a triangle net: taking the surface curvature of the space triangle net as the constraint condition, they constructed the space triangle net of the object surface and then built the model. Furthermore, Tong et al. (2023) conducted a laser scanning-based 3D reconstruction of the “Damo Duo” colored sculpture at Lingyan Temple, focusing on color analysis and virtual restoration of textures.

3D reconstruction based on CT scanning is primarily applied in the medical field. For example, Di Laura et al. (2020) performed CT scans on the pelvis and used 3D imaging software to evaluate changes in pelvic shape and volume. Moreover, Ignatius et al. (2023) studied 3D model printing of the spine and pelvis regions based on CT datasets. Furthermore, scholars have also conducted research on algorithm optimization in this domain (Yan et al. 2021; Wang and Zhao 2023).

Research achievements in the virtual exhibition of cultural relics are interesting, and related studies can be summarized into two main areas. First, there has been research on virtual applications and platforms: Goh and Wang (2004) explored the potential of future virtual exhibitions by leveraging the development experience of the Scalable Vector Graphics (SVG) exhibition version and analyzing SVG as a substitute for Flash. Moreover, Gu et al. (2005) proposed a system structure model for virtual exhibition platforms by analyzing the requirements of the exhibition industry for constructing such platforms. In addition, Choi and Kim (2017) deployed content services for visitors’ museum experiences by combining beacons and HMDs. Finally, Ciurea and Filip (2019) described several approaches to implement virtual exhibitions using the MOVIO platform and the Android operating system. Second, there has been research on audience emotions and satisfaction with virtual exhibitions. Lin et al. (2020) explored the emotions expressed by the audience during the art appreciation process in desktop VR or Head-Mounted Display Virtual Reality (HMD VR). In addition, Xia et al. (2023) studied the role of virtual exhibition attributes in promoting exhibitor enthusiasm and satisfaction. Karnchanapayap (2023) evaluated the efficiency and satisfaction of VR experiences through audience participation in a virtual amusement park. Furthermore, Chung et al. (2024) compared user experiences in reality-based and virtual-based VR exhibition settings. Finally, Sylaiou et al. (2024) enhanced user experiences based on the visions of artists and curators, Augmented Reality (AR), and visitor demands, and analyzed relevant evaluation criteria. To sum up, research on laser scanning, CT scanning, virtual exhibition, and even 3D printing has been relatively isolated, with few comprehensive studies bridging these areas.

EXPERIMENTAL

Experimental Preparation

To compare experimental results, both laser scanning and CT scanning were employed on the specimens in this study. In the laser scanning experiments, the EVA scanner was initially employed for physical scanning. However, due to the overly complex shape of the specimen, some carving information in the vertical direction could not be fully captured. Therefore, the HandySCAN3D scanner, which has a higher accuracy, was used to collect the sample data. The workflow for the two laser scanners was essentially similar; the major difference is that specimens scanned with the EVA scanner do not require the placement of reference points, in contrast to specimens scanned with the HandySCAN3D, for which reference points yield higher accuracy.

Laser scanning steps

The specific operational steps are as follows:

Step 1: Clean the surface of the specimen before scanning. For HandySCAN3D scanning, reference points are affixed to the specimen at a density of at least four reference points per 10 cm² area, as illustrated in Fig. 1(a).

Step 2: Prepare the EVA and HandySCAN3D scanners. As depicted in Figs. 2 and 3, specimen data collection is initiated using the laser scanner. The scanning range is adjusted according to equipment prompts during scanning, maintaining a distance of 0.4 to 0.6 m. While scanning, the scanner should be held as perpendicular to the specimen as possible.

Step 3: Import the scanned point cloud data into Geomagic software. The point cloud data from multiple scans are merged manually, reverse-engineering repairs such as noise removal and hole filling are performed, and the data are then compressed. (A scripted sketch of these operations appears after this list.)

Step 4: Save the data after scanning is complete, and check whether the data quality meets standards. Once it does, the data are stored in the default format.

Step 5: Import the model into the 3DMAX virtual scene. The model is imported in OBJ or STL format, and its position, display scale, and other parameters are adjusted as required.

Step 6: Attach surface materials and perform 3D roaming. Surface materials are attached to the model, a 3D roam is completed, and the frame-sequence image files are exported in PNG format.

Step 7: Finalization. The frame sequence obtained in Step 6 is imported into Premiere Pro (PR) for voiceover, text, special effects, and other post-production, and then merged to generate the roaming video.
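For orientation, the following is a minimal sketch of the merging, denoising, and surface-reconstruction operations of Steps 3 and 4, using the open-source Open3D library as a stand-in for Geomagic. The file names, outlier-removal settings, and Poisson depth are illustrative assumptions rather than the parameters used in this study.

```python
# Hypothetical stand-in for Geomagic (Steps 3 and 4): merge partial scans,
# remove noise, reconstruct a surface, and export a mesh.
import open3d as o3d

# Load two partial scans; the file names are placeholders.
scan_a = o3d.io.read_point_cloud("scan_view1.ply")
scan_b = o3d.io.read_point_cloud("scan_view2.ply")

# Merge the viewpoint point clouds (assumed already registered
# into a common coordinate system).
merged = scan_a + scan_b

# Statistical outlier removal as a simple form of denoising.
merged, _ = merged.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Poisson reconstruction needs normals; it also yields a closed surface,
# which takes the place of explicit hole filling.
merged.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    merged, depth=9)

# Export in STL format for later import into the 3DMAX scene (Step 5).
mesh.compute_triangle_normals()
o3d.io.write_triangle_mesh("specimen.stl", mesh)
```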

Table 1. Main Parameters of EVA Scanner

Table 2. Main Parameters of the Handyscan Scanner

CT scanning steps

The procedure is as follows:

Step 1: Place specimen N2 in the Philips 16-row CT scanner chamber, and set the CT scan parameters as specified in Table 3.

Step 2: Import all specimen images obtained from the CT scan in DICOM format into Matlab software. Median filtering is applied to each image for noise reduction. (A scripted sketch of this step appears after this list.)

Step 3: Use MATLAB (MathWorks, Natick, MA, USA) on the Windows 7 operating system (Microsoft, Redmond, WA, USA) to segment each image and convert it to the BMP format.

Step 4: Employ a self-developed medical imaging system based on the Visualization ToolKit (VTK) for 3D reconstruction of the specimen CT images using surface rendering.

Step 5: Import the model in OBJ or STL format into the 3DMAX virtual scene. The position, display scale, and other parameters are adjusted as needed.

Step 6: Attach surface materials to the model and complete a 3D roam. The frame sequence image files are exported in the PNG format.

Step 7: The frame sequence obtained in Step 6 is imported into PR software for voiceover, text, special effects, and other post-production, and then merged to generate the roaming video.
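As an illustration of Step 2, the sketch below reads a DICOM series and applies median filtering in Python, assuming the pydicom and SciPy libraries; the folder name and the 3 x 3 filter window are hypothetical stand-ins for the unstated Matlab settings.

```python
# Hypothetical sketch of Step 2: load CT slices in DICOM format and
# apply median filtering for noise reduction.
from pathlib import Path

import numpy as np
import pydicom
from scipy.ndimage import median_filter

slices = []
for path in sorted(Path("ct_dicom").glob("*.dcm")):  # placeholder folder
    ds = pydicom.dcmread(path)
    # A 3x3 median filter suppresses speckle noise while keeping edges.
    slices.append(median_filter(ds.pixel_array.astype(np.float32), size=3))

# Stack the filtered 2D slices into a 3D volume for the later
# segmentation and surface-rendering steps.
volume = np.stack(slices, axis=0)
print(volume.shape)
```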

Materials

A total of five samples were used. The size, material, carving process, and other related information of the specimens are displayed in Table 4.

Table 3. Philips 16 Row CT Scan Parameters

Analysis of Laser Scanning Experiments

During the laser scanning process, multiple viewpoints must be set up to capture the 3D model of the specimen. The 3D data measured from each viewpoint are in a local coordinate system relative to that viewpoint. Only by combining the 3D data from all viewpoints into one coordinate system can complete data for the cultural relic be obtained. The experiment employs the classical Iterative Closest Point (ICP) algorithm for precise registration, which requires that the 3D data point sets of the viewpoints overlap; the registration unit is a point. The basic principle of the ICP algorithm (Gao 2015) is as follows. Assume that there are two point cloud data sets in the 3D space R3, namely point sets PL and PR, represented as,

$P_L = \{ p_{Li} \mid p_{Li} \in \mathbb{R}^3,\ i = 1, 2, \ldots, n \}$                (1)

$P_R = \{ p_{Ri} \mid p_{Ri} \in \mathbb{R}^3,\ i = 1, 2, \ldots, n \}$                (2)

where n is the number of points in each point set. The points in point set PL correspond one-to-one with the points in point set PR through a 3D rigid transformation. The single-point transformation relationship is as follows,

$p_{Ri} = \mathbf{R}(q)\, p_{Li} + \mathbf{t}, \qquad i = 1, 2, \ldots, n$                (3)

$X = [\, q_0, q_x, q_y, q_z, t_x, t_y, t_z \,]^{T}$                (4)

where q0, qx, qy, and qz in the parameter vector of Eq. (4) are called quaternion parameters, satisfying the constraint conditions (Wang et al. 2018):

$q_0^2 + q_x^2 + q_y^2 + q_z^2 = 1, \qquad q_0 \geq 0$                (5)

According to the initial value of the iteration, X0, the new point set Pi is calculated by applying the transformation parameterized as in Eq. (4):

$P_i = X_{i-1}(P) = \mathbf{R}(q_{i-1})\, P + \mathbf{t}_{i-1}$                (6)

Referring to Eq. (6), P represents the original, unmodified point set; the subscript i of Pi denotes the iteration number; and the initial value of the parameter vector is $X_0 = [1, 0, 0, 0, 0, 0, 0]^T$ (Zhang 2015).
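To make the iteration concrete, the following is a compact NumPy/SciPy sketch of the ICP loop of Eqs. 1 through 6. Instead of re-deriving the quaternion eigen-solution, it calls SciPy's align_vectors, which solves the same per-iteration rigid-fit subproblem that the unit quaternion of Eq. 5 parameterizes; the iteration count and tolerance are illustrative.

```python
# A minimal ICP sketch: register point set PL (n x 3) onto PR (n x 3).
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation


def icp(p_l, p_r, iters=50, tol=1e-6):
    """Return (R, t) such that R @ p + t maps PL approximately onto PR."""
    src = p_l.copy()
    tree = cKDTree(p_r)
    rot_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # Closest-point correspondences (the "CP" of ICP).
        dist, idx = tree.query(src)
        matched = p_r[idx]
        # Best rigid fit for these correspondences; equivalent to
        # estimating the unit quaternion of Eq. (5).
        c_src, c_dst = src.mean(axis=0), matched.mean(axis=0)
        rot, _ = Rotation.align_vectors(matched - c_dst, src - c_src)
        r = rot.as_matrix()
        t = c_dst - r @ c_src
        # Apply the updated transformation to the point set, as in Eq. (6).
        src = src @ r.T + t
        rot_total, t_total = r @ rot_total, r @ t_total + t
        # Stop when the mean correspondence distance no longer improves.
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return rot_total, t_total
```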

Table 4. Basic Condition of the Specimen

Based on the registration method described above, the next step is to import all of the point cloud data, in ASC format, into Geomagic software. The manifold function module of the software is used for denoising. After that, the data from all viewpoints are registered and aligned in a single spatial coordinate system and integrated to generate a 3D mesh model, which is subsequently compressed. For example, the 3D mesh model for specimen N1, obtained using HandySCAN scanning, has 1.47 million faces. For ease of subsequent processing, it is necessary to compress the 3D data; compression should simplify the number of polygons while preserving surface details and colors. Figure 1(b) illustrates the situation before compression, with about 1,470,000 faces, while Fig. 1(c) shows the situation after compression, reduced to about 183,000 faces, a data reduction of about 87.5%. From Fig. 1(c), it can be observed that the model maintained its edge features in an excellent state after compression. Then, the point cloud data are synthesized into a whole, which involves generating the overall surface, also known as surface reconstruction, to obtain a physical digital model.


Fig. 1. (a) Reference points affixed to specimen N1, (b) data before compression (1,467,594 faces), and (c) data after compression (183,449 faces)
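For comparison, polygon reduction of this kind can be scripted with quadric edge-collapse decimation; the sketch below uses Open3D as a stand-in for Geomagic's compression, with placeholder file names and only the target face count taken from Fig. 1.

```python
# Hypothetical sketch of mesh compression via quadric decimation.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("n1_scan.obj")  # placeholder file name
print("faces before:", len(mesh.triangles))      # about 1,470,000

# Quadric edge-collapse decimation reduces the polygon count (here by
# roughly 87.5%) while tending to preserve sharp edge features.
simplified = mesh.simplify_quadric_decimation(
    target_number_of_triangles=183000)
simplified.compute_vertex_normals()
o3d.io.write_triangle_mesh("n1_compressed.obj", simplified)
```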

For specimen N2, a 3D model was obtained using the EVA scanner, as shown in Fig. 2(a). Comparing Fig. 1(b) and Fig. 2(a), numerous green point clouds are visible along the carved edges in Fig. 2(a), highlighting data loss during the scanning process. Referring to Fig. 2(b), the wood grain on the specimen’s surface is vaguely visible, whereas Fig. 1(c) lacks surface texture. This difference is due to variations in texture mapping between the two laser scanners. The processed 3D model file for specimen N1 is 200 kB and can be displayed in a web format, allowing users to quickly browse and control the 3D model. The same processing method was applied to specimens N4 and N5, which were scanned using the HandySCAN.


Fig. 2. (a) Model of specimen N2 after scanning and (b) Model of N2 specimen after manual processing

Analysis of CT Scanning Experiments

Because the target objects in CT images of wooden specimens have similar grayscale values and some targets are interconnected, the improved 3D TV-L1 algorithm is suitable. This image processing technique was introduced by Zhao et al. (2021); thus, only a brief overview of the experimental process is provided in this paper.

To validate the effectiveness of this method, image segmentation experiments were conducted using MATLAB running on the Windows 7 operating system. CT image data from wooden specimen N3 were selected, and the wooden model portion was extracted, resulting in a sequence of 690 images. As an example, image processing was performed on the 155th original image, as shown in Fig. 3(a).


Fig. 3. (a) 155th image of the original CT specimen, (b) 155th CT image after filtering, and (c) 3DTVL1 segmentation treated result

First, the N3 wooden model image was subjected to denoising and edge-preserving processing through bilateral filtering, resulting in Fig. 3(b). It is evident that the grayscale in the specimen area was uniform, and the organizational boundaries were clear and smooth. Subsequently, starting from Fig. 3(b), the 3D TV-L1 algorithm yielded the separation of the wooden model and non-wooden model portions of the N3 specimen, as shown in Fig. 3(c).
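The following 2D sketch conveys the flavor of this pipeline on a single slice, assuming the OpenCV and scikit-image libraries. It substitutes TV (Chambolle, an L2 data term) regularization and Otsu thresholding for the authors' improved 3D TV-L1 algorithm, so it is an approximation of the approach, not the method itself.

```python
# Simplified 2D stand-in for the bilateral filtering + TV segmentation
# pipeline; file names are placeholders.
import cv2
import numpy as np
from skimage.filters import threshold_otsu
from skimage.restoration import denoise_tv_chambolle

slice_img = cv2.imread("ct_slice_155.bmp", cv2.IMREAD_GRAYSCALE)

# Edge-preserving denoising, analogous to Fig. 3(b).
smooth = cv2.bilateralFilter(slice_img, d=9, sigmaColor=75, sigmaSpace=75)

# TV regularization flattens within-region grayscale variation; note this
# is the Chambolle (TV-L2) variant, not the paper's TV-L1 formulation.
tv = denoise_tv_chambolle(smooth.astype(np.float32) / 255.0, weight=0.1)

# Threshold to separate wooden from non-wooden regions, analogous to
# Fig. 3(c).
mask = tv > threshold_otsu(tv)
cv2.imwrite("ct_slice_155_mask.png", (mask * 255).astype(np.uint8))
```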


Fig. 4. (a) Surface rendering result and (b) volume rendering result

Image-based 3D reconstruction involves two approaches: surface rendering and volume rendering. Surface rendering processes a series of 2D images through segmentation techniques, such as boundary recognition, to reconstruct the 3D model of the inspected object and present it as a surface projection (Zhao et al. 2021). The virtual display only requires the specimen’s appearance model, so surface rendering suffices, as shown in Fig. 4(a). If virtual sectioning or non-destructive testing of the specimen’s interior from arbitrary angles and positions is required, volume rendering must be used (Zhao et al. 2021), as shown in Fig. 4(b). 3D reconstruction is not only a necessary process for 3D printing, but it also facilitates data archiving. The reconstructed model can be imported into the 3DMAX software for virtual scene modeling.
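As a concrete stand-in for the VTK-based surface rendering used here, the sketch below extracts an isosurface from a segmented CT volume with marching cubes and exports a mesh for 3DMAX or a slicer; the scikit-image and trimesh libraries and the file names are assumptions of this example.

```python
# Hypothetical surface-rendering sketch: binary CT volume -> STL mesh.
import numpy as np
import trimesh
from skimage import measure

volume = np.load("segmented_volume.npy")  # placeholder binary mask stack

# Marching cubes extracts the 0.5-level isosurface of the mask,
# i.e., the outer surface of the wooden model.
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5)

# Export to STL for import into 3DMAX or slicing software.
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("specimen_surface.stl")
```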

Virtual Exhibition Based on Laser and CT Scanning 3D Reconstruction

Conventional geometric modeling in 3DMAX requires considerable time for complex objects, and precise 3D reconstruction of irregular specimens with intricate details, such as engravings and reliefs, is challenging. To ensure that the exhibition effect is faithful to the actual objects, the 3DMAX tool was chosen to construct the 3D exhibition scenes.

Fig. 5. Models imported into the exhibition scene

In this paper, the virtual display production steps can be divided into 3D modeling, map production, 3D roaming, and virtual video synthesis; the specific steps are as follows.

The first step consists of gradually creating the virtual scene based on the envisioned design. This involves designing the booth, wall decorations, ceiling shapes, the positions of functional and decorative lighting, and illumination levels. Next, the model is imported in OBJ or STL format into the scene, and its position and display scale are adjusted as needed. Figure 5 illustrates the effect after importing specimens N3 and N5 into the scene, while Fig. 6 displays the 3D model after importing specimen N4.


Fig. 6. (a) Model of specimen N4 after importing into 3DMAX, (b) unfolded diagram of the physical appearance texture of specimen N4, and (c) texture effect of specimen N4

The second step consists of attaching materials to the scene models. All models obtained in the early stage of this experiment can be assigned the software’s built-in materials or attached bitmaps. Specimens scanned with the HandySCAN3D apply texture mapping based on captured textures to obtain models that are highly consistent with the physical appearance. However, texture-mapping experiments are costly, the 3D models imported from CT could not carry a physical appearance texture in the preliminary experiments, and the models imported from the two experiments share a consistent format; therefore, baking technology was tried next to complete the appearance texture of specimen N4. The process is described as follows:

The N4 model is first unwrapped in 3DMAX using the UV editor to generate baked unfolded images of each distinct face, which are exported as PNG images. The second step consists of importing the PNG baking images into Photoshop. In this software, high-definition photographs of each facade, corresponding to the split images of the baked unfolded interfaces, are substituted in one by one; during this task, attention was paid to the correspondence of light, shadow, and texture with the baked unfolded image. In the third step, a TIF image is exported from Photoshop and imported into 3DMAX as a material map for UV mapping of the model. The EVA laser-scanned model imported into the scene can be textured using a bitmap: the bitmap is the Atlas UV unfolded image generated by the Artec Studio software during scanning, so it is highly consistent with the physical appearance after pasting, as shown in Fig. 6(c).

The third step comprises the virtual video production. First, images are rendered in 3DMAX, each at 800 × 480 pixels. The video lasts 1 minute and 48 seconds and requires a total of 2,880 images, with each image taking about 16 minutes to render. Due to the large size of the combined scene and model, the total rendering time exceeds 700 hours. After rendering, the frame-sequence images are exported in PNG format, and the PR software is used for video synthesis. During the synthesis process, voiceovers, text, special effects, and other elements can be added. The final result is a virtual video with a size of 3,600 MB. A screenshot of the virtual video is shown in Fig. 7.

Fig. 7. Virtual video capture (the 26th second)
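As a non-interactive alternative to synthesis in PR, a rendered frame sequence such as this can also be assembled from the command line; the sketch below assumes FFmpeg is installed, the frame-name pattern is a placeholder, and the frame rate simply follows from 2,880 frames over 108 s (about 26.67 fps).

```python
# Assemble the PNG frame sequence into a video with FFmpeg.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "26.67",          # 2,880 frames over 108 seconds
    "-i", "frames/frame_%04d.png",  # placeholder frame-name pattern
    "-c:v", "libx264",              # H.264 encoding
    "-pix_fmt", "yuv420p",          # broad player compatibility
    "roaming_video.mp4",
], check=True)
```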

RESULTS AND DISCUSSION

Specimens N1, N4, and N5 were scanned with the HandySCAN scanner, whereas specimen N2 was scanned using the EVA scanner. In the Geomagic software, digital models of higher accuracy can be obtained from such images through manual repair. The higher the accuracy of the scanner, the more ideal the data information obtained from the specimens. Compared with all the other digital models, the digital model of specimen N3 obtained by CT scanning had the highest accuracy.

All of these digital models can be imported into 3DMAX and PR software for subsequent virtual exhibition production. The videos produced with this technology exhibited adaptive capabilities, allowing distortion-free local magnification. Moreover, 3D models based on laser scanning, thanks to their texture-mapping function, can achieve an appearance texture consistent with the actual object. Models based on CT scanning, when volume rendering is applied, can capture internal information, including the wood rays and wood textures of the specimens. Nevertheless, neither surface- nor volume-rendered models can capture the external appearance texture of the specimens. This limitation can be addressed by applying the external appearance texture-attachment method used for specimen N4 to achieve consistency with the actual appearance. The parameters of the 3D printers used in this experiment are shown in Tables 5 and 6.

Table 5. Parameters of the SS2 3D and FDM 3D Printers

Table 6. Parameters of Nylon 3D Printer

Table 7. 3D Printing Results of 3D Digital Models Based on Laser and CT Scanning

Digital models based on laser scanning and CT scanning can both be used for 3D model printing, and the required steps are similar: the 3D reconstruction model is exported to STL format and then imported into slicing software to generate G-code files for 3D printing. To compare printing model accuracy, specimens N2 and N5, which have similar processes and scales, were selected for printing. First, both specimens were printed using an SS2 3D printer with polylactic acid (PLA) as the printing material. Comparing the printed models, the N2 model from CT scanning had higher accuracy, as illustrated in Table 7. Due to the complex shape of the N5 specimen, coordinate acquisition for the hollow part in the depth direction was not achieved, and despite manual repairs, the model accuracy was still not ideal; this is the primary reason for the relatively rough 3D-printed model. Next, the accuracy variation of the N2 digital model with different printers and printing materials was compared. The N2 specimen was printed using both a nylon printer and an FDM printer, for nylon and wood-plastic composite material respectively, with the 3D printer’s forming flow rate set to its highest value. Among the printed models, the accuracy of the nylon model was somewhat lower, the accuracy of the PLA model was intermediate, and the accuracy of the wood-plastic composite model was the highest, as displayed in Table 7.
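Because slicers assume a closed (watertight) surface, it is worth checking the exported STL before generating G-code. The following small sketch uses the trimesh library, which is an assumption of this example rather than a tool named in the study; file names are placeholders.

```python
# Check and repair an exported STL before slicing.
import trimesh

mesh = trimesh.load("n2_ct_model.stl")  # placeholder file name
if not mesh.is_watertight:
    # Fill small holes left over from scanning or segmentation and
    # make face normals consistent.
    trimesh.repair.fill_holes(mesh)
    trimesh.repair.fix_normals(mesh)
print("watertight:", mesh.is_watertight, "faces:", len(mesh.faces))
mesh.export("n2_print_ready.stl")
```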

CONCLUSIONS

  1. The technology used in this paper can improve the exhibition rate and effect of cultural relics. In practical applications, virtual display videos can be supplemented with text, sound, and other means to stimulate the audience with multi-sensory information, overcoming the shortcomings of single physical display methods and their lack of inspiration. This is conducive to the audience’s full understanding of cultural relic collections. Virtual display videos have good compatibility with the image display devices commonly used in museums, and they can also be used in online museums, virtual museums, and other formats. The digitization of cultural relics not only enables 3D representation and virtual display, but it also yields accurate digital models preserving the original, authentic 3D and textural information of the relics. This provides crucial data and model support for the restoration and repair of cultural relics.
  2. Virtual display video production requires a significant amount of time; this video, lasting 1 minute and 48 seconds with a file size of 179,933 KB, took over 700 hours of rendering and post-production synthesis on a desktop computer. Further research is needed to explore ways to create virtual exhibitions quickly and efficiently. Additionally, there is a pressing need to improve both 3D printing materials and technology. Such improvement would help achieve 3D-printed models completely consistent with the texture and form of the original specimens, facilitating highly realistic displays in museums.

ACKNOWLEDGEMENTS

This work was supported by the Heilongjiang Provincial Philosophy and Social Science Late-stage Funding Project (grant number 2020YSB056), China.

REFERENCES CITED

Choi, H., and Kim, S. H. (2017). “A content service deployment plan for metaverse museum exhibitions – Centering on the combination of beacons and HMDs,” International Journal of Information Management 37(1), 1519-1527. DOI: 10.1016/j.ijinfomgt.2016.04.017

Chung, S. J., Kim, S. Y., and Kim, K. H. (2024). “Comparison of visitor experiences of virtual reality exhibitions by spatial environment,” International Journal of Human-Computer Studies 181, article 103145. DOI: 10.1016/j.ijhcs.2023.103145

Ciurea, C., and Filip, F. G. (2019). “Virtual exhibitions in cultural institutions: Useful applications of informatics in a knowledge-based society,” Studies in Informatics and Control 28(1), 55-63. DOI: 10.24846/v28i1y201906

Di Laura, A., Henckel, J., and Hart, A. J. (2020). “The effect of metal artefact on the design of custom 3D printed acetabular implants,” 3D Printing in Medicine 6(1), 1-11. DOI: 10.1186/s41205-020-00074-5

Huo, J., and Yu, X. (2020). “Three-dimensional mechanical parts reconstruction technology based on two-dimensional image,” International Journal of Advanced Robotic Systems 17(2), article 1729881420910008. DOI: 10.1177/1729881420910008

Ignatius, D., Alkhatib, Z., and Sabet, M. (2023). “Radiotherapy planning of spine and pelvis using single-energy metal artifact reduction corrected computed tomography sets,” Physics & Imaging in Radiation Oncology 26, article 100449. DOI: 10.1016/j.phro.2023.100449

Gao, H. (2015). Research on 3D digitization of cultural relics based on laser scanning, Master’s Thesis, Northeast Forestry University.

Goh, D. H. L., and Wang, J. C. E. (2004). “Designing a virtual exhibition using Scalable Vector Graphics,” Aslib Proceedings 56(3), 144-155. DOI: 10.1108/00012530410539331

Gu, L., Li, L., and Su, J. (2005). “Research and implementation of virtual exhibition platform based on VR,” Application Research of Computers 22(10), 202-205.

Karnchanapayap, G. (2023). “Activities-based virtual reality experience for better audience engagement,” Computers in Human Behavior 146, article 107796. DOI: 10.1016/j.chb.2023.107796

Lin, C., Chen, S., and Lin, R. (2020). “Efficacy of virtual reality in painting art exhibitions appreciation,” Applied Sciences-Basel 10(9), article 3012. DOI: 10.3390/app10093012

Sylaiou, S., Dafiotis, P., and Fidas, C. (2024). “From physical to virtual art exhibitions and beyond: Survey and some issues for consideration for the metaverse,” Journal of Cultural Heritage 66, 86-98. DOI: 10.1016/j.culher.2023.11.002

Tong, Y., Cai, Y., Austin, N., and Ma, Q. (2023). “Digital technology virtual restoration of the colours and textures of polychrome Bodhidharma statue from the Lingyan Temple, Shandong, China,” Heritage Science 11(12), 1-18. DOI: 10.1186/s40494-023-00858-y

Tu, D., Lan, H., and Zhang, X. (2019). “Digital three-dimensional reconstruction technology of cultural relics,” Laser & Optoelectronics Progress 56(19), 1-7. DOI: 10.3788/LOP56.191504

Wang, Y., Wang, Y., and Han, X. (2018). “A unit quaternion based, point-linear feature constrained registration approach for terrestrial LiDAR point clouds,” Journal of China University of Mining & Technology 64(3), 671-677.

Wang, W., and Zhao, H. (2023). “3D reconstruction of movable cultural relics based on salient region optimization,” The Journal of China Universities of Posts and Telecommunications 30(5), 11-31.

Xi, W., Li, L., and Li, J. (2020). “Three-dimensional reconstruction of irregular cultural relics based on point cloud data: Taking Buddhist stone pillar as an example,” Bulletin of Surveying and Mapping (12), 87-89.

Xia, Q., Wang, S., and Jose, W. (2023). “The use of virtual exhibition to promote exhibitors’ pro-environmental behavior: The case study of Zhejiang Yiwu International Intelligent Manufacturing Equipment Expo,” PLOS ONE 18(11), article e0294502. DOI: 10.1371/journal.pone.0294502

Yan, B., Chen, L., and Ma, H. L. (2021). “3D ultrasonic reconstruction of contour node represented voids and cracks,” NDT & E International 117, article 102382. DOI: 10.1016/j.ndteint.2020.102382

Zhang, B. (2015). Research on Spatial Registration for Three-Dimensional Laser Scanning Data, Master’s Thesis, Xi’an University of Science and Technology.

Zhao, G., Liu, C., and Deng, Z. (2021). “3D morphology of internal defects in wooden products based on computed tomography,” BioResources 16(3), 6267-6280.

Zhao, G., Ryan, C., Deng, Z., and Gong, J. (2018). “Carrying capacity and its implications in a Chinese Ancient Village: The case of Hongcun,” Asia Pacific Journal of Tourism Research 23(1), 1-21. DOI: 10.1080/10941665.2017.1421566

Article submitted: February 10, 2024; Peer review completed: April 6, 2024; Revised version received and accepted: April 27, 2024; Published: May 20, 2024.

DOI: 10.15376/biores.19.3.4502-4516