Zhao, G., He, X., Cai, J., Deng, Z., and Liu, D. (2024). “Virtual display of wooden furniture cultural relics based on laser and CT scanning technology,” BioResources 19(3), 4502-4516.

Abstract

The 3D reconstruction and virtual display of wooden furniture cultural relics were investigated using laser scanning and CT scanning techniques. The suitability of different 3D reconstruction techniques and virtual display approaches was considered. The experiments demonstrated that digital models obtained from both laser scanning and CT scanning can be integrated seamlessly into virtual environments created with 3DMAX for exhibition purposes. Additionally, post-processing software, such as PR or AE, can be utilized to synthesize virtual display videos. The resulting images exhibit self-adaptation capabilities, with clear and undistorted 3D models and texture images. Moreover, both types of scanned models are suitable for 3D micro-scale model printing, although CT-based models tend to achieve higher printing accuracy than those generated by laser scanning. However, the precision of a 3D printed model is contingent upon factors such as the precision of the digital model, the type of printer, and the printing material.



Virtual Display of Wooden Furniture Cultural Relics Based on Laser and CT Scanning Technology

Guiling Zhao,a Xi He,a Jiaqing Cai,a Zhongji Deng,b,* and Dan Liu a,*


DOI: 10.15376/biores.19.3.4502-4516

Keywords: Laser scan; CT scan; Wooden furniture relics; Three-dimensional reconstruction; Virtual exhibition; 3D Printing

Contact information: a: Northeast Agricultural University, Harbin, P.R. China, 150040; b: Northeast Forestry University, Harbin, P.R. China, 150040; * Corresponding authors: 635353445@qq.com; 1442460795@qq.com

GRAPHICAL ABSTRACT

 

INTRODUCTION

Wooden artifacts constitute a significant portion of many museum collections. However, due to the unique physical properties of wood, issues such as cracking, decay, wear, and insect damage are common (Zhao et al. 2018, 2021). Therefore, these valuable cultural relics often may not be touched, and only replicas or limited displays are allowed. Finding multi-faceted and more realistic ways to convey information about these artifacts is essential for scientific research and the promotion of cultural heritage; it also holds important application and innovation value for Virtual Reality (VR) technology itself and can greatly enhance digital displays in museum settings.

In recent years, several researchers have studied laser scanning and CT scanning. Three-dimensional (3D) reconstruction based on laser scanning has mainly been concentrated on cultural relic studies. For instance, Tu et al. (2019) developed a digital 3D reconstruction system for cultural relics based on laser scanning, overcoming issues such as missing internal cavity information and texture distortion during the reconstruction process. Moreover, Huo and Yu (2020) designed a stereoscopic vision system based on 2D image modeling technology. Xi et al. (2020) developed an approach based on laser scanning combined with the normal vectors of spatial triangles to construct a triangular mesh. To do so, they took the surface curvature of the spatial triangular mesh as the constraint condition, constructed the triangular mesh of the object surface, and then built the model. Furthermore, Tong et al. (2023) conducted a laser scanning-based 3D reconstruction of the “Damo Duo” colored sculpture at Lingyan Temple, focusing on color analysis and virtual restoration of textures.

3D reconstruction based on CT scanning is primarily applied in the medical field. For example, Di et al. (2020) performed CT scans on the pelvis utilizing 3D imaging software to evaluate changes in the pelvic shape and volume. Moreover, Ignatius et al. (2023) studied the 3D model printing of the spine and pelvis regions based on CT datasets. Furthermore, scholars also conducted research on algorithm optimization in this domain (Yan et al. 2021; Wang and Zhao 2023).

Research achievements in the virtual exhibition of cultural relics are also noteworthy, and related studies can be summarized into two main areas. First, there has been research on virtual applications and platforms: Goh and Wang (2004) explored the potential of future virtual exhibitions by leveraging the development experience of the Scalable Vector Graphics (SVG) exhibition version and analyzing SVG as a substitute for Flash. Moreover, Gu (2005) proposed a system structure model for virtual exhibition platforms after analyzing the exhibition industry's requirements for constructing such platforms. In addition, Choi and Kim (2017) deployed content services for visitors' museum experiences by combining beacons and head-mounted displays (HMDs). Finally, Ciurea and Filip (2019) described several approaches to implementing virtual exhibitions using the MOVIO platform and the Android operating system. Second, there has been research on audience emotions and satisfaction with virtual exhibitions. Lin et al. (2020) explored the emotions expressed by audiences during art appreciation in desktop VR and head-mounted display virtual reality (HMD VR). In addition, Xia et al. (2023) studied the role of virtual exhibition attributes in promoting exhibitor enthusiasm and satisfaction. Karnchanapayap (2023) evaluated the efficiency of and satisfaction with VR experiences through audience participation in a virtual amusement park. Furthermore, Chung et al. (2024) compared user experiences in reality-based and virtual-based VR exhibition settings. Finally, Sylaiou et al. (2024) enhanced user experiences based on the visions of artists and curators, Augmented Reality (AR), and visitor demands, and analyzed relevant evaluation criteria. To sum up, research on laser scanning, CT scanning, virtual exhibition, and even 3D printing has been relatively isolated, with few comprehensive studies bridging these areas.

EXPERIMENTAL

Experimental Preparation

To allow comparison of the experimental results, both laser scanning and CT scanning were employed for the specimens in this study. In the laser scanning experiment, the EVA scanner was used first to scan the objects. However, because the shape of the specimen was overly complex, some carving information in the vertical direction could not be fully captured. Therefore, the HandySCAN3D scanner, which has higher accuracy, was used to collect the sample data. The workflows of the two laser scanners were essentially similar; the main difference is that specimens scanned with the EVA scanner do not require the placement of reference points, whereas specimens scanned with the HandySCAN3D do, which yields higher accuracy.

Laser scanning steps

The specific operational steps are as follows:

Step 1: Clean the surface of the specimen before scanning. For HandySCAN3D scanning, reference points are affixed to the specimens. The density of these points should be controlled so that each 10 cm² area contains at least four reference points, as illustrated in Fig. 1(a).

Step 2: Prepare the EVA and HandySCAN3D scanners. As depicted in Figs. 2 and 3, specimen data collection is initiated using the laser scanner. The scanning range is adjusted according to the equipment prompts during scanning, maintaining a distance of 0.4 to 0.6 m. While scanning, the scanner should be kept as close to perpendicular to the specimen as possible.

Step 3: Import the scanned point cloud data into the Geomagic software. The point cloud data from the multiple scans are merged manually. Reverse-engineering repairs, such as noise removal and hole filling, are performed, and the data are then compressed.

Step 4: Save the data after scanning is complete, and check whether the data quality meets the standard. Once it does, the data are stored in the default format.

Step 5: Import the model into the 3DMAX virtual scene. The model is imported in OBJ or STL format, and the position, display scale, and other parameters are adjusted according to the requirements.

Step 6: Surface material attachment and 3D roaming. Surface materials are attached to the model and 3D roaming is achieved. The frame sequence image files are extracted in PNG format.

Step 7: Finalization. The files obtained in Step 6 are imported into PR software for narration, text, special effects, and other post-production, and are merged to generate the roaming video.

Table 1. Main Parameters of EVA Scanner

Table 2. Main Parameters of the Handyscan Scanner

CT scanning steps

The procedure is as follows:

Step 1: Place specimen N2 in the chamber of the Philips 16-row CT scanner, and set the CT scan parameters as specified in Table 3.

Step 2: Import all specimen images obtained from the CT scan in DICOM format into the MATLAB software. Median filtering is applied to each image for noise reduction.

Step 3: Utilize MATLAB (Massachusetts, USA) on the Windows 7 operating system (Washington state, USA) to segment each image and convert it to the BMP format.

Step 4: Employ a self-developed medical imaging system based on the Visualization ToolKit (VTK) for 3D reconstruction of the specimen CT images using surface rendering.

Step 5: Import the model in OBJ or STL format into the 3DMAX virtual scene. The position, display scale, and other parameters are adjusted as needed.

Step 6: Attach surface materials to the model and complete a 3D roam. The frame sequence image files are exported in the PNG format.

Step 7: The files obtained in Step 6 are imported into PR software for narration, text, special effects, and other post-production, and are merged to generate the roaming video.
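The per-slice denoising of Steps 2 and 3 was carried out in MATLAB. As an illustrative stand-in, the median-filtering stage can be sketched in Python with numpy; reading an actual DICOM series would typically rely on a library such as pydicom, which is omitted here, so the slice below is a synthetic array:

```python
import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter to a 2D grayscale image (edge-padded).

    Pure-numpy stand-in for MATLAB-style 2D median filtering; k must be odd.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Collect every k x k neighborhood as a stack of shifted views.
    stack = np.stack([
        padded[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(k) for j in range(k)
    ])
    return np.median(stack, axis=0)

# Demo: a flat synthetic slice with one bright noise pixel is smoothed out.
slice_ = np.full((5, 5), 100.0)
slice_[2, 2] = 255.0          # isolated salt noise
den = median_filter(slice_)
print(den[2, 2])              # → 100.0 (noise replaced by neighborhood median)
```

Median filtering suits CT slices because it removes isolated noise pixels without blurring edges as strongly as a mean filter would.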

Materials

A total of five samples were used. The size, material, carving process, and other related information for the specimens are displayed in Table 4.

Table 3. Philips 16 Row CT Scan Parameters

Analysis of Laser Scanning Experiments

During the laser scanning process, multiple viewpoints must be set up to capture the 3D model of the specimen. The 3D data measured from each viewpoint are expressed in a local coordinate system relative to that viewpoint. Only by combining the 3D data from all viewpoints into one coordinate system can the cultural relic's data be completed. The experiment employs the classical Iterative Closest Point (ICP) algorithm for precise registration, which requires that the 3D data point sets of the viewpoints overlap; the registration unit is a point. The basic principle of the ICP algorithm (Gao 2015) assumes that there are two point cloud data sets in the 3D space R³, namely point sets PL and PR, represented as follows,

PL = {pLi ∈ R³ | i = 1, 2, …, n}     (1)

PR = {pRi ∈ R³ | i = 1, 2, …, n}     (2)

where n is the number of points in each point set. The points in point set PL correspond one-to-one with the points in point set PR through a rigid transformation in 3D space. The single-point transformation relationship is as follows,

pRi = R·pLi + t     (3)

X = [q0, qx, qy, qz, tx, ty, tz]ᵀ     (4)

where R is the rotation matrix, t = [tx, ty, tz]ᵀ is the translation vector, and q0, qx, qy, and qz in the parameter vector of Eq. (4) are called quaternion parameters, satisfying the constraint condition (Wang et al. 2018):

q0² + qx² + qy² + qz² = 1     (5)

According to the initial value of iteration X0, the new point set Pi is calculated based on Eq. (4):

Pi = R(Xi)·P + t(Xi)     (6)

In Eq. (6), P represents the original, unmodified point set; the subscript i of Pi denotes the iteration number; and X0 is the initial value of the parameter vector (Zhang 2015).
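The registration that Eqs. (1) through (6) parameterize can be illustrated with a short numpy sketch. The closed-form quaternion step below follows Horn's absolute-orientation method, a standard way to realize a quaternion-based alignment, not necessarily the authors' exact implementation; it also assumes the point correspondences are already known, i.e., it is the inner step of a single ICP iteration:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion [q0, qx, qy, qz]."""
    q0, qx, qy, qz = q
    return np.array([
        [q0*q0+qx*qx-qy*qy-qz*qz, 2*(qx*qy-q0*qz),         2*(qx*qz+q0*qy)],
        [2*(qx*qy+q0*qz),         q0*q0-qx*qx+qy*qy-qz*qz, 2*(qy*qz-q0*qx)],
        [2*(qx*qz-q0*qy),         2*(qy*qz+q0*qx),         q0*q0-qx*qx-qy*qy+qz*qz],
    ])

def align(PL, PR):
    """Best-fit rotation/translation mapping PL onto PR (Horn 1987).

    PL, PR: (n, 3) arrays of corresponding points. Returns (R, t) with
    PR ≈ PL @ R.T + t — the closed-form step inside one ICP iteration.
    """
    muL, muR = PL.mean(axis=0), PR.mean(axis=0)
    S = (PL - muL).T @ (PR - muR)           # 3x3 cross-covariance matrix
    tr = np.trace(S)
    D = np.array([S[1, 2] - S[2, 1], S[2, 0] - S[0, 2], S[0, 1] - S[1, 0]])
    # Symmetric 4x4 matrix whose top eigenvector is the optimal quaternion.
    N = np.empty((4, 4))
    N[0, 0] = tr
    N[0, 1:] = D
    N[1:, 0] = D
    N[1:, 1:] = S + S.T - tr * np.eye(3)
    _, V = np.linalg.eigh(N)
    q = V[:, -1]                            # eigenvector of largest eigenvalue
    R = quat_to_rot(q)
    t = muR - R @ muL
    return R, t

# Demo: recover a known rigid transform from corresponding points.
rng = np.random.default_rng(0)
PL = rng.normal(size=(20, 3))
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90° about z
t_true = np.array([1., 2., 3.])
PR = PL @ R_true.T + t_true
R_est, t_est = align(PL, PR)
```

In a full ICP loop, the correspondences would be re-estimated after each alignment (e.g., by nearest-neighbor search) and the step repeated until the registration error converges.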

Table 4. Basic Condition of the Specimen

Based on the registration method described above, the next step is to import all the point cloud data, in ASC format, into the Geomagic software. The software's manifold function module is used for denoising. After that, all scans are registered and aligned into a single spatial coordinate system, and the data are integrated to generate a 3D mesh model, which can then be compressed. For example, the 3D mesh model for specimen N1, obtained by HandySCAN scanning, has about 1.47 million faces. For ease of subsequent processing, it is necessary to compress the 3D data. Compression should simplify the number of polygons while preserving surface details and colors. Figure 1(b) illustrates the model before compression, with about 1,470,000 faces, while Fig. 1(c) shows the model after compression, reduced to about 183,000 faces, yielding a data reduction of about 87.5%. From Fig. 1(c), it can be observed that the model retained its edge features in an excellent state after compression. The point cloud data are then synthesized into a whole, which involves generating the overall surface, also known as surface reconstruction, to obtain a digital model of the physical object.


Fig. 1. (a) Attachment point of specimen N1, (b) Data before compression (1467594 Faces) and (c) Data after compression (183449 Faces)
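The compression reported above was performed with Geomagic's decimation tools. As a rough illustration of how polygon reduction trades faces for fidelity, the sketch below implements vertex clustering, a much simpler technique than the decimation in commercial reverse-engineering tools; the strip mesh and cell size are hypothetical:

```python
import numpy as np

def simplify_vertex_clustering(verts, faces, cell=0.1):
    """Crude polygon reduction by vertex clustering.

    All vertices falling into the same grid cell of size `cell` are merged
    into their centroid, and faces whose corners collapse onto one another
    are dropped. This shows how the face count shrinks while the gross
    shape survives; it does not preserve sharp features the way
    error-driven decimation does.
    """
    keys = np.floor(verts / cell).astype(np.int64)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)                      # normalize shape across numpy versions
    counts = np.bincount(inv).astype(float)
    new_verts = np.zeros((len(uniq), 3))
    for d in range(3):                         # centroid of each cluster
        new_verts[:, d] = np.bincount(inv, weights=verts[:, d]) / counts
    remapped = inv[faces]
    # Keep only faces that still have three distinct corners.
    keep = ((remapped[:, 0] != remapped[:, 1])
            & (remapped[:, 1] != remapped[:, 2])
            & (remapped[:, 0] != remapped[:, 2]))
    return new_verts, remapped[keep]

# Demo: a finely triangulated strip collapses to far fewer faces.
n = 50
xs = np.linspace(0.0, 1.0, n)
verts = np.vstack([
    np.stack([xs, np.zeros(n), np.zeros(n)], axis=1),   # bottom edge
    np.stack([xs, np.ones(n), np.zeros(n)], axis=1),    # top edge
])
faces = np.array([[i, n + i, i + 1] for i in range(n - 1)]
                 + [[i + 1, n + i, n + i + 1] for i in range(n - 1)])
sv, sf = simplify_vertex_clustering(verts, faces, cell=0.4)
print(len(faces), "faces ->", len(sf), "faces")
```

Commercial tools typically use error-driven decimation (e.g., quadric error metrics), which preserves carved edge features far better than clustering at reductions like the 87.5% reported here.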

For specimen N2, a 3D model was obtained using the EVA scanner, as shown in Fig. 2(a). Comparing Fig. 1(b) and Fig. 2(a), numerous green point clouds are visible along the carved edges in Fig. 2(a), highlighting data loss during the scanning process. Referring to Fig. 2(b), wood grain on the specimen’s surface is vaguely visible, whereas Fig. 1(c) lacks surface texture. This difference is due to variations in texture mapping between the two laser scanners. The 3D model file size for processed specimen N1 is 200 kB and can be displayed in a web format, allowing users to quickly browse and control the 3D model. The same processing method was applied to specimens N4 and N5, scanned using the HandySCAN.


Fig. 2. (a) Model of specimen N2 after scanning and (b) Model of N2 specimen after manual processing

Analysis of CT scanning experiments

Because the target objects have similar grayscale values and some targets are interconnected in CT images of wooden specimens, the improved 3D TV-L1 algorithm is suitable. This image processing technique was introduced by Zhao et al. (2021); thus, only a brief overview of the experimental process is provided in this paper.

To validate the effectiveness of this method, image segmentation experiments were conducted using MATLAB on the Windows 7 operating system. CT image data from wooden specimen N3 were selected, and the wooden model portion was extracted, resulting in a sequence of 690 images. As an example, image processing was performed on the 155th original image, as shown in Fig. 3(a).


Fig. 3. (a) 155th image of the original CT specimen, (b) 155th CT image after filtering, and (c) 3DTVL1 segmentation treated result

First, the N3 wooden model image was subjected to denoising and edge-preserving processing through bilateral filtering, resulting in Fig. 3(b). It is evident that the grayscale in the specimen area is uniform, and the tissue boundaries are clear and smooth. Subsequently, applying the 3D TV-L1 algorithm to Fig. 3(b) yielded the separation of the wooden and non-wooden portions of the N3 specimen, as shown in Fig. 3(c).
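The bilateral filtering step can be sketched in plain numpy. The parameters below are illustrative, not the settings used in the experiments, and the sketch covers only the edge-preserving smoothing, not the subsequent 3D TV-L1 segmentation:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=25.0, radius=2):
    """Edge-preserving bilateral filter for a 2D grayscale image.

    Each output pixel is a weighted mean of its neighborhood, where the
    weight combines spatial closeness (sigma_s) and grayscale similarity
    (sigma_r), so smoothing acts within regions but stops at strong edges.
    """
    img = img.astype(float)
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    wsum = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy: radius + dy + img.shape[0],
                             radius + dx: radius + dx + img.shape[1]]
            # Spatial Gaussian times range (grayscale similarity) Gaussian.
            w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            wsum += w
    return out / wsum

# Demo: a hard step edge survives while each side stays flat.
img = np.zeros((7, 8))
img[:, 4:] = 200.0
res = bilateral_filter(img)
```

Because the range weight collapses for pixels whose gray values differ strongly, averaging never crosses the step, which is the behavior that keeps boundaries crisp while flattening the interior gray levels.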


Fig. 4. (a) Surface rendering result and (b) volume rendering result

Image-based 3D reconstruction involves two approaches: surface rendering and volume rendering. Surface rendering processes a series of 2D images through segmentation techniques, such as boundary recognition, to reconstruct the 3D model of the inspected object and present it as a surface projection (Zhao et al. 2021). The virtual display requires only the specimen's appearance model, so surface rendering suffices, as shown in Fig. 4(a). If virtual segmentation or non-destructive testing of the specimen's interior from arbitrary angles and positions is required, volume rendering must be used (Zhao et al. 2021), as shown in Fig. 4(b). 3D reconstruction is not only a necessary step for 3D printing but also facilitates data archiving. The reconstructed model can be imported into the 3DMAX software for virtual scene modeling.

Virtual exhibition based on laser and CT scanning 3D reconstruction

Conventional geometric modeling in 3DMAX requires a great deal of time for complex objects, and precise 3D reconstruction of irregular specimens with intricate details, such as engravings and reliefs, is challenging. To ensure that the exhibition effect is faithful to the actual objects, the 3DMAX tool was chosen to construct the 3D exhibition scenes. In this paper, virtual display production is divided into 3D modeling, map production, 3D roaming, and virtual video synthesis; the specific steps are described below.