Research on Color Matching Model for Wood Panel Furniture Based on a Back Propagation Neural Network
Yushu Chen and Jun Bian *
The wood furniture manufacturing industry continues to move in the direction of customized furniture. The analysis of color collocation is important for developing customized furniture. This study summarizes the common color collocation application areas for porch cabinets. After an appropriate color model was selected, the C# language was used in the Unity graphics engine to build an experimental system that simulates a real scene. Colors were generated randomly in the corresponding areas, and subjects evaluated their harmony. Then, the Python language was used to build a BP neural network model, which was trained using the Hue, Saturation, Value (HSV) representation of the depicted colors and their corresponding scores. Finally, an evaluation model of color collocation harmony was obtained. The model can be used to improve the prediction of color matching in customized furniture, which will promote enterprise productivity and industry development.
DOI: 10.15376/biores.19.2.2383-2403
Keywords: Wood panel furniture; Color matching model; BP neural network; Customized furniture
Contact information: College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing, Jiangsu Province, China 210037; *Corresponding author: pybj@njfu.edu.cn
INTRODUCTION
Wood has always been the most important material in furniture manufacturing, and its unique attributes such as safety, environmental friendliness, and aesthetics endow wooden furniture with irreplaceable advantages. There has been a continuous emergence of research on the production processes (Pakarinen 1999), material innovation (Muhammad Suandi et al. 2022), market expansion (Zhong et al. 2022), consumer preferences (Schuler et al. 2001), and other aspects related to wooden furniture (Guzel 2020). This is partly because advances in processing technology have allowed wood to be developed and used more effectively in the furniture industry. Moreover, wooden furniture still occupies a dominant position in the current furniture material market, making wood the primary choice of furniture material.
Material innovation is a crucial focus in the research related to wooden furniture. The development of new types of materials is a key driver for advancing the wooden furniture industry. This primarily includes the development of environmentally friendly materials, composite coatings (Kim 2013), and composite materials (Song et al. 2016). Wooden composite materials refer to materials with desired characteristics that are bonded or glued together (Maloney 1996). These materials can be combined with advanced industrial prefabrication methods (Malmgren 2014), maximizing production efficiency, and meeting consumers’ diversified demands for wooden furniture. This, in turn, promotes the modernization of wooden furniture development.
In response to the demands of industry development, wooden furniture has gradually shifted towards the direction of customization. The furniture manufacturing industry faces various challenges such as low resource utilization efficiency, environmental pressure, and high labor costs. Smart manufacturing is an inevitable trend for the future of the furniture manufacturing industry. Current research on customized furniture includes aspects related to industrial production models (Wang et al. 2017), cost calculations (Ding et al. 2021), integration with new industries (Kurasova et al. 2021), and the transformation of production processes (Bumgardner and Nicholls 2020). Through methods such as process reengineering, cost control, and technological advancements, the development of customized furniture is promoted (Kodzi Jr et al. 2007). However, the fundamental purpose of customized furniture is to meet the continuously rising personalized demands of consumers. Among these, sensory value is a crucial factor influencing consumer purchasing tendencies (Wind and Rangaswamy 2001). All sensory and associative factors related to furniture contribute to the complete feeling of a piece of furniture.
Vision is the primary source for perceiving external objects, and sensory value significantly impacts shopping satisfaction. Visual perception is the most influential factor among sensory values affecting user purchases (Zhou et al. 2023). For example, when purchasing tiles, vision is the primary sense, followed by touch. Therefore, visual perception is a significant factor influencing customized furniture (Cachero-Martínez and Vázquez-Casielles 2017). Many customized furniture enterprises have started to consider color and texture as the primary core competitive strengths of their products (Walls 2013).
The contradiction between the economies of scale for producers and the personalized demands of consumers has led many custom furniture enterprises to seek an appropriate combination of standardized and non-standardized production (Artacho et al. 2022). Product diversification can be used to probe consumer preferences, but it simultaneously increases both the research and development (R&D) risk and cost for enterprises. Against this backdrop, a significant number of custom furniture enterprises have emerged in the traditional furniture industry, with customization as their primary competitive advantage. As industry competition intensifies and market growth increasingly comes from existing stock rather than new demand, the dimensions of customization have expanded from material, size, and structure to perceptual demands such as color, texture, and surface decoration.
Hence, many custom furniture enterprises have identified color and texture as the primary core competencies of their products. For instance, the internationally renowned custom furniture enterprise Egger launched a total of 566 patterns in 2021, including 162 solid colors and 404 textures, to meet consumers’ customized demands. This demonstrates that traditional custom furniture needs to broaden its product lines and innovate in color categories and combinations to enhance its industry competitiveness. Many domestic custom panel furniture enterprises are also making continuous efforts, focusing on whole-house customization, continuously innovating in color and texture, and gradually moving towards stylized and trendy development.
The characteristics of popular colors are novelty, fashion, and rapid changes, placing higher demands on the timeliness, directionality, and accuracy of color innovation in panel furniture. Therefore, the scientific analysis of color combinations is a crucial starting point for driving the development of custom furniture.
The development history of color models can be traced back to the 19th century. As early as 1861, Maxwell, based on his understanding of three primary colors, created the world’s first color photograph. His exploration and understanding of the theory of color mixing with three primary colors laid the foundation for the birth of color models. In 1905, Munsell developed the “Munsell color system,” which assigned precise values to colors, making it the first widely accepted color management system. This development further facilitated the emergence of the first color model.
As research progressed, the importance of color models as a crucial means of analyzing color continued to increase. This evolution gave rise to various types of color models, making them an indispensable aspect of color research. Research on color models outside China has focused on multi-level differentiation of color models, combining color with other elements (such as texture) to introduce hybrid models, and constructing and standardizing new color models based on existing ones. In China, research on color models has deepened gradually from theory to practical application, including analysis of the characteristics of different color models, development of conversion algorithms between models, and extension to applications in various disciplines.
It is evident that color models are essential tools for systematically analyzing color and provide a crucial perspective for advancing color research. Therefore, in the furniture industry, the application of color models can be used to analyze the patterns of color usage, predict color development trends, and promote the scientific development of color in the furniture sector.
In the context of this research, the study begins by conducting preliminary investigations to identify common color combinations used in the setting of foyer cabinets. Subsequently, by selecting an appropriate color model, the research employs the C# programming language within the Unity graphics engine to simulate real-life scenarios and set up an experimental system. In the corresponding areas, colors are randomly generated, and participants are tasked with evaluating the harmony of these color combinations.
Numerous existing studies have used BP neural networks for appearance evaluation (Wang et al. 2012). However, research on color harmony has not yet explored BP neural network methods. In the realm of color and color recognition methods, the current focus primarily includes approaches based on co-occurrence matrices (Arvis et al. 2004), methods constructing color histograms (Cernadas et al. 2017), and deep learning methods relying on Convolutional Neural Networks (CNN) for analyzing color space information (Simon and Uma 2022), among others. Little research has combined these two approaches, leaving considerable room for further study.
Following this, a back propagation (BP) neural network model was constructed using the Python programming language. The BP neural network model was trained using Hue, Saturation, Value (HSV) values representing color and their corresponding evaluation scores. Ultimately, the research aimed to develop a model for evaluating the harmony of color combinations, thereby enhancing the efficiency of product design and development for enterprises.
EXPERIMENTAL
Technical Approach
HSV color model
The Hue-Saturation-Value (HSV) model is a three-dimensional coordinate system used to describe colors. Hue represents the fundamental color tone, typically represented in a circular manner, covering the entire color spectrum. Saturation indicates the intensity or vividness of a color. Higher saturation implies a more vibrant color, while lower saturation suggests a grayer appearance. Value represents the brightness or darkness of a color, ranging from black to white. The HSV model describes and controls color properties in a more natural way; it is particularly valuable in adjusting and analyzing color schemes. In this study, the HSV model was employed to assess the color combinations and harmony of customized cabinets, providing a valuable tool for further analysis.
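For reference, the conversion from RGB to the HSV representation used in this study can be performed with Python's standard library; the helper name and the example color below are illustrative only.

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Convert 8-bit RGB values (0 to 255) to hue (degrees), saturation and value (%)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0

# Example: a warm wood tone (values are illustrative)
print(rgb_to_hsv(186, 140, 99))   # -> roughly (28.3, 46.8, 72.9)
```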
Unity 3D
Unity3D is a graphics engine widely used in gaming, modeling, and data visualization. The engine supports various model file formats. Unity3D can simulate real-world lighting and materials, allows decorations to be added to models to mimic the real world, and can accurately represent the size and dimensions of furniture. All parameters within the engine are controllable, facilitating variable manipulation and ensuring that experiments are not affected by external factors.
Neural networks typically require large datasets, but collecting a substantial amount of data in the real world can be challenging. In this experiment, for instance, 15,000 data points were collected; gathering these physically would require preparing 15,000 differently colored cabinets, which is practically unfeasible.
Backpropagation neural network
The backpropagation (BP) neural network is a classic artificial neural network model utilized for simulating and solving complex nonlinear problems. The core characteristic of this model is its multi-layered neural structure, comprising an input layer, hidden layers, and an output layer. The learning process of the BP neural network is based on the principle of error back-propagation, where continuous adjustments to connection weights enable the network to approach the desired output. This network structure allows the BP neural network to excel in tasks such as pattern recognition, function approximation, classification, and regression.
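To make the mechanism described above concrete, the following minimal NumPy sketch trains a tiny network by error back-propagation; the layer sizes, learning rate, and random data are purely illustrative and do not represent the network or data used later in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 input features (e.g., two HSV colors) and one target score in [0, 1]
X = rng.random((100, 6))
y = rng.random((100, 1))

# One hidden layer with 8 neurons (illustrative sizes)
W1, b1 = rng.normal(0, 0.1, (6, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(0, 0.1, (8, 1)), np.zeros((1, 1))
lr = 0.05

for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2

    # Backward pass: propagate the output error and adjust the connection weights
    d_out = 2 * (y_hat - y) / len(X)                      # gradient of the MSE loss
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * (1 - h ** 2)                   # tanh derivative
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0, keepdims=True)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```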
In the field of color harmony, there are numerous established models for color harmony assessment. However, there is a paucity of research that combines neural networks with mathematical color spaces for color harmony analysis. In this study, a BP neural network is employed as an analytical tool, coupled with the use of the HSV color space to represent colors. This combination is utilized to simulate and train data pertaining to color combinations and harmonies. By inputting known data and target values into the neural network, the system learns how to generate expected outputs based on the input data. This approach establishes a data-driven method, enhancing the understanding and prediction of color combinations and harmonies for customized cabinets. The application of BP neural networks plays a crucial role in optimizing product design and improving the efficiency of enterprise research and development processes.
Because the data in this experiment have low dimensionality, with an average training time of 10 to 20 minutes, the convergence speed of the model alone is not a sufficient evaluation criterion. Moreover, the BP neural network in this experiment is employed solely to predict scores for color combinations, and color matching is inherently subjective, with opinions varying among individuals. Experience nevertheless suggests that certain color combinations are found aesthetically pleasing or unappealing by a large share of the population. Therefore, an accuracy somewhat below 95% can, in probabilistic terms, still be regarded as nearly error-free for this task.
In this work it is proposed that the model's accuracy on the validation set, partitioned from the dataset, should exceed 85%. At the same time, in the validation against the subjects' ratings, the overlap rate should not fall below 75%. This provides a reasonable threshold for assessing the model's performance while acknowledging the subjective nature of color preferences.
Customized Foyer Cabinet Color Harmony Evaluation Method
Analysis of common color combinations and layouts for customized foyer cabinets
To simulate the real usage scenarios of foyer cabinets, Python was used to crawl customized foyer cabinet images from search engines, restricted to images from the past five years. A total of 861 photos of customized foyer cabinets were collected. The data collection focused on a characteristic of customized furniture: vertical height varies relatively little, whereas horizontal width varies considerably. The number of cabinet doors across the width therefore changes with the cabinet width and carries a degree of uncertainty, whereas the number of door layers along the height remains consistent, making the vertical layout more deterministic.
Based on this consideration, the foyer cabinets were classified by the number of layers of cabinet doors into the following layouts:
Layout 1: This category includes layouts with only 1 layer of cabinet doors (as shown in Fig. 1a). A total of 143 images of this type were collected.
Layout 2: This classification encompasses layouts with only 3 layers of cabinet doors (as shown in Fig. 1b). A total of 431 images of this type were collected.
Layout 3: This category includes layouts that combine 1 layer and 3 layers of cabinet doors (as shown in Fig. 1c). A total of 222 images were collected in this classification.
Fig. 1. (a) The first, (b) second, and (c) third layout of foyer cabinets
In addition, there are other layouts that do not fit into the three main types mentioned above; this category included a total of 65 images. Due to its relatively high randomness and small proportion, this layout was not considered a focus of the study.
The purpose of this classification and data collection method is to ensure that realistic and diverse scenarios can be constructed in this research, so as to more accurately simulate the actual usage of different types of foyer cabinets. This step ensures that the study is sufficiently representative in terms of diversity and practicality, allowing a more in-depth exploration and analysis of the color combinations and harmony of customized foyer cabinets.
Unity Experimental Scene Construction
The Unity engine operates on the principle that everything is a component. In the eyes of the Unity engine, a Unity project consists of two main parts: entities that have a tangible impact on the scene or field of view, referred to as “models,” and various components attached to these models. Some components are built-in Unity features, adjusted by parameter tuning, while others are developed by users, such as the component needed for automatic saving and color changing in this project.
To ensure the uniformity of the data used for the neural network, the software utilizes the EPPlus third-party library. This library simplifies Excel operations compared to commonly used Excel libraries, making it more user-friendly. However, it only operates with Excel files from 2007 onwards, i.e., “*.xlsx” files. Fortunately, Python has convenient libraries for reading such files.
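As a minimal sketch, assuming the file and column names shown, the .xlsx data exported by the Unity program can be read in Python with pandas (which uses openpyxl for .xlsx files):

```python
import pandas as pd

# Read the ratings exported by the Unity program (file and column names are assumed)
df = pd.read_excel("layout1_scores.xlsx")

# Expected layout: R1, G1, B1, R2, G2, B2 for the two colors, plus a Score column
print(df.head())
```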
To maintain the order of the Excel table across multiple software runs, a "roll" variable is introduced to store the next row to be modified in Excel. This variable's value is stored in the data table and updated each time user data are saved. Before each save, the value is read to ensure it is up to date.
Given potential interruptions in program execution during data collection, saving data automatically only upon program closure is not suitable for this project. Because control during data collection essentially rests with the participants, efforts are made to minimize the participants' actions, so adding a dedicated save button to the interface is also less than ideal. Because EPPlus occupies the file only while it is being opened and saved, the program saves the data every time a participant clicks a score button. Given the small codebase of this project, frequently opening the file does not incur excessive time or performance overhead.
The core code of this software is divided into two parts: the first implements the random color generation, and the second saves the RGB values and scores. These two parts are called by five functions, each corresponding to one of the five scoring buttons in the UI. Since Unity runs a main loop at a fixed rate based on physics frames, a user's click on a score button immediately triggers the randomization of a new color and the recording of data. This loop continues until the program terminates.
Based on these technical methods, the experimental program is constructed, as shown in Fig. 2:
1. Build corresponding model scenes for Layout 1, Layout 2, and Layout 3.
2. Add an evaluation scale on the right side of the screen, with ratings of 1 (disharmonious), 2 (somewhat disharmonious), 3 (neutral), 4 (somewhat harmonious), and 5 (harmonious), allowing participants to evaluate the harmony of the displayed color combinations.
3. After the participant clicks the chosen rating, the system automatically records the RGB values of the two color tones and the harmony score in an .xlsx file, then randomly assigns new colors to the cabinets, and the participant proceeds to the next evaluation. This process repeats. According to this design, the program flowchart is shown in Fig. 3.
Fig. 2. UNITY experimental model interface
Fig. 3. Unity3D Program flow chart
BP Neural network evaluation model construction
To ensure that the neural network can generate a reasonable model within a reasonable time, it is necessary to determine the relevant parameters of the selected model before conducting the experiment.
The parameters of a BP neural network can be divided into two main types. The first type comprises parameters set randomly during the initialization of the network, such as the weight matrices and bias vectors used to transmit information; these are commonly referred to simply as "parameters." The second type comprises parameters that must be set manually by the experimenter before initialization; even if they are not explicitly set in the code, the program assigns default values. They include the number of training iterations, the target accuracy, the activation function, the loss function, and so on, and are referred to as "hyperparameters."
Determining normalization and de-normalization methods
To ensure data density and avoid significant differences between data points that may affect training results, it is common practice to normalize the data before inputting it into the neural network’s input layer. Essentially, this process compresses data that is dispersed within a large range (in this study, 0 to 255) into a smaller range, such as 0 to 1 or -1 to 1.
The formula for the min-max normalization method is as follows:
x* = (x − min) / (max − min)     (1)
where x is the original value, x* is the normalized value, min is the minimum value in the dataset, and max is the maximum value in the dataset. Because the numerator can never exceed the denominator, the result of the formula always lies between 0 and 1.
Because the results obtained from the neural network are based on normalized data, for the sake of data readability, it is common to use a de-normalization function to retrieve the original data after obtaining results. In many models, the de-normalization function used is the inverse function of the normalization function. The de-normalization function for the min-max method mentioned above is:
x = x* × (max − min) + min     (2)
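As a concrete illustration of Eqs. 1 and 2, the following minimal Python sketch normalizes 8-bit color components into the 0-to-1 range and recovers the original scale afterwards; the bounds of 0 and 255 follow the range stated above.

```python
def normalize(x, x_min=0.0, x_max=255.0):
    """Min-max normalization (Eq. 1): map x into [0, 1]."""
    return (x - x_min) / (x_max - x_min)

def denormalize(x_star, x_min=0.0, x_max=255.0):
    """Inverse transform (Eq. 2): recover the original scale."""
    return x_star * (x_max - x_min) + x_min

print(normalize(186))        # ~0.729
print(denormalize(0.729))    # ~185.9
```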
Determining the activation function
The activation function is one of the most crucial components of a neural network, as it imparts non-linear characteristics to the network, enabling it to approximate any function. From the principles of neural networks, it is evident that without a non-linear activation function, the output of each layer would be a linear function of the input from the previous layer. A linear activation function would only add a layer of complexity to the linear combination, essentially making it equivalent to having no activation function at all.
Commonly used activation functions in neural networks are categorized as saturating or non-saturating, depending on whether their gradients tend to zero as the input approaches positive or negative infinity. The sigmoid and tanh functions are saturating activation functions, while the ReLU function and its variants are non-saturating.
The formula for the sigmoid function is as follows:
S(x) = 1 / (1 + e^(−x))     (3)
The formula for the tanh function is as follows:
tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))     (4)
The formula for the ReLU function is as follows:
ReLU(x) = max(0, x)     (5)
In this experiment, the HSV color values are positive, and their normalized values are all greater than 0. Under this condition, the ReLU function and its variants behave identically, so among the non-saturating activation functions only the ReLU function is considered here. The sigmoid function, because of its well-known drawbacks, has largely been superseded by the tanh function, but it is included here for comparison.
From a computational perspective, since the derivative of the tanh function is much more complex than that of the ReLU function, theoretically, a neural network using the ReLU function as the activation function trains faster than using the tanh function.
As the derivative of the ReLU function is constantly 1 when the input is greater than zero, it can easily lead to a phenomenon where the gradient is constant, making it difficult for the loss to converge in this specific context.
The design of the controlled experiment is as follows: using the same data, the same neural network structure, the same mean squared error (MSE) loss function, the same learning rate, and different activation functions, compute the average loss every 10 training steps. The training is conducted for two thousand steps, and the average loss is compared.
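The following PyTorch sketch outlines such a controlled comparison; the network structure, learning rate, and placeholder data are illustrative assumptions rather than the exact configuration used in the study.

```python
import torch
import torch.nn as nn

def make_net(activation):
    # Small illustrative structure, identical across runs
    return nn.Sequential(
        nn.Linear(6, 16), activation(),
        nn.Linear(16, 8), activation(),
        nn.Linear(8, 1),
    )

def train(activation, X, y, steps=2000, lr=0.01):
    torch.manual_seed(0)                              # same initialization for every run
    net = make_net(activation)
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                            # same MSE loss for every run
    losses, avg_every_10 = [], []
    for step in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(X), y)
        loss.backward()
        opt.step()
        losses.append(loss.item())
        if (step + 1) % 10 == 0:                      # average loss every 10 training steps
            avg_every_10.append(sum(losses[-10:]) / 10)
    return avg_every_10

X = torch.rand(1000, 6)                               # placeholder data
y = torch.rand(1000, 1)
for act in (nn.ReLU, nn.Sigmoid, nn.Tanh):            # only the activation function changes
    curve = train(act, X, y)
    print(act.__name__, "final average loss:", curve[-1])
```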
The experimental results are shown in Figs. 4 through 6. These observations indicate that, under the same conditions of loss function, training iterations, and parameters such as learning rate, the tanh function performs the best as the activation function.
Fig. 4. The loss curve with the ReLU activation function
Fig. 5. The loss curve with the sigmoid activation function
Fig. 6. The loss curve with the tanh activation function
Loss function
In linear regression problems, the commonly used loss function is the Mean Squared Error (MSE) function, mathematically defined as follows:
MSE = (1/n) Σ (yᵢ − ŷᵢ)²     (6)
A characteristic of this function is that it is easily influenced by outliers. At the same time, its curve is smooth and continuously differentiable, making it well suited to the gradient descent algorithm. Additionally, because of the squaring, when the absolute difference between the actual value and the predicted value is less than 1, the function shrinks the error. Since the data in this project are normalized to the range of 0 to 1, this loss function will necessarily shrink the errors.
Another commonly used loss function is the Mean Absolute Error (MAE) function. The formula for MAE is as follows:
MAE = (1/n) Σ |yᵢ − ŷᵢ|     (7)
This function calculates the mean of the absolute differences between the target values and the predicted values. Its advantage lies in being less sensitive to outliers, and it does not shrink the error.
Comparing the two formulas, MSE averages the squared errors, while MAE averages the absolute errors directly. Since all the data in this experiment are normalized to the range of 0 to 1, squaring the errors reduces their magnitude, so using MSE as the loss function may not faithfully reflect the gap between predicted and true values in this experiment. Moreover, because of its squaring property, MSE is sensitive to outliers, whose typically larger errors are amplified.
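A quick numeric check illustrates this point, using a few illustrative residuals on normalized data:

```python
errors = [0.1, 0.2, 0.05, 0.8]                  # illustrative residuals in the 0-to-1 range

mse = sum(e ** 2 for e in errors) / len(errors)
mae = sum(abs(e) for e in errors) / len(errors)

print(mse)   # 0.173125 -> squaring shrinks the sub-1 errors
print(mae)   # 0.2875   -> absolute errors keep their true magnitude
```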
A comparative experiment was designed using the same data and hyperparameters, only modifying the loss function. The average loss was calculated every 10 training iterations, and the comparison of average loss was made after 2000 iterations.
Figures 7 and 8 show line charts of the predicted values versus the true values when MSE and MAE, respectively, are used as the loss function.
Fig. 7. Line chart of predicted values versus actual values using MSE as the loss function
Fig. 8. Line chart of predicted values versus actual values using MAE as the loss function.
From the experimental results, it is evident that when MSE is used as the loss function, the accuracy is acceptable for the numerous intermediate scores, but performance is poor for the less frequent high and low scores. When MAE is used as the loss function, the predictions for the high and low scores are accurate as well.
Other hyperparameters
Other hyperparameters include the number of training iterations, the learning rate, and so on. There is no established rule for setting these hyperparameters; their optimal values can only be determined through repeated trials. To explore an optimal solution, a controlled experiment was designed in which all hyperparameters other than the learning rate were kept constant. By comparing the average loss after 2000 iterations, an appropriate learning rate could be determined.
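A sketch of such a learning-rate sweep is shown below; the small network, placeholder data, and candidate rates are illustrative assumptions, and only the learning rate changes between runs.

```python
import torch
import torch.nn as nn

def average_loss_after_training(lr, X, y, steps=2000):
    torch.manual_seed(0)                              # keep all other conditions identical
    net = nn.Sequential(nn.Linear(6, 16), nn.Tanh(), nn.Linear(16, 1))
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    losses = []
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(X), y)
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return sum(losses[-10:]) / 10                     # average loss over the final 10 steps

X, y = torch.rand(1000, 6), torch.rand(1000, 1)       # placeholder data
for lr in (0.001, 0.005, 0.01, 0.05, 0.1):            # candidate learning rates (illustrative)
    print(lr, average_loss_after_training(lr, X, y))
```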
In addition, the number of layers and the number of neurons in each layer are crucial hyperparameters that significantly impact the experiment. Unfortunately, there is currently no comprehensive mathematical theory explaining how the number of layers and neurons per layer affect the experimental results. Only an empirical formula is available:
h = √(m + n) + a     (8)
Here, h represents the number of nodes in the hidden layer, m is the number of nodes in the input layer, n is the number of nodes in the output layer, and a is an integer between 1 and 10. After extensive controlled experiments, it was established that this formula is not suitable for the current experiment. The final decision was to adopt a neural network structure with four hidden layers, as detailed in the following sections.
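For reference, evaluating Eq. 8 with this experiment's input and output dimensions (six input nodes and one output node, as described in the next section) gives the range of hidden-node counts that the empirical formula would suggest; this is only the formula's suggestion and, as noted above, it was not adopted.

```python
import math

m, n = 6, 1                          # input and output node counts in this experiment
for a in range(1, 11):
    h = math.sqrt(m + n) + a         # Eq. 8
    print(a, round(h))               # suggests roughly 4 to 13 hidden nodes
```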
RESULTS AND DISCUSSION
Experimental Implementation and Data Analysis
Participant selection and experimental setup
Twenty participants were selected for the experiment. These participants were instructed to rate the color harmony of cabinet layouts in three different scenes constructed in the Unity environment. Ratings ranged from disharmonious (1 point) to somewhat disharmonious (2 points), neutral (3 points), somewhat harmonious (4 points), and harmonious (5 points). To prevent visual fatigue, each participant was limited to 500 evaluations for each layout scene. After completing the evaluations for a particular scene, participants were required to take a break of at least 30 minutes before proceeding to evaluate the color harmony of the next layout scene.
Table 1. Color Coordination Harmony Rating for Layout 1
Table 2. Color Coordination Harmony Rating for Layout 2
Table 3. Color Coordination Harmony Rating for Layout 3
Due to space constraints, it is not feasible to present all 15,000 data points, divided into three groups, in the article. Therefore, only the first 10 data points for each of the three layouts are shown in Tables 1 through 3.
Data processing and results analysis
After completing the experiment, preprocessing was applied to the obtained RGB color values and their corresponding scores. The data then underwent normalization, resulting in the dataset partially presented in Table 4. As the data normalization process is consistent across the three layouts, only the processed data for Layout 1 are presented.
Table 4. Normalized Data for Layout 1
Implementation of the BP Neural Network Based on the Above Experimental Results Using Python
As depicted in Fig. 9, the network consists of four hidden layers. The first hidden layer comprises 64 neurons, followed by the second hidden layer with 32 neurons, the third hidden layer with 16 neurons, and the final hidden layer with 8 neurons. Given that the input data to the network is a 6-dimensional vector, there are six neurons in the input layer. Similarly, as the network is designed to output a scalar for predicting scores, the output layer consists of a single neuron.
These data were input into the constructed BP neural network evaluation model for fitting, aiming to establish a model describing the color harmony of cabinet combinations. The BP neural network was trained using the 15,000 data points collected during the experiment: a random subset of 3,000 data points served as the validation dataset, and the remaining 12,000 data points were used for training. The comparison of predicted and actual values is presented in Fig. 10.
Fig. 9. BP neural network structure diagram
Fig. 10. Line chart of predicted values vs. actual values for the BP neural network after determining hyperparameters
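As a sketch of the structure and training setup described above, the following Python code reproduces the 6-64-32-16-8-1 network with tanh activations, the MAE loss, and the 12,000/3,000 split; the data file name, column layout, optimizer, learning rate, and iteration count are assumptions made for illustration only.

```python
import pandas as pd
import torch
import torch.nn as nn

# Load the normalized data (file name and column layout are assumed):
# six input columns (H, S, V of both colors) followed by one score column
data = pd.read_excel("layout1_normalized.xlsx")
X = torch.tensor(data.iloc[:, :6].values, dtype=torch.float32)
y = torch.tensor(data.iloc[:, 6:7].values, dtype=torch.float32)

# Random 12,000 / 3,000 split into training and validation sets
perm = torch.randperm(len(X))
train_idx, val_idx = perm[:12000], perm[12000:]

# Network structure from Fig. 9: 6 -> 64 -> 32 -> 16 -> 8 -> 1, tanh activations
model = nn.Sequential(
    nn.Linear(6, 64), nn.Tanh(),
    nn.Linear(64, 32), nn.Tanh(),
    nn.Linear(32, 16), nn.Tanh(),
    nn.Linear(16, 8), nn.Tanh(),
    nn.Linear(8, 1),
)

loss_fn = nn.L1Loss()                                  # MAE, as selected above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)    # optimizer and rate assumed

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X[train_idx]), y[train_idx])
    loss.backward()
    opt.step()

with torch.no_grad():
    print("validation MAE:", loss_fn(model(X[val_idx]), y[val_idx]).item())
```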
The computed accuracy of the neural network on the validation dataset, randomly split from the full dataset during initialization, is 87.9%. This suggests that the network model exhibits good generalization capability, providing valuable guidance for color coordination.
Fig. 11. Loss curve of BP neural network during training with determined hyperparameters
Experimental result verification and analysis
In this section, the experimental results are subjected to thorough validation and analysis to assess the performance of the BP neural network model and its effectiveness in predicting the harmony of cabinet color combinations. This step is crucial to confirming the accuracy and reliability of the model, supporting the final research conclusions.
Specific Validation Steps:
1. Prepare Experimental Materials: Randomly select 8 different-colored monochrome wooden board samples, denoted as c1 to c8, black baseboard, inspection lightbox, spectrophotometer, etc.
2. Set Up Experimental Environment: To mitigate the impact of ambient light on the experiment, place the monochrome wooden board samples in the inspection lightbox, set the color temperature to 4500K, and position the wooden board samples on a black baseboard to avoid interference from other colors.
3. Pair the Samples: Randomly pair the 8 different-colored monochrome wooden board samples to create multiple sets of color combinations (as shown in Fig. 12).
4. Participant Selection: Twenty participants were chosen for the experiment. They were instructed to rate the color harmony between two different colors on a scale of 1-5, with 1 indicating low harmony and 5 indicating high harmony. Each color combination was evaluated by 10 participants, and the scoring results were recorded.
The 20 participants selected for this study are all individuals engaged in the furniture industry, with a certain level of experience in furniture color matching. This is beneficial for providing effective data for the simulation. Additionally, to validate the reasonableness of the data, after the model was established, 5 volunteers were randomly selected to participate in the experiment. No biased data were observed, thereby theoretically achieving saturation.
Table 5 records the various color combinations, denoted by combinations of color board labels (e.g., c1+c2 represents the combination of color boards c1 and c2). The final row gives the mode of the ratings for each color combination as the experimental result.
Fig. 12. Color palette reference chart for validation
5. Data Processing: As this model is designed to predict the majority opinion on color combinations, the mode of the rating results is calculated for each set of color combinations to derive the final score (a minimal sketch of this calculation follows step 6).
Fig. 13. Schematic diagram of the positioning for obtaining RGB values from the color palette
6. Acquisition of Sample HSV Values: Using a spectrophotometer, measure the RGB values of each sample at five points on the wooden board (points 1 to 5, as shown in Fig. 13). Convert these RGB values to HSV values, and then average the HSV values of the five points to represent the final color of the sample, as sketched below.
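A minimal sketch of the data processing in steps 5 and 6 is given below; the participant ratings and spectrophotometer readings are illustrative values only, and averaging the hue arithmetically assumes the five readings are close in hue.

```python
import colorsys
from statistics import mode

# Step 5: mode of the ratings for one combination (scores are illustrative)
scores_c1_c2 = [4, 4, 3, 5, 4, 4, 3, 4, 5, 4]
print("final score:", mode(scores_c1_c2))                 # -> 4

# Step 6: convert the spectrophotometer RGB readings to HSV and average the five points
def rgb_to_hsv(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0                # degrees, %, %

readings = [(186, 140, 99), (184, 138, 97), (188, 142, 101),
            (185, 139, 98), (187, 141, 100)]              # illustrative RGB readings
hsv_points = [rgb_to_hsv(*rgb) for rgb in readings]
avg_hsv = tuple(sum(c) / len(hsv_points) for c in zip(*hsv_points))
print("sample HSV:", avg_hsv)
```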
Table 5. Data on the Scoring of Color Palette Combinations by Test Subjects
In Table 5, each row represents the scores given by one participant, and each column corresponds to a different color combination.
Table 6. RGB Values Obtained from Color Picking