惠超/gmp2024_vue

加入 Gitee
与超过 1200万 开发者一起发现、参与优秀开源项目,私有仓库也完全免费 :)
免费加入
文件
该仓库未声明开源许可证文件(LICENSE),使用请关注具体项目描述及其代码上游依赖。
克隆/下载
accepted papers (id, title, abstract, authors)-Accept.csv 60.17 KB
惠超 committed on 2024-05-11 23:31 · update almost
"Please update Title/Abstract/Authors/Reporter
An example for Authors: Jack Zhang, David Scholze",,,,
ID,Title,Abstract,Authors,Reporter
paper1001,M-NeuS: volume rendering based surface reconstruction and material estimation,"Although significant advances have been made in the field of multi-view 3D reconstruction using implicit neural field-based methods, existing reconstruction methods overlook the estimation of material information (e.g. the base color, albedo, roughness, and metallic) during the learning process. In this paper, we propose a novel differentiable rendering framework, named material NeuS (M-NeuS), for simultaneously achieving precise surface reconstruction and competitive material estimation. For surface reconstruction, we perform multi-view geometry optimization by proposing an enhanced-low-to-high frequency encoding registration strategy (EFERS) and a second-order interpolated signed distance function (SI-SDF) for precise details and outline reconstruction. For material estimation, inspired by NeuS, we first propose a volume-rendering-based material estimation strategy (VMES) to estimate the base color, albedo, roughness, and metallic accurately. Then, unlike most material estimation methods, which need ground-truth geometric priors, we use the geometry information reconstructed in the surface reconstruction stage and the directions of incidence from different viewpoints to model a neural light field, which can extract the lighting information from image observations. Next, the extracted lighting and the estimated base color, albedo, roughness, and metallic are optimized by the physics-based rendering equation. Extensive experiments demonstrate that our M-NeuS not only reconstructs more precise geometric surfaces than existing state-of-the-art (SOTA) reconstruction methods but also estimates competitive material information: the base color, albedo, roughness, and metallic.","Shu Tang, Jiabin He, Shuli Yang, Xu Gong, Hongxing Qin",Jiabin He
paper1003,D3Former: Jointly Learning Repeatable Dense Detectors and Feature-Enhanced Descriptors via Saliency-guided Transformer,"Establishing accurate and representative matches is a crucial step in addressing the point cloud registration problem. A commonly employed approach involves detecting keypoints with salient geometric features and subsequently mapping these keypoints from one frame of the point cloud to another. However, methods within this category are hampered by the repeatability of the sampled keypoints. In this paper, we introduce a saliency-guided transformer, referred to as D3Former, which entails the joint learning of repeatable Dense Detectors and feature-enhanced Descriptors. The model comprises a Feature Enhancement Descriptor Learning (FEDL) module and a Repetitive Keypoints Detector Learning (RKDL) module. The FEDL module utilizes a region attention mechanism to enhance feature distinctiveness, while the RKDL module focuses on detecting repeatable keypoints to enhance matching capabilities. Extensive experimental results on challenging indoor and outdoor benchmarks demonstrate that our proposed method consistently outperforms state-of-the-art point cloud matching methods. Notably, tests on 3DLoMatch, even with a low overlap ratio, show that our method consistently outperforms recently published approaches such as RoReg and RoITr. For instance, with the number of extracted keypoints reduced to 250, the registration recall scores for RoReg, RoITr, and our method are 64.3%, 73.6%, and 76.4%, respectively.","Junjie Gao, Pengfei Wang, Qiujie Dong, Qiong Zeng, Shiqing Xin, Caiming Zhang",Junjie Gao
paper1004,Feature-preserving Shrink Wrapping with Adaptive Alpha,"Recent advancements in shrink-wrapping-based mesh approximation have shown tremendous advantages for non-manifold defective meshes. However, these methods perform unsatisfactorily in maintaining the regions with sharp features and rich details of the input mesh. We propose an adaptive shrink-wrapping method based on the recent alpha wrapping technique, offering improved feature preservation while handling defective input meshes. The proposed approach comprises three main components. First, we compute a new sizing field with the capability to assess the discretization density of non-manifold defective meshes. Then, we generate a mesh feature skeleton by projecting input feature lines onto the offset surface, ensuring the preservation of sharp features. Finally, an adaptive wrapping approach based on normal projection is applied to preserve the regions with sharp features and rich details simultaneously. By conducting tests on datasets including Thingi10k, ABC, and GrabCAD, we demonstrate that our method exhibits significant improvements in mesh fidelity compared to the alpha wrapping method, while maintaining the advantage of manifold property inherited from shrink-wrapping methods.","Jiayi Dai, Yiqun Wang, Dong-Ming Yan",Jiayi Dai
paper1005,Computing Nodes for Plane Data Points by Constructing Cubic Polynomial with Constraints,"To construct a parametric polynomial curve for interpolating a set of data points, the interpolation accuracy and shape of the constructed curve are influenced by two principal factors: parameterization of the data points (computing a node for each data point) and the interpolation method. We propose a new method of computing nodes for a set of data points. Through this method, the functional relationship between data points and corresponding nodes in cubic polynomials is established. Using this functional relationship, a functional cubic polynomial with one degree of freedom can pass through four adjacent data points. The degree of freedom can be represented by two adjacent node intervals, is affine-invariant, and can be obtained by minimizing the cubic terms of the cubic polynomial. Since each node is computed in a different node space, a method for constructing a quadratic curve is presented, which transforms all the quadratic curves into a unified form to compute nodes. Nodes computed using the new method exhibit quadratic polynomial precision, i.e., if the data points are taken from a quadratic polynomial F(t) and the nodes computed by the new method are used to construct an interpolation curve, an interpolation method that reproduces quadratic polynomials gives F(t). The primary advantage of the proposed method is that the constructed curve has a shape described by the data points. Another advantage is that the nodes computed by the method are affine-invariant. The experimental results indicate that the curve constructed with the nodes from the new method has better interpolation accuracy and shape than those constructed using other methods.","Hua Wang, Fan Zhang",Fan Zhang
paper1006,Asynchronous progressive iterative approximation method for least squares fitting,"For large data fitting, least squares progressive iterative approximation (LSPIA) methods have been proposed by Lin et al. (SIAM Journal on Scientific Computing, 2013, 35(6):A3052-A3068) and Deng et al. (Computer-Aided Design, 2014, 47:32-44), in which a constant step size is used. In this paper, we further accelerate the LSPIA method by means of a Chebyshev semi-iterative scheme and present an asynchronous LSPIA (denoted ALSPIA) method. The control points in ALSPIA are updated using an extrapolated variant in which an adaptive step size is chosen according to the roots of the Chebyshev polynomial. Our convergence analysis shows that ALSPIA is faster than the original LSPIA method in both singular and non-singular least squares fitting cases. Numerical examples show that the proposed algorithm is feasible and effective.","Chengzhi Liu, Nian-Ci Wu, Juncheng Li, Lijuan Hu",Chengzhi Liu
paper1007,Anisotropic Triangular Meshing using Metric-adapted Embeddings,"We propose a novel method to generate high-quality triangular meshes with specified anisotropy. Central to our algorithm is the use of metric-adapted embeddings to convert the anisotropic meshing problem into an isotropic meshing problem with constant density. Moreover, the orientation of the input Riemannian metric forms a field, enabling us to use field-based meshing techniques to penalize obtuse angles and improve regularity. To achieve such metric-adapted embeddings, we use cone singularities generated to adapt to the input Riemannian metric. We demonstrate the feasibility and effectiveness of our method on various models. Compared to other state-of-the-art methods, our method produces meshes of much higher quality.","Yueqing Dai, Jian-Ping Su, Xiao-Ming Fu",Yueqing Dai
paper1011,Flipping-based Iterative Surface Reconstruction for Unoriented Points,"In this paper, we propose a novel surface reconstruction method for unoriented points by establishing and solving a nonlinear equation system. By treating normals as unknown parameters and imposing the conditions that the implicit field is constant and its gradients are parallel to the normals on the input point cloud, we establish a nonlinear equation system involving the oriented normals. To simplify the system, we transform it into a 0-1 integer programming problem solely focusing on orientation by incorporating inconsistent oriented normal information through PCA. We solve the simplified problem using flipping-based iterative algorithms and propose two novel criteria for flipping based on theoretical analysis. Extensive experiments on renowned datasets demonstrate that our flipping-based method with wavelet surface reconstruction achieves state-of-the-art results in orientation and reconstruction. Furthermore, it exhibits linear computational and storage complexity by leveraging the orthogonality and compact support properties of wavelet bases.","Yueji Ma, Yanzun Meng, Dong Xiao, Zuoqiang Shi, Bin Wang",Yueji Ma
paper1012,Evolutionary Multi-objective High-order Tetrahedral Mesh Optimization,"High-order mesh optimization has many goals, such as improving smoothness, reducing approximation error, and improving mesh quality. The previous methods do not optimize these objectives together, resulting in suboptimal results. To this end, we propose a multi-objective optimization method for high-order meshes. Central to our algorithm is using the multi-objective genetic algorithm (MOGA) to adapt to the multiple optimization objectives. Specifically, we optimize each control point one by one, where the MOGA is applied. We demonstrate our method's feasibility and effectiveness over various models. Compared to other state-of-the-art methods, our method achieves a favorable trade-off between multiple objectives.","Yang Ji, Shibo Liu, Jia-Peng Guo, Jian-Ping Su, Xiao-Ming Fu",Yang Ji
paper1014,High-order Shape Interpolation,"We propose a simple yet effective method to interpolate high-order meshes. Given two manifold high-order triangular (or tetrahedral) meshes with identical connectivity, our goal is to generate a continuum of curved shapes with as little distortion as possible in the mapping from the source mesh to the interpolated mesh. Our algorithm contains two steps: (1) linearly blend the pullback metric of the identity mapping and the input mapping between two B\'{e}zier elements on a set of sampling points; (2) project the interpolated metric pullback into the metric space between B\'{e}zier elements using the Newton method for nonlinear optimization. We demonstrate the feasibility and practicability of the method for high-order meshes through extensive experiments in both 2D and 3D.","Zhaobin Huang, Shibo Liu, Xiao-Ming Fu",Zhaobin Huang
paper1017,Two novel Iterative Approaches for Improved LSPIA Convergence,"This paper introduces two enhanced variants of the least squares progressive-iterative approximation (LSPIA) by leveraging momentum techniques. Specifically, we present LSPIA with Polyak's heavy ball momentum (PmLSPIA) and LSPIA with Nesterov's momentum (NmLSPIA). These methods preserve the directions of preceding steps during iterations. The introduction of momentum enhances the determination of the search direction, leading to a notable acceleration in convergence rates. The geometric interpretations of PmLSPIA and NmLSPIA are elucidated, providing insights into the underlying principles of these accelerated algorithms. Rigorous convergence analyses are conducted, revealing that both PmLSPIA and NmLSPIA exhibit faster convergence compared to the conventional LSPIA method. Numerical results further validate the efficacy of the proposed algorithms in significantly accelerating the convergence of LSPIA.","Chengzhi Liu, Nian-Ci Wu",Chengzhi Liu
paper1021,Fast Parameterization of Planar Domains for Isogeometric Analysis via Generalization of Deep Neural Network,"One prominent step in isogeometric analysis (IGA) is known as domain parameterization, that is, finding a parametric spline representation for a computational domain. Typically, domain parameterization is divided into two separate steps: identifying an appropriate boundary correspondence and then parameterizing the interior region. However, this separation significantly degrades the quality of the parameterization. To attain a high-quality parameterization, it is necessary to optimize both the boundary correspondence and the interior mapping simultaneously, which is referred to as integral parameterization. In prior research, an integral parameterization approach for planar domains based on neural networks was introduced. One limitation of this approach is that the neural network has no generalization ability, that is, a network has to be trained to obtain a parameterization for each specific computational domain. In this article, we propose an efficient enhancement over this work and train a network that has the capacity of generalization---once the network is trained, a parameterization can be immediately obtained for each specific computational domain by evaluating the network. The new network speeds up the parameterization process by two orders of magnitude. We evaluate the performance of the new network on the MPEG data set and a self-designed data set, and experimental results demonstrate the superiority of our algorithm compared to state-of-the-art parameterization methods.","Zheng Zhan, Wenping Wang, Falai Chen",Zheng Zhan
paper1023,A new stable method to compute mean value coordinates,"The generalization of barycentric coordinates to arbitrary simple polygons with more than three vertices has been a subject of study for a long time. Among the different constructions proposed, mean value coordinates have emerged as a popular choice, particularly due to their suitability for the non-convex setting. Since their introduction, they have found applications in numerous fields, and several equivalent formulas for their evaluation have been presented in the literature. However, so far, there has been no study regarding their numerical stability. In this paper, we aim to investigate the numerical stability of the algorithms that compute mean value coordinates. We empirically show that all the known methods exhibit instability in some regions of the domain. To address this problem, we introduce a new formula for computing mean value coordinates, explain how to implement it, and formally prove that our new algorithm provides a stable evaluation of mean value coordinates. We validate our results through numerical experiments.","Chiara Fuda, Kai Hormann",Chiara Fuda
paper1024,Skeleton based tetrahedralization of surface meshes,"We propose a new method for generating tetrahedralizations of 3D surface meshes. The method builds upon a segmentation of the mesh that forms a rooted skeleton structure. Each segment in the structure is fitted with a stamp - a predefined basic shape with a regular and well-defined topology. After molding each stamp to the shape of the segment it is assigned to, we connect the segments with a layer of tetrahedra using a new approach to stitching two triangulated surfaces with tetrahedra. Our method not only generates a tetrahedralization with regular topology, mimicking a bone-like structure with tissue grouped around it, but also achieves running times that allow for real-time usage. The running time of the method is closely correlated with the density of the input mesh, which allows controlling the expected time by decreasing the vertex count while still preserving the general shape of the object. The algorithm is also very forgiving of typical mesh modeling errors such as duplicate vertices, self-intersections, and holes, since the main bulk of our method treats the input 3D mesh as a point cloud.","Aleksander Płocharski, Joanna Porter-Sobieraj, Andrzej Lamecki, Tomasz Herman, Andrzej Uszakow",Aleksander Płocharski
paper1027,VQ-CAD: Computer-Aided Design Model Generation with Vector Quantized Diffusion,"Computer-Aided Design (CAD) software remains a pivotal tool in modern engineering and manufacturing, driving the design of a diverse range of products. In this work, we introduce VQ-CAD, the first CAD generation model based on Denoising Diffusion Probabilistic Models. This model utilizes a vector quantized diffusion model, employing multiple hierarchical codebooks generated through VQ-VAE. This integration not only offers a novel perspective on CAD model generation but also achieves state-of-the-art performance in 3D CAD model creation in a fully automatic fashion. Our model is able to recognize and incorporate implicit design constraints by simply forgoing traditional data augmentation. Furthermore, by melding our approach with CLIP, we significantly simplify the existing design process, directly generate CAD command sequences from initial design concepts represented by text or sketches, capture design intentions, and ensure designs adhere to implicit constraints.","Hanxiao Wang, Mingyang Zhao, Yiqun Wang, Weize Quan, Dong-Ming Yan",Hanxiao Wang
paper1034,Planar quartic G2 Hermite interpolation for curve modeling,"We study planar quartic $G^2$ Hermite interpolation, that is, a quartic polynomial curve interpolating two planar data points along with the associated tangent directions and curvatures. When the two specified tangent directions are not parallel, a quartic B\'{e}zier curve interpolating such $G^2$ data is constructed using two geometrically meaningful shape parameters which denote the magnitudes of end tangent vectors. We then determine the two parameters by minimizing a quadratic energy functional or curvature variation energy. When the two specified tangent directions are parallel, a quartic $G^2$ interpolating curve exists only when an additional condition on the $G^2$ data is satisfied, and we propose a modified optimization approach. Finally, we demonstrate the achievable quality with a range of examples and an application to curve modeling, which allows locally creating $G^2$-smooth complex shapes. Compared with the existing quartic interpolation scheme, our method can generate more satisfactory results in terms of approximation accuracy and curvature profiles.","Angyan Li, Lizheng Lu, Kesheng Wang",Angyan Li
paper1035,Interactive Reverse Engineering of CAD Models,"Reverse engineering Computer-Aided Design (CAD) models based on the original geometry is a valuable and challenging research problem that has numerous applications across various tasks. However, previous approaches have often relied on excessive manual interaction, leading to limitations in reconstruction speed. To mitigate this issue, in this study, we approach the reconstruction of a CAD model by sequentially constructing geometric primitives (such as vertices, edges, loops, and faces) and performing Boolean operations on the generated CAD modules. We address the complex reconstruction problem in four main steps. Firstly, we use a plane to cut the input mesh model and attain a loop cutting line, ensuring accurate normals. Secondly, the cutting line is automatically fitted to edges using primitive information and connected to form a primitive loop. This eliminates the need for time-consuming manual selection of each endpoint and significantly accelerates the reconstruction process. Subsequently, we construct the loop of primitives as a chunked CAD model through a series of CAD modeling operations, including extruding, lofting, revolving, and sweeping. Our approach incorporates an automatic height detection mechanism to minimize errors that may arise from manual designation of the extrusion height. Finally, by merging Boolean operations, these CAD models are assembled together to closely approximate the target geometry. We conduct a comprehensive evaluation of our algorithm using a diverse range of CAD models from both the Thingi10K dataset and real-world scans. The results validate that our method consistently delivers accurate, efficient, and robust reconstruction outcomes while minimizing the need for manual interactions. Furthermore, our approach demonstrates superior performance compared to competing methods, especially when applied to intricate geometries.","Zhenyu Zhang, Mingyang Zhao, Zeyu Shen, Yuqing Wang, Xiaohong Jia, Dong-Ming Yan",Zhenyu Zhang
paper1040,An attention enhanced dual graph neural network for mesh denoising,"Mesh denoising is a crucial research topic in geometric processing, as it is widely used in reverse engineering and 3D modeling. The main objective of denoising is to eliminate noise while preserving sharp features. In this paper, we propose a novel denoising method called Attention Enhanced Dual Mesh Denoise (ADMD), which is based on a graph neural network and an attention mechanism. ADMD simulates the two-stage denoising method by using a new training strategy and a total variation (TV) regular term to enhance feature retention. Our experiments demonstrate that ADMD achieves competitive or superior results compared to state-of-the-art methods on noisy CAD models, non-CAD models, and real-scanned data. Moreover, our method can effectively handle large mesh models with noise at different scales and prevent model shrinking after mesh denoising.","Mengxing Wang, Yi-Fei Feng, Bowen Lyu, Li-Yong Shen, Chun-Ming Yuan",Mengxing Wang
paper1042,Point-StyleGAN: Multi-scale Point Cloud Synthesis with Style,"A point cloud is a set of discrete surface samples. As the simplest 3D representation, it is widely used in 3D reconstruction and perception. Yet developing a generative model for point clouds remains challenging due to the sparsity and irregularity of points. Inspired by StyleGAN, the forefront image generation model, this paper presents Point-StyleGAN, a generator adapted from the StyleGAN2 architecture for point cloud synthesis. Specifically, we replace all the 2D convolutions with 1D ones and introduce a series of multi-resolution discriminators to overcome the under-constrained issue caused by the sparsity of points. We further add a metric learning-based loss to improve generation diversity. Besides the generation task, we show several applications based on GAN inversion, among which an inversion encoder Point-pSp is designed and applied to point cloud reconstruction, completion, and interpolation. To the best of our knowledge, Point-pSp is the first inversion encoder for point cloud embedding in the latent space of GANs. The comparisons to prior work and the applications of GAN inversion demonstrate the advantages of our method. We believe the potential brought by the Point-StyleGAN architecture will inspire massive follow-up works in the future.","Yang Zhou, Cheng Xu, Zhiqiang Lin, Xinwei He, Hui Huang",Yang Zhou
paper1044,An adaptive collocation method on implicit domains using weighted extended THB-splines,"Implicit representations possess many merits when dealing with geometries with certain properties, such as small holes, reentrant corners and other complex details. Truncated hierarchical B-splines (THB-splines) have recently emerged as a novel tool in many fields, including design and analysis, due to their local refinement ability. In this paper, we propose an adaptive collocation method with weighted extended THB-splines (WETHB-splines) on implicit domains. We modify the classification strategy for the WETHB-basis, and the centers of the supports of inner THB-splines on each level are chosen as collocation points. We also use weighted collocation in the transition regions, in order to enrich information concerning the hierarchical basis. In contrast to the traditional WEB-collocation method, the proposed approach possesses a much higher convergence rate. To show the efficiency and superiority of the proposed method, numerical examples in two and three dimensions are performed to solve Poisson’s equations.","Jingjing Yang, Chun-gang Zhu",Jingjing Yang
paper1050,Automatic Tooth Arrangement with Joint Features of Point and Mesh Representations via Diffusion Probabilistic Models,"Tooth arrangement is a crucial step in orthodontics treatment, in which aligning teeth could improve overall well-being, enhance facial aesthetics, and boost self-confidence. To improve the efficiency of tooth arrangement and minimize errors associated with unreasonable designs by inexperienced practitioners, some deep learning-based tooth arrangement methods have been proposed. Currently, most existing approaches employ MLPs to model the nonlinear relationship between tooth features and transformation matrices to achieve tooth arrangement automatically. However, the limited datasets (which to our knowledge, have not been made public) collected from clinical practice constrain the applicability of existing methods, making them inadequate for addressing diverse malocclusion issues. To address this challenge, we propose a general tooth arrangement neural network based on the diffusion probabilistic model. Conditioned on the features extracted from the dental model, the diffusion probabilistic model can learn the distribution of teeth transformation matrices from malocclusion to normal occlusion by gradually denoising from a random variable, thus more adeptly managing real orthodontic data. To take full advantage of effective features, we exploit both mesh and point cloud representations by designing different encoding networks to extract the tooth (local) and jaw (global) features, respectively. In addition to traditional metrics ADD, PA-ADD, CSA, and ME_rot, we propose a new evaluation metric based on dental arch curves to judge whether the generated teeth meet the individual normal occlusion. Experimental results demonstrate that our proposed method achieves state-of-the-art tooth alignment results and satisfactory occlusal relationships between dental arches. We will publish the code and dataset.","Changsong Lei, Mengfei Xia, Shaofeng Wang, Yaqian Liang, Ran Yi, Yu-Hui Wen, Yong-Jin Liu",Yaqian Liang
paper1070,Real-time Collision Detection between General SDFs,"Signed Distance Fields (SDFs) have found widespread utility in collision detection applications due to their superior query efficiency and ability to represent continuous geometries. However, little attention has been paid to calculating the intersection of two arbitrary SDFs. In this paper, we propose a novel, accurate, and real-time approach for SDF-based collision detection between two solids, both represented as SDFs. Our primary strategy entails using interval calculations and the SDF gradient to guide the search for intersection points within the geometry. For arbitrary objects, we take inspiration from existing collision detection pipelines and segment the two SDFs into multiple parts with bounding volumes. Once potential collisions between two parts are identified, our method quickly computes comprehensive intersection information such as penetration depth, contact points, and contact normals. Our method is general in that it accepts both continuous and discrete SDF representations. Experiment results show that our method can detect collisions in high-precision models in real time, highlighting its potential for a wide range of applications in computer graphics and virtual reality.","Pengfei Liu, Yuqing Zhang, He Wang, Milo K. Yip, Elvis S. Liu, Xiaogang Jin",Pengfei Liu
paper1081,Automated Placement of Dental Attachments Based on Orthodontic Pathways,"The aesthetic appeal and removability of clear aligners have led to their widespread popularity in orthodontic treatments. Dental attachments significantly contribute to shortening treatment duration and enhancing orthodontic outcomes. The automation of tailor-made dental attachments for individual teeth plays a crucial role in the field of orthodontics. This is because they enable more precise control over the forces applied, thereby effectively facilitating tooth movement. This study introduces an automated algorithm that generates dental attachments based on the orthodontic path. The algorithm automatically selects and places the appropriate type of attachment according to the magnitude of rotation and translation of teeth during orthodontic procedures. It adjusts the position and posture of the attachments to fit the teeth accurately. To validate the effectiveness of automatically placed attachments in guiding teeth along the predetermined path, this study employs finite element analysis to simulate the impact of attachments on teeth. Comparative analyses between the automated method and traditional manual techniques show that the proposed algorithm significantly enhances the precision and efficiency of attachment placement. Additionally, finite element simulations confirm the feasibility and effectiveness of this approach in clinical orthodontic applications, providing a novel technical pathway for automating attachment placement in orthodontic treatments and offering significant practical value for personalized and efficient orthodontic care.","Yiheng Lv, Guangshun Wei, Yeying Fan, Long Ma, Yuanfeng Zhou",Yiheng Lv
paper1084,A Task-driven Network for Mesh Classification and Semantic Part Segmentation,"Given the rapid advancements in geometric deep-learning techniques, there has been a dedicated effort to create mesh-based convolutional operators that act as a link between irregular mesh structures and widely adopted backbone networks. Despite the numerous advantages of Convolutional Neural Networks (CNNs) over Multi-Layer Perceptrons (MLPs), mesh-oriented CNNs often require intricate network architectures to tackle irregularities of a triangular mesh. These architectures not only demand that the mesh be manifold and watertight but also impose constraints on the abundance of training samples. In this paper, we note that for specific tasks such as mesh classification and semantic part segmentation, large-scale shape features play a pivotal role. This is in contrast to the realm of shape correspondence, where a comprehensive understanding of 3D shapes necessitates considering both local and global characteristics. Inspired by this key observation, we introduce a task-driven neural network architecture that seamlessly operates in an end-to-end fashion. Our method takes as input mesh vertices equipped with the heat kernel signature (HKS) and dihedral angles between adjacent faces. Notably, we replace the conventional convolutional module, commonly found in ResNet architectures, with MLPs and incorporate Layer Normalization (LN) to facilitate layer-wise normalization. Our approach, with a seemingly straightforward network architecture, demonstrates an accuracy advantage. It exhibits a marginal 0.1% improvement in the mesh classification task and a substantial 1.8% enhancement in the mesh part segmentation task compared to state-of-the-art methodologies. Moreover, as the number of training samples decreases to 1/50 or even 1/100, the accuracy advantage of our approach becomes more pronounced. 
In summary, our convolution-free network is tailored for specific tasks relying on large-scale shape features and excels in situations with a limited number of training samples, setting itself apart from state-of-the-art methodologies.","Qiujie Dong, Xiaoran Gong, Rui Xu, Zixiong Wang, Junjie Gao, Shuangmin Chen, Shiqing Xin, Changhe Tu, Wenping Wang",Qiujie Dong
paper1087,Text-Image Conditioned Diffusion for Consistent Text-to-3D Generation,"By lifting the pre-trained 2D diffusion models into Neural Radiance Fields (NeRFs), text-to-3D generation methods have made great progress. Many state-of-the-art approaches usually apply score distillation sampling (SDS) to optimize the NeRF representations, which supervises the NeRF optimization with pre-trained text-conditioned 2D diffusion models such as Imagen. However, the supervision signal provided by such pre-trained diffusion models only depends on text prompts and does not constrain the multi-view consistency. To inject the cross-view consistency into diffusion priors, some recent works finetune the 2D diffusion model with multi-view data, but still lack fine-grained view coherence. To tackle this challenge, we incorporate multi-view image conditions into the supervision signal of NeRF optimization, which explicitly enforces fine-grained view consistency. With such stronger supervision, our proposed text-to-3D method effectively mitigates the generation of floaters (due to excessive densities) and completely empty spaces (due to insufficient densities). Our quantitative evaluations on the T$^3$Bench dataset demonstrate that our method achieves state-of-the-art performance over existing text-to-3D methods. We will make the code publicly available.","Yuze He, Yushi Bai, Matthieu Lin, Jenny Sheng, Yubin Hu, Qi Wang, Yu-Hui Wen, Yong-Jin Liu",Yuze He
paper1088,Generated Realistic Noise and Rotation-Equivariant Models for Data-Driven Mesh Denoising,"3D mesh denoising is a crucial pre-processing step in many graphics applications. However, existing denoising models trained on data with additive white noise struggle to effectively handle noise in captured 3D meshes with rich features, resulting in the loss of fine geometric details during the denoising process. This paper presents a rotation-Equivariant model-based Mesh Denoising (EMD) method and a Realistic Mesh Noise Generation (RMNG) model to address this issue. Our EMD method leverages rotation-equivariant features and self-attention weights of geodesic patches to achieve state-of-the-art (SOTA) results in accurately preserving the underlying features while removing mesh noise. The RMNG model, based on the Generative Adversarial Network (GAN) architecture, generates massive amounts of realistic paired noisy and noiseless mesh data for training data-driven mesh denoising models, significantly benefiting real-world denoising tasks. To address the smoothing degradation and loss of sharp edges commonly observed in captured meshes, we further introduce varying levels of Laplacian smoothing to input meshes during paired training data generation, endowing the trained denoising model with feature recovery capabilities. Experimental results demonstrate the superior performance of our proposed method in preserving fine-grained features while removing noise on real-world captured meshes.","Sipeng Yang, Wenhui Ren, Xiwen Zeng, Qingchuan Zhu, Hongbo Fu, Kaijun Fan, Lei Yang, Jingping Yu, Qilong Kou, Xiaogang Jin",Sipeng Yang
paper1090,High-precision teeth reconstruction based on automatic multimodal fusion with CBCT and IOS,"In digital orthodontic treatment, the high-precision reconstruction of complete teeth, encompassing both the crown and the actual root, plays a pivotal role. Current mainstream techniques, prioritizing the high resolution of intraoral scanned models (IOS), are confined to using IOS data for orthodontic treatments. However, the lack of root information in the IOS data may lead to complications such as dehiscence. In contrast, Cone Beam Computed Tomography (CBCT) data encompasses comprehensive dental information. Nonetheless, the radiative nature of CBCT scans renders them unsuitable for repeated examinations in a short time, and their lower scanning precision leads to suboptimal segmentation outcomes, hindering the accurate representation of dental occlusal relationships. Therefore, to fully utilize the complementarity between multimodal dental data, we propose a method for high-precision 3D teeth model reconstruction based on IOS and CBCT, which mainly consists of two parts: global rigid registration and local nonrigid registration. We extract the prior information of dental arch curves for coarse alignment to provide a good initial position for the Iterative Closest Point (ICP) algorithm, and design a conformal parameterization method for a single tooth to effectively obtain the point correspondence between IOS and CBCT crowns. The rough crown of the CBCT gradually fits towards the IOS through iterative optimization of nonrigid registration. The experimental results show that our method robustly fuses the advantageous features of IOS and CBCT, and the high-precision 3D teeth model reconstructed by our method can be effectively used in clinical orthodontic treatment.","Zhiyuan Ren, Long Ma, Minfeng Xu, Guangshun Wei, Shaojie Zhuang, Yuanfeng Zhou",Zhiyuan Ren
paper1097,Alternating Size Field Optimizing and Parameterization Domain CAD Model Remeshing,"Tessellating CAD models into triangular meshes is a long-standing problem. A size field is widely used to accommodate various requirements in remeshing, and it is usually discretized and optimized on a prescribed background mesh and kept constant in the subsequent remeshing procedure. Instead, we propose optimizing the size field on the current mesh, then using it as guidance to generate the next mesh. This simple strategy eliminates the need for building a proper background mesh and greatly simplifies the size field query. For better quality and convergence, we also propose a geodesic-distance-based initialization and an adaptive re-weighting strategy in size field optimization. Similar to existing methods, we also view the remeshing of a CAD model as the remeshing of its parameterization domain, which guarantees that all the vertices lie exactly on the CAD surfaces and eliminates the need for costly and error-prone projection operations. However, for vertex smoothing, which is important for mesh quality, we carefully optimize the vertex's location in the parameterization domain for the optimal Delaunay triangulation condition, along with a high-order cubature scheme for better accuracy. Experiments show that our method is fast, accurate and controllable. Compared with state-of-the-art methods, our approach is fast and usually generates meshes with smaller Hausdorff error and a larger minimal angle at a comparable number of triangles.","Shiyi Wang, Bochun Yang, Hujun Bao, Jin Huang",Shiyi Wang
paper1108,Towards Geodesic Ridge Curve for Region-wise Linear Representation of Geodesic Distance Field,"This paper addresses the challenge of representing geodesic distance fields on triangular meshes in a piecewise linear manner. Unlike general scalar fields, which often assume piecewise linear changes within each triangle, geodesic distance fields pose a unique difficulty due to their non-differentiability at ridge points, where multiple shortest paths may exist. An interesting observation is that the geodesic distance field exhibits an approximately linear change if each triangle is further decomposed into sub-regions by the ridge curve. However, computing the geodesic ridge curve is notoriously difficult. Even when using exact algorithms to infer the ridge curve, desirable results may not be achieved, akin to the well-known medial-axis problem. In this paper, we propose a two-stage algorithm. In the first stage, we employ Dijkstra’s algorithm to cut the surface open along the dual structure of the shortest path tree. This operation allows us to extend the surface outward (resembling a double cover but with distinctions), enabling the discovery of longer geodesic paths in the extended surface. In the second stage, any mature geodesic solver, whether exact or approximate, can be employed to predict the real ridge curve. Assuming the fast marching method is used as the solver, despite its limitation of having a single marching direction in a triangle, our extended surface contains multiple copies of each triangle, allowing various geodesic paths to enter the triangle and facilitating ridge curve computation. We further introduce a simple yet effective filtering mechanism to rigorously ensure the connectivity of the output ridge curve. Due to its merits, including robustness and compatibility with any geodesic solver, our algorithm holds great potential for a wide range of applications. 
We demonstrate its utility in accurate geodesic distance querying and high-fidelity visualization of geodesic iso-lines.","Wei Liu, Pengfei Wang, Shuangmin Chen, Shiqing Xin, Changhe Tu, Ying He, Wenping Wang",Wei Liu
paper1111,Construction of the Ellipse with Maximum Area Inscribed in an Arbitrary Convex Quadrilateral,"An ellipse can be uniquely determined by five tangents. Given a convex quadrilateral, there are infinitely many ellipses inscribed in it, but the one with maximum area is unique. Finding the largest ellipse inscribed in a given convex quadrilateral is a very difficult problem. In this paper, we give a concise and effective solution to this problem. Our solution is composed of three steps: First, we transform the maximal ellipse construction problem into the minimal quadrilateral construction problem by an affine transformation. Then, we convert the construction problem into a conditional extremum problem by analyzing the key angles. Finally, we derive the solution of the conditional extremum problem with a Lagrange multiplier. Based on this conclusion, we design an algorithm to perform the construction. Numerical experiments show that the ellipse constructed by our algorithm has the maximum area. It is interesting and surprising that our construction only needs to solve quadratic equations, which means the geometric information of the ellipse can even be derived with compass-and-straightedge constructions. The solution of this problem means that all construction problems of conics with extremal area from given tangents are solved, which is a necessary step toward solving more problems of constructing ellipses with extremal areas.","Long Ma, Yuanfeng Zhou",Long Ma
paper1124,Real-Time Volume Rendering with Octree-Based Implicit Surface Representation,"Recent breakthroughs in neural radiance fields have significantly advanced the field of novel view synthesis and 3D reconstruction from multi-view images. However, the prevalent neural volume rendering techniques often suffer from long rendering time and require extensive network training. To address these limitations, recent initiatives have explored explicit voxel representations of scenes to expedite training. Yet, they often fall short in delivering accurate geometric reconstructions due to a lack of model compactness. In this paper, we propose a novel octree-based approach for the reconstruction of implicit surfaces from multi-view images. Leveraging an explicit, network-free data structure, our method substantially increases rendering speed, achieving real-time performance. Moreover, our reconstruction technique demonstrates remarkable efficiency, comparable to state-of-the-art network-based training methods.","Jiaze Li, Luo Zhang, Jiangbei Hu, Zhebin Zhang, Hongyu Sun, Gaochao Song, Ying He",Jiaze Li
paper1132,3D Shape Descriptor Design Based on HKS and Persistent Homology with Stability Analysis,"In recent years, with the rapid development of computer-aided geometric design and computer graphics, a large number of 3D models have emerged, making it a challenge to quickly find models of interest. As a concise and informative representation of 3D models, shape descriptors are a key factor in achieving effective retrieval. In this paper, we propose a novel global descriptor for 3D models that incorporates both geometric and topological information. We refer to this descriptor as the persistent heat kernel signature descriptor (PHKS). Constructed by concatenating our isometry-invariant geometric descriptor with a topological descriptor, PHKS possesses exceptional recognition ability while remaining insensitive to noise and efficient to compute. Retrieval experiments of 3D models on benchmark datasets show considerable performance gains of our method compared to other descriptors based on HKS and advanced topological descriptors.","Zitong He, Peisheng Zhuo, Hongwei Lin",Zitong He
paper1134,Physics-Aware Iterative Learning and Prediction of Saliency Map for Bimanual Grasp Planning,"Learning the skill of bimanual object grasping can extend the capabilities of robotic systems when grasping large or heavy objects. However, it requires a much larger search space for grasp points than single-handed grasping and a large number of bimanual grasping annotations for network learning, making both data-driven and analytical grasping methods inefficient and insufficient. We propose a framework for bimanual grasp saliency learning that aims to predict the contact points for bimanual grasping based on existing human single-handed grasping data. We learn saliency correspondence vectors from minimal bimanual contact annotations that establish correspondences between the grasp positions of both hands, eliminating the need for training on a large-scale bimanual grasp dataset. The existing single-handed grasp saliency value serves as the initial value for bimanual grasp saliency, and we learn a saliency adjustment score that is added to the initial value to obtain the final bimanual grasp saliency value, enabling the prediction of preferred bimanual grasp positions from single-handed grasp saliency. We also introduce a physics-balance loss function and a physics-aware refinement module that enable physical grasp balance, enhancing generalization to unknown objects. Comprehensive experiments in simulation and comparisons on dexterous grippers demonstrate that our method can achieve balanced bimanual grasping effectively.","Shiyao Wang, Xiuping Liu, Charlie Wang, Jian Liu",Jian Liu
paper1148,Feature-preserving Quadrilateral Mesh Boolean Operation with Cross-Field Guided Layout Blending,"Compared to triangular meshes, high-quality quadrilateral meshes offer significant advantages in the field of simulation. However, generating high-quality quadrilateral meshes has always been a challenging task. By synthesizing high-quality quadrilateral meshes based on existing ones through Boolean operations such as mesh intersection, union, and difference, the automation level of quadrilateral mesh modeling can be improved. This significantly reduces modeling time. We propose a feature-preserving quadrilateral mesh Boolean operation method that can generate high-quality all-quadrilateral meshes through Boolean operations while preserving the geometric features and shape of the original mesh. Our method, guided by cross-field techniques, aligns mesh faces with geometric features of the model and maximally preserves the original mesh's geometric shape and layout. Compared to traditional quadrilateral mesh generation methods, our approach demonstrates higher efficiency, offering a substantial improvement to the pipeline of mesh-based modeling tools.","Weiwei Zheng, Haiyan Wu, Gang Xu, Ran Ling, Renshu Gu",Haiyan Wu
paper1159,FuncScene: Function-centric Indoor Scene Synthesis via a Variational AutoEncoder Framework,"One of the main challenges of indoor scene synthesis is preserving the functionality of synthesized scenes to create practical and usable indoor environments. Function groups exhibit the capability of balancing the global structure and local scenes of an indoor space. In this paper, we propose a function-centric indoor scene synthesis framework, named FuncScene. Our key idea is to use function groups as an intermediary to connect the local scenes and the global structure, thus achieving a coarse-to-fine indoor scene synthesis while maintaining the functionality and practicality of synthesized scenes. Indoor scenes are synthesized by first generating function groups using generative models and then instantiating them by searching for and matching specific function groups from a dataset. Moreover, the proposed framework makes it easier to achieve multi-level generation control of scene synthesis, which is challenging for previous works. Extensive experiments on various indoor scene synthesis tasks demonstrate the validity of our method. Qualitative and quantitative evaluations show the proposed framework outperforms the existing state-of-the-art.","Wenjie Min, Wenming Wu, Gaofeng Zhang, Liping Zheng",Wenjie Min
paper1166,PointeNet: A Lightweight Framework for Effective and Efficient Point Cloud Analysis,"The conventional wisdom in point cloud analysis predominantly explores 3D geometries. It is often achieved through the introduction of intricate learnable geometric extractors in the encoder or by deepening networks with repeated blocks. However, these methods contain a significant number of learnable parameters, resulting in substantial computational costs and imposing memory burdens on CPU/GPU. Moreover, they are primarily tailored for object-level point cloud classification and segmentation tasks, with limited extensions to crucial scene-level applications, such as autonomous driving. To this end, we introduce PointeNet, an efficient network designed specifically for point cloud analysis. PointeNet distinguishes itself with its lightweight architecture, low training cost, and plug-and-play capability, while also effectively capturing representative features. The network consists of a Multivariate Geometric Encoding (MGE) module and an optional Distance-aware Semantic Enhancement (DSE) module. MGE employs operations of sampling, grouping, pooling, and multivariate geometric aggregation to lightweightly capture and adaptively aggregate multivariate geometric features, providing a comprehensive depiction of 3D geometries. DSE, designed for real-world autonomous driving scenarios, enhances the semantic perception of point clouds, particularly for distant points. Our method demonstrates flexibility by seamlessly integrating with a classification/segmentation head or embedding into off-the-shelf 3D object detection networks, achieving notable performance improvements at a minimal cost. Extensive experiments on object-level datasets, including ModelNet40, ScanObjectNN, ShapeNetPart, and the scene-level dataset KITTI, demonstrate the superior performance of PointeNet over state-of-the-art methods in point cloud analysis. 
Notably, PointeNet outperforms PointMLP with significantly fewer parameters on ModelNet40, ScanObjectNN, and ShapeNetPart, and achieves a substantial improvement of over 2% in 3D AP-R40 for PointRCNN on KITTI with a minimal parameter cost of 1.4 million.","Lipeng Gu, Xuefeng Yan, Liangliang Nan, Dingkun Zhu, Honghua Chen, Weiming Wang, Mingqiang Wei",Lipeng Gu
paper1167,3D Auxetic Linkage Based on Kirigami,"The structural design of 3D auxetic linkages is a burgeoning field in digital manufacturing. This article presents a novel algorithm for designing 3D auxetic linkage structures based on Kirigami principles to address existing limitations. The 3D input model is initially mapped to a 2D space using conformal mapping based on the BFF method. This is followed by 2D re-meshing using an equilateral triangle mesh. Subsequently, a 3D topological mesh of the auxetic linkage is calculated through inverse mapping based on directed area. We then introduce new basic rotating and non-rotating units, employing them as the initial structure of the 3D auxetic linkage in accordance with Kirigami techniques. Lastly, a deformation energy function is defined to optimize the shape of the rotating units. The vertex coordinates of the non-rotating units are updated according to the optimized positions of the rotating units, thereby generating an optimal 3D auxetic linkage structure. Experimental results validate the effectiveness and accuracy of our algorithm. Quantitative analyses of structural porosity and optimization accuracy, as well as comparisons with related works, indicate that our algorithm yields structures with smaller shape errors.","Xiaopeng Sun, Shihan Liu, Zhiqiang Luo",Xiaopeng Sun
paper1173,BrepMFR: Enhancing Machining Feature Recognition in B-rep Models through Deep Learning and Domain Adaptation,"Feature Recognition (FR) plays a crucial role in modern digital manufacturing, serving as a key technology for integrating Computer-Aided Design (CAD), Computer-Aided Process Planning (CAPP) and Computer-Aided Manufacturing (CAM) systems. The emergence of deep learning methods in recent years offers a new approach to address challenges in recognizing highly intersecting features with complex geometric shapes. However, due to the high cost of labeling real CAD models, neural networks are usually trained on computer-synthesized datasets, resulting in noticeable performance degradation when applied to real-world CAD models. Therefore, we propose a novel deep learning network, BrepMFR, designed for Machining Feature Recognition (MFR) from Boundary Representation (B-rep) models. We transform the original B-rep model into a graph representation as network-friendly input, incorporating local geometric shape and global topological relationships. Leveraging a graph neural network based on Transformer architecture and graph attention mechanism, we extract the feature representation of high-level semantic information to achieve machining feature recognition. Additionally, employing a two-step training strategy under a transfer learning framework, we enhance BrepMFR's generalization ability by adapting synthetic training data to real CAD data. Furthermore, we establish a large-scale synthetic CAD model dataset inclusive of 24 typical machining features, showcasing diversity in geometry that closely mirrors real-world mechanical engineering scenarios. Extensive experiments across various datasets demonstrate that BrepMFR achieves state-of-the-art machining feature recognition accuracy and performs effectively on CAD models of real-world mechanical parts.","Shuming Zhang, Zhidong Guan, Hao Jiang, Xiaodong Wang, Pingan Tan",Shuming Zhang
paper1184,Voronoi-based Splinegon Decomposition and Shortest-Path Tree Computation,"In motion planning, two-dimensional (2D) splinegons are typically used to represent the contours of 2D objects. In general, a 2D splinegon must be pre-decomposed to support rapid queries of shortest paths or visibility. Herein, we propose a new region decomposition strategy, known as the Voronoi-based decomposition (VBD), based on the Voronoi diagram of curved boundary-segment generators (either convex or concave). The number of regions in the VBD is O(n+c1). Compared with the well-established horizontal visibility decomposition (HVD), whose complexity is O(n+c2), the VBD generally contains fewer regions because c1 < c2, where n is the number of vertices of the input splinegon, and c1 and c2 are the numbers of inserted vertices at the boundary. We systematically discuss the usage of the VBD. Based on the VBD, the shortest-path tree (SPT) can be computed in linear time. Statistics show that the VBD performs faster than the HVD in SPT computations. Additionally, based on the SPT, we design algorithms that can rapidly compute the visibility between two points, the visible area of a point/line-segment, and the shortest path between two points.","Xiyu Bao, Meng Qi, Chenglei Yang, Wei Gai",Wei Gai
paper1194,Unpaired High-Quality Image-Guided Infrared and Visible Image Fusion via Generative Adversarial Network,"Current infrared and visible image fusion (IVIF) methods lack ground truth and require prior knowledge to guide the feature fusion process. However, in the fusion process, these features have not been placed in an equal and well-defined position, which causes the degradation of image quality. To address this challenge, this study develops a new end-to-end model, termed unpaired high-quality image-guided generative adversarial network (UHG-GAN). Specifically, we introduce the high-quality image as the reference standard of the fused image and employ a global discriminator and a local discriminator to identify the distribution difference between the high-quality image and the fused image. Through adversarial learning, the generator can generate images that are more consistent with high-quality expression. In addition, we also design a Laplacian pyramid augmentation (LPA) module in the generator, which integrates multi-scale features of source images across domains so that the generator can more fully extract the structure and texture information. Extensive experiments demonstrate that our method can effectively preserve the target information in the infrared image and the scene information in the visible image and significantly improve the image quality.","Hang Li, Zheng Guan, Xue Wang, Qiuhan Shao",Hang Li
,Shape-preserving interpolation on surfaces via variable-degree splines [CAGD],"This paper proposes two geodesic-curvature-based criteria for shape-preserving interpolation on smooth surfaces, the first criterion being of non-local nature, while the second criterion is a local (weaker) version of the first one. These criteria are tested against a family of on-surface splines obtained by composing the parametric representation of the supporting surface with variable-degree (≥3) splines amended with the preimages of the shortest-path geodesic arcs connecting each pair of consecutive interpolation points. After securing that the interpolation problem is well posed, we proceed to investigate the asymptotic behaviour of the proposed on-surface splines as degrees increase. First, it is shown that the local-convexity sub-criterion of the local criterion is satisfied. Second, moving to non-local asymptotics, we prove that, as degrees increase, the interpolant tends uniformly to the spline curve consisting of the shortest-path geodesic arcs. Then, focusing on isometrically parametrized developable surfaces, sufficient conditions are derived, which secure that all conditions of the first (strong) criterion for shape-preserving interpolation are met. Finally, it is proved that, for adequately large degrees, the aforementioned sufficient conditions are satisfied. This permits building an algorithm that, after a finite number of iterations, provides a shape-preserving interpolant for a given data set on a developable surface.","P.D. Kaklis, S. Stamatelopoulos, A.-A.I. Ginnis",P.D. Kaklis
The following entries are posters,,,,
paper1103,MCR-Net: A robust end-to-end craniofacial mesh registration network,"Craniofacial registration is crucial for craniofacial reconstruction, which has been widely used in forensic medicine, criminal investigation, archaeology, etc. Complex topology and low-quality three-dimensional (3D) models make craniofacial registration challenging due to the high degree of freedom in point cloud deformation. In this work, we propose a craniofacial mesh registration network, MCR-Net, that deforms a reference craniofacial mesh end-to-end to achieve registration with a target craniofacial mesh. MCR-Net infers the displacement of each vertex while keeping the mesh connectivity of the reference craniofacial mesh fixed. MCR-Net employs a differentiable mesh sampling operator, which enables the registration of reference and target models with different mesh densities while maintaining the high-quality mesh connectivity of the reference model. To align the reference and target models, the Wasserstein distance loss combined with the Chamfer loss is introduced as an unsupervised loss function, and a combination of a local permutation-invariant loss and a mesh Laplacian loss is used to maintain the local quality of the mesh. Experimental results show that the method achieves high registration accuracy and is robust to low-quality models.","Zhenyu Dai, Junli Zhao, Fuqing Duan, Xuesong Wang, Zhongke Wu, Zhenkuan Pan, Mingquan Zhou",Zhenyu Dai
paper1131,Automatic layout design for exhibition hall,"The design of an exhibition hall is a challenging task, which is a multi-objective optimization problem. We introduce an automatic layout generation method for an exhibition hall or similar applications. The proposed method employs medial axis transform results to divide the exhibition hall into several subspaces. Each subspace is filled with appropriate exhibits according to their topological order as the initial condition. Based on a cost function for the exhibition scenario, a simulated annealing method is introduced to optimize the layout in different subspaces to generate a suitable layout. Multiple types of exhibition halls were selected for experiments, and the generated results demonstrate the effectiveness of the method. According to the design principles, user studies are conducted to compare the results from different methods. Compared with existing methods, the proposed method has advantages in deployment efficiency and effectiveness, and it can be applied to various types of exhibitions.","Li Cao, Xiang Cheng, Wenming Wu, Liping Zheng",Li Cao