WO2006058292A9 - Methods for retrieving shapes and drawings - Google Patents

Methods for retrieving shapes and drawings

Info

Publication number
WO2006058292A9
WO2006058292A9 (PCT/US2005/042925)
Authority
WO
WIPO (PCT)
Prior art keywords
representations
dimensional
shape
descriptors
drawings
Prior art date
Application number
PCT/US2005/042925
Other languages
French (fr)
Other versions
WO2006058292A3 (en)
WO2006058292A8 (en)
WO2006058292A2 (en)
Inventor
Karthik Ramani
Jiantao Pu
Original Assignee
Purdue Research Foundation
Karthik Ramani
Jiantao Pu
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Purdue Research Foundation, Karthik Ramani, Jiantao Pu filed Critical Purdue Research Foundation
Publication of WO2006058292A2 publication Critical patent/WO2006058292A2/en
Publication of WO2006058292A9 publication Critical patent/WO2006058292A9/en
Publication of WO2006058292A3 publication Critical patent/WO2006058292A3/en
Publication of WO2006058292A8 publication Critical patent/WO2006058292A8/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06F16/532 Query formulation, e.g. graphical querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/478 Contour-based spectral representations or scale-space representations, e.g. by Fourier analysis, wavelet analysis or curvature scale-space [CSS]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/752 Contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/647 Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/42 Document-oriented image-based pattern recognition based on the type of document
    • G06V30/422 Technical drawings; Geographical maps

Definitions

  • drawings and models of parts or structures are used for explanation of how the apparatus works, for indication of how to construct the apparatus, or for other purposes.
  • drawings and models may be graphically represented and stored in electronic versions, as in a computer system, and such systems may be used to operate on or analyze them.
  • FIG. 1 shows the projection of 2D images from a 3D model.
  • FIG. 2A and 2B are representations of an embodiment of a method using a 3D model projected onto a 2D plane.
  • FIG. 3 is a schematic representation of an embodiment of a drawing and/or model analysis and retrieval system.
  • FIG. 4 shows a comparison of different posing methods.
  • FIG. 5 shows a comparison of different posing methods.
  • FIG. 6 is a diagram representing four transitions to view generation, according to an example embodiment.
  • FIG. 7 shows orientations of an example 3D model.
  • FIG. 8 shows examples of polygonal area arrangements.
  • FIG. 9 shows an example of contact area calculation.
  • FIG. 10 shows embodiments of two different tessellation schemes, one along latitude and longitude and the other based on regular polyhedra.
  • FIG. 11 shows example orientation determinations.
  • FIG. 12 shows an example user interface with an example model and example drawings.
  • FIG. 13 is a flowchart showing an embodiment of a user-feedback method.
  • FIG. 14 shows an example of extending a two-dimensional drawing into three dimensions.
  • FIG. 15 shows an example of extending a two-dimensional drawing into three dimensions.
  • FIG. 16 shows a plot of efficiency and performance tradeoff.
  • FIG. 17 shows an example of a biased sampling of an example embodiment.
  • FIG. 18 shows the results of a retrieval discrimination evaluation.
  • FIG. 19 shows the average precision-recall curves for different methods.
  • FIG. 20 shows a diagram of an example model.
  • FIG. 21 shows a listing of example search results.
  • FIG. 22 shows an embodiment of a model analysis and comparison method.
  • FIG. 23 shows an embodiment of a model generation and comparison method.
  • FIG. 24 shows an example user interface with an example model and example drawings.
  • FIG. 25 shows embodiments of an example user interface with sketches and retrieved results.
  • FIG. 26 is a flowchart showing an embodiment of a user-feedback method.
  • FIG. 27 is a flowchart showing an embodiment of a user-feedback method.
  • FIG. 28 is a flowchart showing an embodiment of a user-feedback method.
  • FIG. 29 is a flowchart showing an embodiment of a user-feedback method.
  • FIG. 30 shows an embodiment of a drawing retrieval system.
  • FIG. 31 shows example retrieval results.
  • FIG. 32 shows example descriptor results.
  • FIG. 33 shows the concepts of precision and recall.
  • FIG. 34 illustrates a retrieval discrimination evaluation
  • FIG. 35 depicts the recall-precision curves for an example 3D model.
  • FIG. 36 shows an embodiment of a user interface with sketches and retrieved results.

DETAILED DESCRIPTION
  • 2D input is intended to encompass any 2D depiction including, but not limited to, sketches drawn by hand or with the aid of a computer, drawings, flowcharts, diagrams, tablet input, etc.
  • the front view reflects the left-right and top-down relationships of a 3D model's shape; the top view reflects the left-right and front-back relationships; and the side view reflects the top-down and front-back relationships.
  • one way to determine the projection planes and directions is to find a robust bounding box that is in accordance with a human's perception mechanism.
  • the aim is to simulate the drawing process in which engineers express a 3D shape concept on a piece of paper.
  • 3D polygon meshes are used to represent 3D models, and shape contours are needed for view generation.
  • An embodiment of an architectural arrangement for shape generation, orientation, and translation is shown schematically in FIG. 3.
  • the components of the architecture are implemented in a machine-accessible and/or readable medium and are accessible over a network.
  • the network may be wired, wireless, or a combination of wired and wireless.
  • the various components and techniques include: (1) 2D drawing generation; (2) 2D drawing descriptors; (3) user queries; and (4) shape retrieval results.
  • 2D Drawing Generation: this bridges the gap between 3D models and 2D drawings by representing 3D models in the form of three orthogonal drawing-like views.
  • Two functions, 3D pose determination and 2D view generation, may be included.
  • the first function unifies the space between 2D drawings and 3D models by finding a set of robust orthogonal orientations (a pose) along which designers usually depict 3D models in the form of 2D drawings.
  • the second function sets up the correspondence between 2D drawings and 3D models by generating orthogonal drawing-like views or projections from 3D models along the pose orientations. Consequently, both 3D models and 2D drawings can be compared as 2D drawings.
  • the dashed box in FIG. 3 illustrates this generation. A more detailed explanation is provided herein and below.
  • the final retrieved results can be 2D drawings or 3D models, and they are ranked according to their similarities with the input query.
  • a 3D object as a polygonal surface model with uniform mass distribution is one example considered herein.
  • the object can be approximated by multiple polygon patches.
  • the pose determination step can generally be described as follows. Where a model is not built with CAD software and therefore does not have pre-defined viewing directions, such as with models built with digitizing tools such as scanners, the desired pose, in accordance with most designers' perceptions, has to be determined. Thus, given a 3D shape S represented by a series of polygons p_i (0 ≤ i < n), a bounding box is found with three orthogonal axes (X, Y, Z) that encloses S and meets the criterion that similar shapes have similar bounding box orientations, which should be in accordance with human perception.
  • the same bounding box with different axes means different poses; i.e., given an object, the orientation with orthogonal axes (X, Y, Z) is different from the orientation with axes (-X, Y, Z).
  • the criterion can be used to align similar objects correctly and assure similar objects have similar orientation.
  • a unique bounding box that tightly encloses a three-dimensional model is found so as to determine projection planes and directions.
  • the side faces of the bounding box can be regarded as the projection planes, while their normals are the projection directions.
  • MND: maximum normal distribution
  • the polygons of a representation of a 3D shape are separable into triangles.
  • the normal direction N_i for each triangle Δpqr is computed and normalized. It is the cross product of any two edges:
  • the area A_i of each triangle of the polygonal representation of the 3D shape is calculated, and the sum of the areas of all triangles with the same or opposite normals is computed.
  • the normals that are located in the same direction belong to a particular distribution.
  • the center and the half-length of the bounding box are calculated.
  • the center and half-length can be defined by Equation (15) and half the longer diagonal length of the bounding box.
  • the boxes shown in the bottom row are obtained by the MND method.
  • the bounding boxes are consistent with each other.
  • the result of the cross product of the above two normals is regarded as the third direction.
  • the three directions can be regarded as the three directions of the bounding box.
  • the time complexity is O(n²) and the space complexity is O(n²).
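The MND construction described above (per-triangle normals from a cross product, area sums over same or opposite normals, largest sum wins) can be sketched as follows. This is an illustrative reconstruction, not the patent's pseudocode; the function names and the rounding-based bucketing of normals are assumptions:

```python
import numpy as np

def triangle_normals_and_areas(vertices, faces):
    """Per-triangle unit normals (cross product of two edges) and areas."""
    p, q, r = (vertices[faces[:, k]] for k in range(3))
    cross = np.cross(q - p, r - p)
    norms = np.linalg.norm(cross, axis=1)
    return cross / norms[:, None], norms / 2.0

def maximum_normal_distribution(vertices, faces, decimals=6):
    """Return the unit direction whose aligned (same or opposite) triangle
    area sum is largest: one axis of the MND bounding box."""
    normals, areas = triangle_normals_and_areas(vertices, faces)
    buckets = {}
    for n, a in zip(normals, areas):
        # Treat opposite normals as one distribution by fixing a sign.
        if (n[0], n[1], n[2]) < (0.0, 0.0, 0.0):
            n = -n
        key = tuple(np.round(n, decimals))
        buckets[key] = buckets.get(key, 0.0) + a
    best = max(buckets, key=buckets.get)
    return np.array(best)

# Unit cube: the largest summed area is shared by the three axis directions.
V = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
F = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5], [0, 4, 5], [0, 5, 1],
              [2, 3, 7], [2, 7, 6], [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
axis = maximum_normal_distribution(V, F)
```

The second and third box axes would be found the same way among directions orthogonal to the first, with the cross product supplying the last one, as the text notes.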
  • an Inertial Principal Axes (IPA) method can be used to obtain the bounding box.
  • an orientation determination method based on stability analysis can be used. It arises from the premise that a human being tries to place an object in its most stable orientation, and that similar objects will have similar stable orientations. This concept can be explained by the examples shown in FIG. 7. It is natural for a human being to place the "cup" object in the orientation shown in FIG. 7(A), because this position is more stable than the one in FIG. 7(B). To determine the stable orientations, static equilibrium conditions and concepts of physical stability can be used. Given a 3D polygon model and a horizontal plane P_zero, there are three kinds of contact between this model and the plane, as FIG. 8 shows.
  • for the first two cases in FIG. 8, the contact area is the summed area of all the contact polygons, since they connect with each other.
  • in the remaining case, the contact area is formed by polygons dispersed over the horizontal plane.
  • the area bounded by these polygons is defined as virtual contact area (VCA).
  • For the sake of computational efficiency, an embodiment of VCA disclosed herein uses an approximating strategy. For generality, this problem can be represented as shown in FIG. 9. Since it is not easy to determine the region of the VCA, the average area of two bounding boxes is used to approximate the contact area.
  • the first box is an axis-aligned bounding box A, whose corners are determined by the minimum and maximum coordinates along x and y axes:
  • the second bounding box is a quadrilateral B whose four corners pass through the centroids of the four outermost polygons.
  • the outermost polygons are determined by the distances from the respective four corners of the bounding box A to the center of the polygons.
  • the final contact area C is approximated as the average of the areas of quadrilaterals A and B:
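The VCA approximation just described might be sketched as below; box A is the axis-aligned bounding box of the contact-polygon centroids, box B passes through the centroid nearest each corner of A, and C averages their areas. The centroid-selection details are illustrative assumptions:

```python
import numpy as np

def quad_area(corners):
    """Area of a quadrilateral via the shoelace formula (corners in order)."""
    x, y = corners[:, 0], corners[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def virtual_contact_area(centroids):
    """Approximate VCA as the average area of two bounding quadrilaterals:
    A, the axis-aligned bounding box of the contact-polygon centroids, and
    B, the quadrilateral through the centroid closest to each corner of A."""
    pts = np.asarray(centroids, float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    box_a = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                      [hi[0], hi[1]], [lo[0], hi[1]]])
    area_a = (hi[0] - lo[0]) * (hi[1] - lo[1])
    # For each corner of A, pick the outermost centroid (the one closest to it).
    box_b = np.array([pts[np.argmin(np.linalg.norm(pts - c, axis=1))]
                      for c in box_a])
    return 0.5 * (area_a + quad_area(box_b))

# Four contact patches at the corners of a unit square: A and B coincide.
vca = virtual_contact_area([[0, 0], [1, 0], [1, 1], [0, 1]])
```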
  • a 3D shape S can be represented as a 3D mesh composed of a series of discrete 2D polygons p_i (0 ≤ i ≤ n−1) with uniform density.
  • a triple S = {p_i : (N_i, A_i, D_i), 0 ≤ i ≤ n−1} is used to represent a 3D shape, in which N_i represents the normal of polygon p_i, A_i represents the area of polygon p_i, and D_i represents the signed distance between the mass center C and the polygon p_i.
  • D_i can be the distance between any predefined origin in 3D space and the polygon p_i.
  • the mass center is adopted as the origin for the sake of simplicity.
  • the signed distance D_i is equal to the signed projection of the vector from a vertex of p_i to the mass center C along the normal N_i:
  • TA_i is the summed area along the normal of the i-th polygon.
  • D_0 is an intermediate variable.
  • lines 3-8 compute the summed area TA_i for polygons with the same normal and the same distance.
  • line 10 finds the normal associated with the maximum summed area, and its computation complexity is O(n). It can be concluded from the above pseudocode that the total computation complexity is O(2n + 0.5n²). When an object contains very many polygons (e.g., 10⁵-10⁶), the computation load will be heavy.
  • a look-up table-based algorithm can be used with a computation complexity O(n+C), where C is a constant.
  • This process is similar to a tessellation process that divides a unit sphere uniformly along the latitude and longitude, as FIG. 10 shows.
  • Each of the octants has m² surface patches.
  • the normals that fall into the same surface patch are regarded as the same normal. Therefore, instead of comparing every two normals of all the polygons, all the normals are classified by traversing each polygon and checking which surface patch it belongs to. In this process, these patches act as a look-up table.
  • This modified algorithm is summarized in Table 2 in the form of C-style pseudocode. Table 2: An improved algorithm based on a look-up table
  • the discrete patches that constitute the sphere surface are not uniform. However, as the division resolution increases, the divided patches on the sphere become more similar to each other. A division number of 1000 can be used. This scheme can be used because it is easy to decide in which cell a particular surface normal is located, and the projection computation involved in the regular-polyhedron-based method is not needed.
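A minimal sketch of the look-up-table idea, assuming a dictionary keyed by (latitude, longitude) bin indices stands in for the table; normals landing in the same patch are treated as equal, so classification is a single O(n) pass:

```python
import math

def patch_index(normal, m=1000):
    """Map a unit normal to a (latitude, longitude) patch index on a unit
    sphere divided m ways along each angle."""
    nx, ny, nz = normal
    theta = math.acos(max(-1.0, min(1.0, nz)))   # latitude,  [0, pi]
    phi = math.atan2(ny, nx) % (2.0 * math.pi)   # longitude, [0, 2*pi)
    i = min(int(theta / math.pi * m), m - 1)
    j = min(int(phi / (2.0 * math.pi) * m), m - 1)
    return i, j

def dominant_normal_area(normals, areas, m=1000):
    """O(n + C) pass: bin each polygon's area by its normal's patch and
    return the patch holding the maximum summed area."""
    table = {}
    for n, a in zip(normals, areas):
        key = patch_index(n, m)
        table[key] = table.get(key, 0.0) + a
    return max(table.items(), key=lambda kv: kv[1])

patch, summed = dominant_normal_area(
    [(0, 0, 1), (0, 0, 1), (1, 0, 0)], [0.5, 0.7, 0.4])
```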
  • the methods disclosed can retrieve orientations more similar to human perception than the EGI & CEGI methods.
  • the top row shows the results obtained by the method disclosed, while the bottom row shows the results by the EGI & CEGI methods.
  • VCA can incorporate these structural shapes into one uniform framework with the help of stability analysis.
  • In FIG. 11, some examples from different domains are presented to show the generality of this proposed method. Some objects are formed from different parts, such as (a), (e), and (f). The plant object in (d) shows the robustness of this method, since the leaves can be regarded as noise.
  • the aim is to create multiple 2D representations of a given 3D model.
  • One such representation is analogous to a type of representation engineers use to represent 3D objects using 2D views. This type of representation can be referred to as the full view and is one in which all the features, outside as well as hidden, are displayed. Another type of representation can be used to display only the outside features; this type can be referred to as the silhouette view. Yet another type of view, called the contour view, can be used to depict the edges of the shadow cast by a 3D model.
  • at the contour level, there are three different views along the principal axes; at the silhouette level, there are six different views; and at the full level, there are three different views along the principal axes.
  • the representations of contour, silhouette and full level can be determined with the aid of a CAD system using predefined coordinate axes, or could be machine determined using the techniques described. Drawings and models in a CAD system typically have axis systems predefined by a user, and if no such axes exist then the disclosed methods can be used to define the axes. For example, a point cloud may have an arbitrary axis system that can be rotated to provide new axes using the techniques described. After the axis system has been rotated the contour, silhouette and full levels can then be determined.
  • the 3D shape-matching problem is transformed into how to measure the similarity between 2D views, which can be illustrated by FIG. 12.
  • a two-dimensional shape distribution method is presented herein to measure the similarity between 2D views. It can be regarded as a derivation from the 3D case. In other words, the similarity between views can be obtained by measuring their 2D shape distributions.
  • one embodiment of a process to compute the degree of the similarity between 2D shapes can be summarized as three steps.
  • One step is random sampling on view edges.
  • the views are formed by a series of line segments. Some of them may overlap with each other.
  • a random sampling principle is adopted: select a line segment from the view randomly, then pick a point on that line segment randomly and save it into an array named S.
  • Another step is shape distribution generation.
  • the Euclidean distance between two random sampled points is chosen to measure the shape features of polygons because other distance metrics are designed specially for 3D cases.
  • the 2D shape distribution can be generated.
  • FIG. 13 shows the two distributions formed by the views in FIG. 12. From their visual appearance, the two views differ greatly. The next step is to quantify this difference.
  • a final step is similarity measuring. Due to the fact that two different models may be of different size, a normalization step has to be taken to measure their difference on the basis of one common standard.
  • two normalization methods are available: (a) align the maximum D2 distance values, and (b) align the average D2 distance values.
  • the first method adjusts the maximum values of the two shape distributions to the same value, which is then used to normalize the shape distributions.
  • the other method uses the mean distance value to normalize the shape distribution.
  • the second method is adopted as the normalization standard.
  • the similarity between two views can be obtained by calculating the difference between their distributions in the form of a histogram.
  • the Minkowski distance is used due to its simplicity, and can be expressed as
  • n is the number of histogram bins of the shape distribution curves, and s and k are the probabilities of the two distributions at a certain distance.
  • the 2D shape distribution approach has the same advantages as the 3D case. It is simple and easy to implement, and it also has some unique properties which are: (1) insensitivity to geometric noise; (2) invariance to translation, rotation and scaling; and (3) lack of necessity to find out the feature correspondences between models.
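The three steps above (random sampling on view edges, distribution generation, similarity measuring) can be sketched as follows, using the mean-distance normalization the text adopts and an L1 Minkowski comparison. Names, bin counts, and sampling sizes are illustrative, not the patent's values:

```python
import math
import random

def sample_points(segments, n):
    """Length-weighted random points on a set of 2D line segments."""
    lengths = [math.dist(a, b) for a, b in segments]
    pts = []
    for _ in range(n):
        (ax, ay), (bx, by) = random.choices(segments, weights=lengths)[0]
        t = random.random()
        pts.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return pts

def d2_distribution(segments, n_pairs=20000, bins=64, seed=0):
    """D2 histogram: pairwise point distances normalized by their mean."""
    random.seed(seed)
    p = sample_points(segments, n_pairs)
    q = sample_points(segments, n_pairs)
    d = [math.dist(a, b) for a, b in zip(p, q)]
    mean = sum(d) / len(d)
    hist = [0] * bins
    for v in d:
        hist[min(int(v / (2.0 * mean) * bins), bins - 1)] += 1
    return [h / len(d) for h in hist]

def minkowski(h1, h2, p=1):
    """L_p (Minkowski) distance between two normalized histograms."""
    return sum(abs(a - b) ** p for a, b in zip(h1, h2)) ** (1.0 / p)

square = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
big_square = [((2 * a, 2 * b), (2 * c, 2 * d)) for (a, b), (c, d) in square]
# Scale invariance: a square and its 2x copy give (near-)identical histograms.
diff = minkowski(d2_distribution(square), d2_distribution(big_square))
```

The mean-based normalization is what makes `diff` vanish for scaled copies, matching the invariance-to-scaling property claimed above.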
  • an additional step is needed. Its aim is to find the correspondences between views of two models. If the view generation step is carried out without culling the occluded triangles, then there are only three different views, because the views generated from the positive and negative directions are the same. If the occluded triangles are culled, then there are six different views, in which the projections along different directions are not the same because the internal structure is not taken into account. To determine the partner of one view, compare it with all the views of the other model and select the most similar one as the corresponding view. In this way, the views from different models can be grouped into a series of pairs. By adding the similarities of these view pairs together, the similarity between models can be obtained. Experiments below show that the method with the step of culling the occluded triangles achieves better retrieval performance. Embodiments of two methods for computing the shape similarity between 2D drawings are described next.
  • the first approach represents a drawing as a spherical function by transforming it from 2D space into 3D space and then employs a fast spherical harmonics transformation to get a rotation invariant descriptor.
  • the second method represents the shape of a 2D drawing from the statistics perspective as a distance distribution between two random sampled points.
  • spherical harmonics representation can be successfully applied to 3D shape matching. It arises on the sphere in the same way that the Fourier exponential function arises on the circle. According to the theory of spherical harmonics, a function f(θ, φ) represented in spherical coordinates can be approximated with a sum of its spherical harmonics Y_l^m(θ, φ):
  • Equation (32) can be rewritten as
  • Equation (11) is an energy representation of the spherical function f(θ, φ).
  • f(θ, φ) has a valuable property: rotating a spherical function does not change its L₂ norm, i.e., its energy as represented by Equation (12) is a rotation invariant.
  • a strategy dubbed 2.5D spherical harmonic representation can extract a series of rotation invariants by transforming a 2D drawing from 2D space into 3D space uniquely.
  • the name "2.5D" arises from the fact that a 2D drawing is represented in a 3D space. The transformation is explained by the following steps.
  • Given a 2D drawing D (e.g., FIG. 14(a)), compute its axis-aligned bounding box B, as FIG. 14(b) shows, according to Equation (14).
  • Its radius r is equal to half the longer diagonal length of bounding box B.
  • the purpose is to ensure sphere S can enclose 2D drawing D completely. As described later, the spherical radius is also used for normalization.
  • the 2D drawing lies in the equator plane of sphere S.
  • the obtained sphere is shown in FIG. 14(c). For the sake of simplicity, we can position this sphere in a coordinate system xyz. The sphere center is located at the origin and the equator plane lies in the xy plane.
  • (3) Generate a set of rays uniformly, starting from the sphere center c and lying in the xy plane where the 2D drawing lies, and compute the intersection points between these rays and the 2D drawing D.
  • r is the radius of sphere S.
  • r is determined uniquely, and for an intersection point p_i, d_i is also uniquely determined.
  • the corresponding θ_i obtained by Equation (39) is unique. Therefore, a 2D drawing is uniquely transformed into a 3D spherical representation, i.e., the correspondence between a 2D drawing and its spherical function is one to one. This process is dubbed a 2.5D transformation, and FIGS. 14(e)-14(g) show the final 3D representation of the drawing in FIG. 14(a) from different perspectives.
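A rough sketch of the ray-casting steps above. Since Equation (39) is not reproduced here, the linear mapping from intersection distance d to elevation angle θ is an assumed stand-in; the ray-segment intersection is standard 2D geometry:

```python
import math

def ray_segment_distance(angle, seg):
    """Distance from the origin along a ray at `angle` to segment seg,
    or None if the ray misses it."""
    (x1, y1), (x2, y2) = seg
    dx, dy = math.cos(angle), math.sin(angle)
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:          # ray parallel to segment
        return None
    # Solve origin + t*(dx, dy) = (x1, y1) + u*(ex, ey) via 2D cross products.
    t = (x1 * ey - y1 * ex) / denom
    u = (x1 * dy - y1 * dx) / denom
    return t if t >= 0.0 and 0.0 <= u <= 1.0 else None

def spherical_samples(segments, radius, n_rays=360):
    """2.5D transform sketch: for each ray in the equator plane, lift the
    nearest intersection distance d to an elevation angle theta.
    theta = (d / r) * pi/2 is an illustrative stand-in for Equation (39)."""
    samples = []
    for k in range(n_rays):
        phi = 2.0 * math.pi * k / n_rays
        hits = [d for s in segments if (d := ray_segment_distance(phi, s))]
        if hits:
            theta = (min(hits) / radius) * (math.pi / 2.0)
            samples.append((phi, theta))
    return samples

# Centered unit square; sphere radius = half the bounding-box diagonal.
sq = [((-0.5, -0.5), (0.5, -0.5)), ((0.5, -0.5), (0.5, 0.5)),
      ((0.5, 0.5), (-0.5, 0.5)), ((-0.5, 0.5), (-0.5, -0.5))]
r = math.hypot(1.0, 1.0) / 2.0
pts = spherical_samples(sq, r)
```

Because d ≤ r by construction of the enclosing sphere, every sample's elevation stays within [0, π/2], so the drawing maps onto one hemisphere.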
  • the proposed 2.5D representation transforms a 2D drawing by elevating and projecting it on the surface of a cylinder.
  • FIG. 15 shows an example of this transformation. Notice that the geometric information is represented clearly in 3D space along the surface of a cylinder.
  • a fast spherical harmonics transformation method can be used, in which a spherical function of bandwidth B is sampled on the 2B-many Chebyshev points rather than the B-many Gaussian points. These sampled points form a 2B×2B equiangular grid along the longitude and latitude of a sphere, i.e., the sampling nodes (θ_j, φ_k) on this equiangular grid are
  • Equation (18) can be used to decide at which Chebyshev node (i, j) a sample (θ, φ) is located.
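The sampling grid can be sketched as below. The node formulas follow the standard Driscoll-Healy equiangular layout; the binning in `node_of` is an illustrative stand-in for Equation (18), which is not reproduced here:

```python
import math

def equiangular_grid(B):
    """2B x 2B sampling nodes for a bandwidth-B fast spherical harmonics
    transform: theta_j = pi*(2j+1)/(4B), phi_k = pi*k/B, j,k = 0..2B-1."""
    thetas = [math.pi * (2 * j + 1) / (4 * B) for j in range(2 * B)]
    phis = [math.pi * k / B for k in range(2 * B)]
    return thetas, phis

def node_of(theta, phi, B):
    """Grid index (i, j) that a sample (theta, phi) falls on, by uniform
    binning of [0, pi] x [0, 2*pi) into 2B cells per angle."""
    i = min(int(theta / math.pi * 2 * B), 2 * B - 1)
    j = min(int(phi / (2.0 * math.pi) * 2 * B), 2 * B - 1)
    return i, j

thetas, phis = equiangular_grid(16)
```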
  • a 3D shape has been represented as a signature named shape distribution that is formed by random points sampled uniformly from the shape surface.
  • a 2D shape distribution analog is derived. Experiments show this derivation is good at computing the similarity between 2D drawings, and it also allows users to emphasize local shapes by adjusting the sampling strategy.
  • This derived 2D shape distribution method will be described in detail as follows.
  • a 2D drawing is usually composed of some basic geometric entities, such as lines, circles, and arcs. For later sampling purposes, a discretization process is adopted to transform all entities contained in a drawing into a set of line segments. In this way, a 2D drawing S can be represented as
  • n is the total number of line segments included in stroke S
  • (x_i, y_i) and (x_i+1, y_i+1) are the two end points of a line segment.
  • an edge-enhancing procedure (e.g., a Canny edge detector) can be used.
  • a first step is to compute the summed length of all line segments included in stroke S.
  • the cumulative summed lengths are saved into a table T of size n, where n is the total number of line segments.
  • Table T can be represented by a linear array as Equation (22) shows.
  • L is the Euclidean distance between two points.
  • a second step is to generate a random real number r between 0 and the total length t_n-1, and then use the well-known binary-search algorithm to find the position m where r is located in the table. This position corresponds to the line segment ((x_m, y_m), (x_m+1, y_m+1)).
  • a third step is to generate a random real number l between 0 and 1. According to Equation (23), a sample point (x_l, y_l) is obtained and saved into an array A.
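The three sampling steps above (cumulative-length table, binary search for the segment, interpolation within it) can be sketched as:

```python
import bisect
import math
import random

def build_length_table(segments):
    """Table T of cumulative segment lengths (in the spirit of Equation (22))."""
    table, total = [], 0.0
    for a, b in segments:
        total += math.dist(a, b)
        table.append(total)
    return table

def sample_point(segments, table, rng=random):
    """One length-weighted sample: binary-search a random length r in T to
    find segment m, then interpolate with a second random number l."""
    r = rng.uniform(0.0, table[-1])
    m = bisect.bisect_left(table, r)
    (x1, y1), (x2, y2) = segments[m]
    l = rng.random()
    return x1 + l * (x2 - x1), y1 + l * (y2 - y1)

segs = [((0, 0), (3, 0)), ((3, 0), (3, 1))]   # lengths 3 and 1
T = build_length_table(segs)
random.seed(1)
pts = [sample_point(segs, T) for _ in range(4000)]
frac_long = sum(p[1] == 0.0 for p in pts) / len(pts)  # share from segment 0
```

Length weighting is what keeps the samples uniform over the drawing: roughly three quarters of the points land on the segment that carries three quarters of the total length.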
  • In the sampling procedure, two problems have to be considered: sampling density and sampling method. From the perspective of statistics, more samples approximate the original shape more precisely, but also need more computing resources (e.g., memory and time). Thus, there is a tradeoff between efficiency and precision.
  • FIG. 16 shows experimental results for this tradeoff. The horizontal axis represents the sampling density, while the vertical axis represents the time cost of the sampling procedure or the differences under different sampling densities. It can be concluded from the curves that for a 2D drawing, 10⁵ sampling point pairs are enough to achieve a good balance between precision and efficiency.
  • the next step is to build the corresponding distance histogram which is described by a shape function.
  • D2, i.e., the Euclidean distance between two points, is chosen as the shape function. Since 2D drawings usually have different geometric sizes, a normalization process is needed to account for this difference. A distance histogram can be built and normalized in this way.
  • a distance histogram H consists of h parts with uniform distance.
  • Second, determine a standard value N used for normalization.
  • Equation (24) uses the maximum distance among all sampled point pairs as the standard value.
  • the second option uses the average distance of all sampled point pairs as the standard value.
  • the shape histogram generated by a uniform sampling strategy reflects the global geometric properties of a 2D drawing.
  • a biased sampling strategy has also been implemented: users are allowed to specify a higher sampling rate on their desired local shape to emphasize the desired local shape.
  • two similar drawings and their shape histograms are shown in FIGS. 17(a)-17(d), respectively.
  • in FIG. 17(a), if users want to emphasize the local shape composed of the rectangle and the big middle circle, they can super-sample them interactively.
  • as the super-sampling rate of the local shape composed of the rectangle and the big middle circle changes from 200% to 500%, the corresponding histogram becomes more similar to the histogram of the shape shown in (b).
  • the Minkowski distance L_n is used because of its simplicity. Therefore, for two histograms H_1 and H_2, the similarity is
  • Equation (26) can be used to compute their similarity directly. Since a 3D model is represented by three orthogonal drawing-like views, a procedure is needed to find the correspondence between drawing-like views from different models. To simplify this process, the minimum values between all possible view-pairs can be used to represent their similarity:
  • m_1i represents view i of model M_1
  • m_2j represents view j of model M_2
  • (m_1i, m_2j) represents a view-pair between the 3D models. It is worth pointing out that a view-pair cannot be counted twice.
  • C_s is the similarity obtained by the 2.5D spherical harmonics method
  • C_d is the similarity obtained by the 2D distance histogram method
  • w_s and w_d are the weight values of the respective methods.
  • a higher weight value means that the corresponding method plays a more important role in differentiating a 2D drawing.
  • the "Combination" represents the combination of the two proposed methods using equal weights, i.e., (0.5, 0.5). From this precision-recall curve, it can be seen that the combined approach has the best performance.
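The view pairing and weighted combination might be sketched as follows. The greedy pairing, where each view is matched to its most similar unused partner so that no view-pair is counted twice, is one reading of the description above; the distance matrix and weights are illustrative:

```python
def view_pair_similarity(dist, views1=3, views2=3):
    """Pair each view of model 1 with its most similar unused view of model 2
    (a view-pair is never counted twice) and sum the pairwise values.
    dist[i][j] is the dissimilarity between view i of M1 and view j of M2."""
    used, total = set(), 0.0
    for i in range(views1):
        j = min((j for j in range(views2) if j not in used),
                key=lambda j: dist[i][j])
        used.add(j)
        total += dist[i][j]
    return total

def combined_similarity(c_s, c_d, w_s=0.5, w_d=0.5):
    """Weighted combination of the 2.5D spherical harmonics similarity C_s
    and the 2D distance histogram similarity C_d; equal weights (0.5, 0.5)
    gave the best precision-recall in the reported experiments."""
    return w_s * c_s + w_d * c_d

D = [[0.1, 0.9, 0.8],
     [0.7, 0.2, 0.9],
     [0.8, 0.6, 0.3]]
score = view_pair_similarity(D)   # picks 0.1, 0.2, 0.3
```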
  • the above-mentioned 2D descriptors can be used on any 2D drawing, projection, sketch, etc. to create a database of 2D descriptors, associated with drawings or models, that is capable of being searched.
  • FIG. 20 and FIG. 21 illustrate the integration of our search system with Unigraphics.
  • FIG. 22 illustrates the pipeline that takes 3D models as input, converts them into 2D views, and then uses 2D algorithms to search for similar 3D shapes. This is an approach from 3D to 2D and back to 3D.
  • FIG. 23 illustrates the pipeline that takes 2D input, converts it into 3D models, and uses a 3D comparison and search algorithm. It is noted that the search system presented herein is very easily integrated with any CAD system. A user can use either sketches or 3D models created by the CAD system to query a database. In this case, the 3D models are quickly converted into 2D views using the methods and techniques described above.
  • FIG. 24 shows the user interface of our prototype system in which users can express their concept freely. Their emphasis on some views can be realized by adjusting the weights.
  • the sketch based user interface allows users to express their attention freely. People tend to sketch objects with fragmented boundary contours and few other lines that are not very geometrically accurate. The most frequently chosen views are not characteristic views, but instead ones that are simpler to draw (front, side, and top views). Users can also specify weights to emphasize certain views of the shape. In this way, the similarity expressed in Equation (8) can be modified as
  • the retrieval process is robust to sketchy hand-drawn input.
  • the similarity measurement is based on a statistical method, in which a large number of points are sampled on the edges of input images. This process is insensitive to noise. Therefore, irregular or sketchy input will not greatly influence the retrieval performance.
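To make the statistical sampling idea concrete, here is a hypothetical sketch of a distance-histogram descriptor: many point pairs are sampled from the drawing's edge pixels and their pairwise distances are histogrammed. Because the descriptor is a statistic over thousands of samples, a few noisy edge pixels barely change it, which is the robustness property described. The sample and bin counts are illustrative choices, not values from the text.

```python
import random

def distance_histogram(edge_points, n_samples=2000, n_bins=32):
    """Normalized histogram of pairwise distances between sampled edge points."""
    pts = [random.choice(edge_points) for _ in range(2 * n_samples)]
    dists = []
    for k in range(n_samples):
        (x1, y1), (x2, y2) = pts[2 * k], pts[2 * k + 1]
        dists.append(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)
    d_max = max(dists) or 1.0          # guard against a degenerate input
    hist = [0] * n_bins
    for d in dists:
        hist[min(int(n_bins * d / d_max), n_bins - 1)] += 1
    return [h / n_samples for h in hist]  # scale-normalized descriptor
```

Normalizing by the maximum sampled distance makes the descriptor invariant to the drawing's scale, and the histogram itself is invariant to rotation.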
  • FIG. 25 shows the framework of our implemented sketch user interface and its visual appearance: (a) is a 3D model retrieval example using freehand sketches; and (b) is a 2D drawing retrieval example using freehand sketches. In fact, users are also allowed to retrieve 2D drawings or 3D models using a 2D drawing or a 3D model.
  • One system provides several feedback mechanisms for users to interactively refine retrieval results.
  • One method is to provide weight value adjustment for orthogonal views.
  • a 3D model may be described by three orthogonal views. Different views reflect certain shape characteristics from different perspectives.
  • users are allowed to emphasize certain views by adjusting their weights as indicated above. This interaction is illustrated by FIG. 26.
  • users can refine the results by assigning a higher weight to the views that play a more important role in the desired shape. The larger the weight value is, the more important the view will be. Users can repeat this process to refine the searched results.
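A weighted per-view dissimilarity in the spirit of the modified Equation (8) might look like the following sketch; the equation itself is not reproduced here, and the linear weighting is an assumption for illustration.

```python
def weighted_view_dissimilarity(view_dists, weights):
    """Combine per-view distances (e.g. front, top, side) with user weights.

    A larger weight makes the corresponding view matter more. Weights are
    normalized by their sum, so only their relative sizes matter.
    """
    total_w = sum(weights)
    return sum(w * d for w, d in zip(weights, view_dists)) / total_w
```

Setting one weight to zero ignores that view entirely, which mirrors the interaction where a user emphasizes only the views that matter for the desired shape.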
  • Another method provides for editing of the initial sketch input. In some cases, just adjusting weight values is not enough, because the initial sketch is not sufficient to represent the desired models, and sometimes users make mistakes. Enabling a user to edit the initial sketch input is an efficient way to refine the search results, especially when the search results can be displayed dynamically as the sketch is modified. This interaction is illustrated in FIG. 27. Users can modify the input sketches according to the retrieved results.
  • Yet another method provides for interacting with the retrieved results.
  • retrieved results are used to help users to check and modify the improper input.
  • a third approach can also be used to edit the views of the retrieved 3D models or 2D drawings to improve and refine the retrieval results.
  • the 2D views of a 3D model can be generated automatically. Therefore, users can input a simple coarse sketch first, find similar 3D models, and use the generated views as the input for another round of sketching. Based on the generated views, users can edit them according to their shape concepts. Because the generated views are more regular than the hand-drawn ones, this kind of feedback interaction is expected to achieve better performance.
  • the feedback process can be illustrated by FIG. 28. In contrast with the previous feedback mechanism, this feedback mechanism allows users to edit the views of the retrieved models.
  • users can provide an initial input and examine the results of the search.
  • a user can then either refine the given results using, for example, the techniques described above, or provide additional input to change the nature of the search. For example, a user could jump to another shape after examining the initial search results, or could input a new or changed parameter to obtain a subset of the results that more closely meets the user's needs and/or is arranged in a hierarchy according to those needs.
  • the term "parameter" can include, among other data or information, a 2D or 3D representation of all or part of an object.
  • a user could also provide another sketch input for the system to search or cross-reference among prior search results.
  • a user could also provide additional details to the initial sketch, or erase existing features, and conduct the search anew or obtain a subset of the set of results more closely meeting the user's needs and/or arranged in a hierarchy according to the user's needs. For example, the user could add an aperture or erase a line and conduct the search again. Other methods of changing the direction and character of the search are contemplated.
  • users can retrieve 2D drawings or 3D models in a coarse-to-fine way. Generally, it is not easy for users to find proper weight values or modify the sketches in a single pass; users may have to repeat this process several times. To accelerate this process, a dynamic retrieval mechanism is provided, as FIG. 29 shows. The system monitors the user's behavior continuously.
  • the shape can be represented at global and local level.
  • the global representation is extracted based on the overall information, without considering the features of the sketch or view.
  • features are extracted using feature recognition methods and representations are generated based on the relationship of different features.
  • the following figures show the shape distribution of the overall view and those of features identified from the view.
  • the local features and models are indexed.
  • the index is built on top of a commercial database, such as Oracle, SQL Server, or Sybase.
  • the search algorithm uses the overall representation first, and then uses local features to improve the search results. Different feature vectors are extracted and stored in the database; therefore, after searching with one feature vector, a user can refine the results with another feature vector. In other words, a multi-step search method improves the effectiveness of the search.
  • In the area of searching, it is also possible to build a searchable database of 3D models and 2D drawings that includes not only the shape descriptors described, but also other characteristics of the models or drawings, such as size, shape, weight, or any other type of performance characteristic, like horsepower.
  • the system may also return the other characteristics of the models or drawings. It is possible to further refine the results by combining the techniques described above with further refinements of the associated characteristics such as size, weight, etc.
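A hypothetical sketch of refining shape-ranked results by their associated characteristics follows; the attribute names and predicates are illustrative, not from the source.

```python
def refine_by_attributes(ranked_entries, constraints):
    """Keep only entries whose stored characteristics satisfy every constraint.

    ranked_entries: shape-ranked list of dicts, e.g. {"name": ..., "weight": ...}.
    constraints: mapping of attribute name to a predicate on its value.
    """
    return [entry for entry in ranked_entries
            if all(pred(entry.get(attr)) for attr, pred in constraints.items())]
```

Because the input list is already ordered by shape similarity, the filtered output stays ranked while dropping entries that fail the size, weight, or other attribute constraints.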
  • a library is built to contain an arbitrary number of 3D models or 2D drawings.
  • the library can contain a mixture of models and drawings as well as other information.
  • the shape descriptor operators mentioned above can be used to generate a descriptor to uniquely identify each of the models or drawings; these descriptors can then be associated with the particular models or drawings that the descriptors were derived from.
  • a user provides input to the system which can be in the form of a model, drawing or sketch input at which point the system determines a unique descriptor for the input as described above.
  • the term "unique descriptor" or “descriptor” can refer to a single value, or a set of values, that are associated with a given drawing, model, or input.
  • the system can then search the library for a model or drawing descriptor that approximates the input using the methods described above.
  • the system returns a candidate model or drawing, or a set of candidate models or drawings, that are similar to the descriptor of the user input.
  • the system can further provide methods described above that provide interactivity to assist the user in further refining the search results.
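The library search loop described in the bullets above can be sketched as follows; the Euclidean descriptor distance is a simplifying assumption, and the entry names are illustrative.

```python
def descriptor_distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def search_library(library, query_descriptor, top_k=5):
    """library: list of (name, descriptor) pairs; returns the top_k closest names."""
    ranked = sorted(library,
                    key=lambda entry: descriptor_distance(entry[1], query_descriptor))
    return [name for name, _ in ranked[:top_k]]
```

The query input (model, drawing, or sketch) is first reduced to a descriptor by the same operators used to index the library, and the stored entries are then ranked by descriptor distance.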
  • ShapeLab 2D & 3D shape retrieval system
  • FIG. 30 shows the framework of our implemented sketch user interface and its visual appearance: (a) is a retrieval example based on a drawing query; and (b) is a retrieval example based on a freehand sketch query.
  • a feedback mechanism is implemented to support a coarse-to-fine retrieval process. Once some drawings are retrieved, users can begin a new round refinement by selecting a retrieved drawing and modifying it.
  • FIG. 31 shows two retrieval examples using the two proposed methods respectively. The robustness of the proposed methods is tested by analyzing the similarities among similar drawings with certain differences.
  • FIG. 32 shows several similar drawings and their descriptor histograms, and these drawings are listed from top to bottom according to their similarity.
  • the fourth column shows the 2D shape histograms of the corresponding drawings in the first column.
  • an edge enhancement procedure is conducted to detect the edges included in a drawing image. During the scanning process, noise is unavoidably introduced. From the histograms in FIG.
  • the Precision-Recall curve is the most common way to evaluate the performance of a retrieval system. Recall measures the ability of a system to retrieve the desired objects, while precision measures the ability of a system to weed out what users do not want. Definitions of precision and recall are illustrated in FIG. 33. To compare the performance between the methods presented herein and other methods, the methods proposed by Chen et al. and Funkhouser et al. are implemented even though the two methods are used to compute the similarity of the silhouettes between two 3D shapes. In addition, to demonstrate the difference between contour-based shape matching and drawing-based shape matching for 2D drawing retrieval, the methods presented herein are also used to extract the descriptor of the contour information of a 2D drawing for retrieval purposes.
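The standard definitions used for this evaluation can be stated directly in code: precision is the fraction of retrieved items that are relevant, and recall is the fraction of relevant items that were retrieved.

```python
def precision_recall(retrieved, relevant):
    """Compute (precision, recall) for one retrieval result."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

Sweeping the number of retrieved items and plotting the resulting (recall, precision) points yields the precision-recall curves used in the comparisons below.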
  • FIG. 34 shows the precision-recall curves of these mentioned methods, in which "2DSH" represents the 2D shape histogram method, "2.5DSHT" represents the 2.5D spherical harmonics transformation method, "2DSH-Contour" represents the performance of "2DSH" when only the contour of a 2D drawing is considered, "2.5DSHT-Contour" represents the performance of "2.5DSHT" when only the contour of a 2D drawing is considered, "LF-Contour" represents the performance of the light-field method proposed by Chen et al. when it is used to retrieve a 2D drawing, and "2DSHT-Contour" represents the performance of the 2D analog of the spherical harmonics proposed by Funkhouser et al. when it is used to retrieve a 2D drawing.
  • the 2.5D spherical harmonics transformation method is better than the 2D shape histogram method.
  • the 2.5D spherical harmonics transformation method has been found to be good at differentiating drawings with readily apparent structure shape, such as the retrieval example shown in FIG. 31(b) while the 2D shape histogram method is good at differentiating 2D drawings with similar contour but different inside structure, such as the retrieval example shown in FIG. 31 (a). Therefore, in practice, the two methods are provided together so that higher retrieval accuracy is achieved by allowing users to switch between the two methods interactively.
  • the 2.5DSHT has the best discrimination ability while the 2DSH is the second best method.
  • Similar to the 2D drawing retrieval case, the 2.5D spherical harmonics transformation method has been found to be good at differentiating models with readily apparent structural shape, such as table- or chair-like models, while the 2D shape histogram method is good at differentiating models with similar contours but different inside structure.
  • FIG. 36 shows some retrieval examples. The two proposed methods are both rotation invariant descriptors and provide a compact representation of a 2D drawing.
  • the shape matching problem is reduced to several simple steps, such as sampling, normalization, and distance computation between descriptors, and there is no need to determine a common pose and find feature correspondences between different drawings.
  • the 2.5D spherical harmonics method needs fewer dimensionalities (i.e., fewer signatures) than the 2D shape histogram method does.
  • the bandwidth is 64 in the 2.5D spherical harmonics method, i.e., the descriptor of a drawing contains 64 signatures.
  • the 2D shape histogram contains more than 200 signatures.
  • the 2D shape histogram method allows users to emphasize certain local shapes by specifying a high sampling rate upon these shapes, while it is difficult for the 2.5D spherical harmonics method to do this.
  • Other advantages of the proposed two methods are their simplicity and fast speed.
  • the general retrieval time is less than 0.1 seconds, and the indexing process for 1,000 drawings takes less than ten minutes, since the I/O access is time-consuming.
  • the computing is done on a PC with a 2.4 GHz CPU and 512 MB of RAM.
  • a user can enter two-dimensional input, such as a sketch or drawing, and can search a library of two-dimensional drawings and/or three-dimensional models in order to retrieve the drawing(s) or model(s) that most closely approximate the input.
  • Such input and the library information are compared in two-dimensional space, based on descriptors for two-dimensional drawing or projections.
  • the user can weight or emphasize one projection or feature in searching or refining a search.
  • the user can alter or edit a drawing or model found in searching so as to refine a search, and can choose features of search results so as to narrow the results further.
  • model or drawing information is entered by the user.
  • Box 110 refers to the calculation or computation of an appropriate bounding box for the inputted information, which may include a determination of an appropriate pose as discussed above.
  • views of the inputted model or drawing are generated using the bounding box and/or pose, which views may be full views, contour views and/or silhouette views.
  • Box 130 refers to similarity measurement, which may include determination of appropriate descriptors for the inputted information as discussed above and comparison of such descriptors to descriptors for models and/or drawings included in the accessed library. Once such a measurement or comparison is made, appropriate models or drawings are outputted from the library to the user's screen, to disk, to hard copy, or otherwise as the user may desire.
  • the steps may be implemented by hardware and/or software appropriately configured to perform them.
  • a library 200 having information of three-dimensional models and a library 210 having information of two-dimensional drawings may be provided. These libraries may be provided in one or more memory units or data storage apparatuses. Models from library 200 are analyzed and two-dimensional drawings are generated from them, as depicted at block 220. Generation of the drawings of a particular model may include determination of an appropriate pose of the model, e.g. in an attitude commonly viewed by or drawn by engineers, as depicted at block 222, and generating views of the posed model, e.g. along three orthogonal axes determined by the pose, as depicted at block 224. When drawings are generated of all desired models, the drawings can be collected in a library 230. In a particular embodiment, library 230 can be a part of or associated with library 210 of other two-dimensional drawings, as suggested in FIG. 3.
  • an input box is depicted at block 240.
  • Input can include sketches (block 242), two-dimensional drawings (block 244) and/or three-dimensional models (block 246). Sketches or other drawings can be given particular parameters, as indicated in block 250, prior to, at, or after input into the system.
  • the inputted information can be edited by the user, if desired.
  • Three-dimensional model input can be posed (if necessary) and two-dimensional views are generated.
  • Block 260 depicts a method of creating descriptors for two-dimensional drawings, such as those methods discussed above. Descriptors can be created for drawings and models from libraries 200 and 210, as well as for two-dimensional drawing(s) inputted by the user.
  • the descriptor(s) of inputted drawing(s) are compared to those of library drawings or views of library models, as depicted in block 270, and the most similar are shown or outputted to the user, as depicted at block 280.
  • the degree of similarity required for output can be determined in advance by the user, as for example by a request to output the forty most similar models or drawings, or can be set in or by the system.
  • the retrieved results can be displayed as the drawings of library 210 or of the pseudo-two-dimensional drawing library 230, as depicted at block 282, or can be displayed as three-dimensional models as depicted at block 284. If the user desires, the set of output results can be further queried for particular features, or by searching further with a weighted or emphasized view as discussed above, or otherwise further analyzed.

Abstract

There are disclosed methods to provide stable pose determinations of various three dimensional shapes. Methods are also disclosed for determining multiple, unique drawing descriptors for two dimensional drawings, and for obtaining intermediate three dimensional representations of two dimensional drawings as one way to determine the descriptor. Methods are also disclosed to provide for searching of two dimensional drawings and three dimensional shapes using user-defined input, which may be a drawing or sketch. User interactivity is provided to further refine search results.

Description

METHODS FOR RETRIEVING SHAPES AND DRAWINGS
CROSS REFERENCE TO RELATED APPLICATIONS
The present application claims the benefit of United States Provisional Patent Application Serial Nos. 60/631,468 filed November 29, 2004, and 60/664,005 filed March 21, 2005, with both of said priority applications being incorporated herein by reference in their entirety. This application is related to "A Retrieval Framework Bridging the Gap between 2D Drawings and 3D Models" by Jiantao Pu and Karthik Ramani, "On Visual Similarity Based 2D Drawing Retrieval" by Jiantao Pu and Karthik Ramani, and "Similar 3D Shape Retrieval Based on Multi-Level Details" by Karthik Ramani and Pu Jiantao, all of which are hereby incorporated by reference in their entirety.
BACKGROUND
In the field of engineering, commonly drawings and models of parts or structures are used for explanation of how the apparatus works, for indication of how to construct the apparatus, or for other purposes. Such drawings and models may be graphically represented and stored in electronic versions, as in a computer system, and such systems may be used to operate on or analyze them.
Databases of such drawings and models may have value insofar as they provide the engineer building blocks for a project or otherwise provide information for the engineer. However, searching for and retrieving such information can be difficult and time consuming, and can provide results (e.g. parts or shapes) that are not what is sought by the engineer.
BRIEF DESCRIPTION OF THE DRAWINGS
Although the characteristic features of this invention will be particularly pointed out in the claims, the invention itself, and the manner in which it may be made and used, may be better understood by referring to the following description taken in connection with the accompanying figures forming a part thereof.
FIG. 1 shows the projection of 2D images from a 3D model.
FIG. 2A and 2B are representations of an embodiment of a method using a 3D model projected onto a 2D plane.
FIG. 3 is a schematic representation of an embodiment of a drawing and/or model analysis and retrieval system.
FIG. 4 shows a comparison of different posing methods.
FIG. 5 shows a comparison of different posing methods.
FIG. 6 is a diagram representing four transitions to view generation, according to an example embodiment.
FIG. 7 shows orientations of an example 3D model.
FIG. 8 shows examples of polygonal area arrangements.
FIG. 9 shows an example of contact area calculation.
FIG. 10 shows embodiments of two different tessellation schemes, one along latitude and longitude and the other based on regular polyhedra.
FIG. 11 shows example orientation determinations.
FIG. 12 shows an example user interface with an example model and example drawings.
FIG. 13 is a flowchart showing an embodiment of a user-feedback method.
FIG. 14 shows an example of extending a two-dimensional drawing into three dimensions.
FIG. 15 shows an example of extending a two-dimensional drawing into three dimensions.
FIG. 16 shows a plot of efficiency and performance tradeoff.
FIG. 17 shows an example of a biased sampling of an example embodiment.
FIG. 18 shows the results of a retrieval discrimination evaluation.
FIG. 19 shows the average precision-recall curves for different methods.
FIG. 20 shows a diagram of an example model.
FIG. 21 shows a listing of example search results.
FIG. 22 shows an embodiment of a model analysis and comparison method.
FIG. 23 shows an embodiment of a model generation and comparison method.
FIG. 24 shows an example user interface with an example model and example drawings.
FIG. 25 shows embodiments of an example user interface with sketches and retrieved results.
FIG. 26 is a flowchart showing an embodiment of a user-feedback method.
FIG. 27 is a flowchart showing an embodiment of a user-feedback method.
FIG. 28 is a flowchart showing an embodiment of a user-feedback method.
FIG. 29 is a flowchart showing an embodiment of a user-feedback method.
FIG. 30 shows an embodiment of a drawing retrieval system.
FIG. 31 shows example retrieval results.
FIG. 32 shows example descriptor results.
FIG. 33 shows the concepts of precision and recall.
FIG. 34 illustrates a retrieval discrimination evaluation.
FIG. 35 depicts the recall-precision curves for an example 3D model.
FIG. 36 shows an embodiment of a user interface with sketches and retrieved results.
DETAILED DESCRIPTION
For the purposes of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the claims is thereby intended, and alterations and modifications in the illustrated devices, and further applications of the principles of the disclosure as illustrated therein are herein contemplated as would normally occur to one skilled in the art to which the disclosure relates.
The techniques and methods disclosed herein are, generally speaking, aimed at providing ways to effectively and efficiently analyze and access drawings and models, and in particular embodiments drawings and models of apparatus such as consumer products or machine parts. In these methods, particular emphasis is placed on proper posing of three-dimensional (3D) models, creating two- dimensional (2D) projections and/or descriptors of them that can be compared to other 2D drawings or descriptors, comparing input parameters to such models, projections and/or descriptors, and presenting models or drawings that most closely meet such input parameters.
The techniques presented herein enable a user to search for similar models from a large repository of 3D models and 2D drawings using 2D input. As used herein for convenience of description, the term "2D input" is intended to encompass any 2D depiction including, but not limited to, sketches drawn by hand or with the aid of a computer, drawings, flowcharts, diagrams, tablets, etc.
Engineers usually express their concept of a 3D shape in 2D views. This process can be illustrated by FIG. 1. The front view reflects the left-right and top-down relationships of a 3D model's shape, the top view its left-right and front-back relationships, and the side view its top-down and front-back relationships. With these relationships and views, engineers can manufacture the desired mechanical parts correctly without missing any information.
Among the areas addressed herein are methods of (1) determining projection plane and directions, (2) generating views, and (3) measuring similarity. Referring generally to FIGS. 2(a) and 2(b), one way to determine the projection planes and directions is to find a robust bounding box that is in accordance with a human's perception mechanism. The aim is to simulate the drawing process in which engineers express a 3D shape concept on a piece of paper. For general purposes, 3D polygon meshes are used to represent 3D models, and shape contours are needed for view generation. Once 2D views are obtained, the similarity between corresponding views can be computed.
General Description
An embodiment of an architectural arrangement for shape generation, orientation, and translation is shown schematically in FIG. 3. The components of the architecture are implemented in a machine-accessible and/or readable medium and are accessible over a network. The network may be wired, wireless, or a combination of wired and wireless. The various components and techniques include: (1) 2D drawing generation; (2) 2D drawing descriptors; (3) user queries; and (4) shape retrieval results.
(1) "2D Drawing Generation": This bridges the gap between 3D models and 2D drawings by representing 3D models in the form of three orthogonal drawing-like views. Two functions, 3D pose determination and 2D view generation, may be included. The first function unifies the space between 2D drawings and 3D models by finding a set of robust orthogonal orientations (a pose) along which designers usually depict 3D models in the form of 2D drawings. The second function sets up the correspondence between 2D drawings and 3D models by generating orthogonal drawing-like views or projections from 3D models along the pose orientations. Consequently, both 3D models and 2D drawings can be compared as 2D drawings. The dashed box in FIG. 3 illustrates this generation. A more detailed explanation is provided herein and below.
(2) "2D Drawing Descriptors": To describe the shape in a 2D drawing, descriptors are defined. Two embodiments of methods are presented to extract rotation-invariant descriptors of a 2D drawing. Both of these methods can provide a compact representation of a 2D drawing. These methods are efficient, have good discriminative ability, and can be applied to vector drawings and scanned drawings. Since the two proposed methods are insensitive to noise and the similarity measurements are conducted in 2D space, they can also handle the drawings formed by freehand sketches. A more detailed explanation is presented herein and below.
(3) "User Query": To retrieve 3D models and 2D drawings, three query methods are provided for designers, that is, by 2D drawings, 3D models, and freehand sketches. For example, designers can retrieve 3D models by inputting 2D drawing information, 3D model information, or freehand sketch information since the 2D space is the final unified space for comparison. It is easier for designers to express shape intention in 2D space than in 3D space.
(4) "Retrieved Results": The final retrieved results can be 2D drawings or 3D models, and they are ranked according to their similarities with the input query.
2D Drawing Generation
Turning now to the matter of posing, or orienting, a model or a 3D shape, there are various representations of such shapes or models, such as solids, parametric surfaces, point clouds and polygon meshes. A 3D object as a polygonal surface model with uniform mass distribution is one example considered herein. For smooth curved surfaces, the object can be approximated by multiple polygon patches.
On the basis of this representation, the pose determination step can generally be described as follows. Where a model is not built with CAD software and therefore does not have pre-defined viewing directions, such as with models built with digitizing tools such as scanners, we have to determine the desired pose that is in accordance with most designers' perceptions. Thus, given a 3D shape S represented by a series of polygons p_i (0 ≤ i ≤ n), a bounding box is found with three orthogonal axes (X, Y, Z) that encloses S and meets the criterion that similar shapes have similar bounding box orientations, which should be in accordance with human perception. It is noted that the same bounding box with different axes means a different pose, i.e., given an object, the orientation with orthogonal axes (X, Y, Z) is different than the orientation with axes (-X, Y, Z). The criterion can be used to align similar objects correctly and assure that similar objects have similar orientations. In a particular embodiment, a unique bounding box that encloses a three-dimensional model tightly is found so as to determine projection planes and directions. The side faces of the bounding box can be regarded as the projection planes, while their normals are the projection directions.
To find a bounding box, a technique dubbed maximum normal distribution (MND) can be used. As the name suggests, the maximum normal distribution (i.e. the greatest of the distributions of normals to polygons of a 3D shape) is used as one of the principal axes.
The polygons of a representation of a 3D shape are separable into triangles. As an initial matter, the normal direction N_i for each triangle Δp_i q_i r_i is computed and normalized. It is the cross product of any two edges:

N_i = ((q_i − p_i) × (r_i − p_i)) / |(q_i − p_i) × (r_i − p_i)|
Second, the area a_i of each triangle of the polygonal representation of the 3D shape is calculated, and the sum of the areas of all triangles with the same or opposite normals is computed. Here the normals that are located in the same direction belong to a particular distribution.
Next, the three principal axes are determined. From all of the normal distributions, the normal distribution with the maximum area is selected as the first principal axis b_u. To get the next principal axis b_v, we can search the remaining normal distributions and find the one that has the maximum area and is orthogonal to the first normal. Naturally, the third axis b_w can be obtained as the cross product of b_u and b_v: b_w = b_u × b_v.
Finally, the center and the half-length of the bounding box are calculated. As an example, the center and half-length can be defined by Equation (15) and half the longer diagonal length of the bounding box. In FIG. 4, the boxes shown in bottom row are obtained by the MND method. For the similar models shown in FIG. 5, the bounding boxes are consistent with each other.
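A simplified sketch of the MND steps above follows: compute each triangle's normal and area, bucket same or opposite normals together, pick the largest bucket as the first axis, the largest orthogonal bucket as the second, and their cross product as the third. The quantization granularity and the orthogonality tolerance are illustrative assumptions, not values from the text.

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def tri_normal_area(p, q, r):
    """Unit normal and area of triangle pqr (normal = cross product of two edges)."""
    n = cross([q[i] - p[i] for i in range(3)], [r[i] - p[i] for i in range(3)])
    length = dot(n, n) ** 0.5
    return tuple(c / length for c in n), 0.5 * length

def mnd_axes(triangles):
    """Pick three orthogonal axes from the area accumulated per normal direction."""
    areas = {}
    for p, q, r in triangles:
        n, a = tri_normal_area(p, q, r)
        key = tuple(round(c, 3) for c in n)     # quantize the direction
        key = min(key, tuple(-c for c in key))  # same/opposite normals share a bucket
        areas[key] = areas.get(key, 0.0) + a
    ranked = sorted(areas, key=areas.get, reverse=True)
    b_u = ranked[0]
    b_v = next(n for n in ranked[1:] if abs(dot(b_u, n)) < 1e-6)
    return b_u, b_v, cross(b_u, b_v)
```

For a mesh whose largest facets face the Z axis and whose second-largest face the X axis, the sketch recovers Z, X, and their cross product as the three bounding-box directions.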
Maximum normal distribution sorts the normals according to their direction, which can take a relatively long time since each normal is represented as a vector. Thus, for cases where the number of polygons contained in a model is large, an algorithm for bounding box generation without sorting the normals is provided. An approximate method with time complexity O(n) can be used. It is based on eight 2D look-up tables with a space complexity of O(n²). Although a single 3D look-up table would be enough, its space requirement (O(n³)) is too great to be acceptable if the precision is desired to be high. Generally, if the errors between the three elements of some normal vectors are all less than 0.001 radians, then they are regarded as vectors with the same direction. The basic steps of an embodiment of this algorithm are:
(1) Set up eight 2D look-up tables and initialize them as value 0. They are used for the eight coordinate districts divided by the three axes X, Y, Z respectively. Each table is represented by a 2D array with 1000x1000 discretization based on elements.
(2) Put the normals into different tables according to their values along three axes. The position is determined by multiplying 1000 to the elements along X, Y, Z axes respectively. The value is determined by the polygon area with the respective normal. If there are more polygons with same position in the 2D array, then their areas are summed together.
(3) Traverse the eight tables and find the element with the maximum value; the corresponding normal is regarded as the first direction. After this, the value at this position is set to zero. Repeat this step to find the next maximum value that satisfies the constraint that its normal is orthogonal to the normal of the first direction.
(4) The result of the cross product of the above two normals is regarded as the third direction. The three directions can be regarded as the three directions of the bounding box.
In the above process, there is no sorting operation involved. The time complexity is O(n) and the space complexity is O(h²).
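The binning scheme of steps (1)–(4) can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the function name, the reduced resolution h, and the orthogonality tolerance are assumptions, and the input is taken as unit normals paired with their polygon areas.

```python
import numpy as np

def dominant_normals(normals, areas, h=100):
    """Bin unit normals into 8 octant tables of h x h cells (steps 1-2),
    pick the largest-area direction and the largest-area direction
    orthogonal to it (step 3), and take their cross product (step 4)."""
    tables = np.zeros((8, h, h))
    reps = {}  # (octant, row, col) -> a representative normal for that cell
    for n, a in zip(normals, areas):
        octant = (n[0] < 0) * 4 + (n[1] < 0) * 2 + (n[2] < 0)
        row = min(int(abs(n[0]) * h), h - 1)
        col = min(int(abs(n[1]) * h), h - 1)
        tables[octant, row, col] += a        # areas of "same" normals are summed
        reps.setdefault((octant, row, col), n)
    # first direction: cell with the maximum summed area
    first_key = max(reps, key=lambda k: tables[k])
    b_u = reps[first_key]
    tables[first_key] = 0.0
    # second direction: maximum-area cell whose normal is orthogonal to b_u
    orth = [k for k in reps if abs(np.dot(reps[k], b_u)) < 1e-3]
    b_v = reps[max(orth, key=lambda k: tables[k])]
    b_w = np.cross(b_u, b_v)                 # third direction
    return b_u, b_v, b_w
```

No sorting of the normals is involved; each polygon is visited once and the tables are scanned a constant number of times.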
For models without readily apparent normal distributions, as shown in FIG. 6, an Inertial Principal Axes (IPA) method can be used to obtain the bounding box. When there is no readily apparent normal distribution for a 3D shape, the model has an arbitrary surface, and it can be difficult to determine the projection directions and planes. In this case, one can use the mass distribution of the 3D shape to find a stable bounding box. The validity of this criterion has been verified on more than 1700 models in our 3D model library.
In another embodiment, an orientation determination method based on stability analysis can be used. It arises from the premise that a human being tries to place an object in its most stable orientation, and that similar objects will have similar stable orientations. This concept can be explained by the examples shown in FIG. 7. It is natural for a human being to place the "cup" object in the orientation shown in FIG. 7(A), because this position is more stable than the one in FIG. 7(B). To determine the stable orientations, static equilibrium conditions and concepts of physical stability can be used to determine pose orientations. Given a 3D polygon model and a horizontal plane Pzero, there are three kinds of contact between this model and the plane, as FIG. 8 shows. For the first two cases in FIG. 8(a) and 8(b), the contact area is the summed area of all the contact polygons since they connect with each other. However, for the case shown in FIG. 8(c), the contact area is formed by dispersed polygons touching the horizontal plane. For the cases where the contact surface area is formed by several non-connected polygons, the area bounded by these polygons is defined as the virtual contact area (VCA). For simplicity, the cases shown in FIG. 8(a) and (b) are regarded as special cases of FIG. 8(c).
For the sake of computational efficiency, an embodiment of the VCA disclosed herein uses an approximating strategy. For generality, this problem can be represented as shown in FIG. 9. Since it is not easy to determine the region of the VCA, the average area of two bounding boxes is used to approximate the contact area. The first box is an axis-aligned bounding box A whose corners are determined by the minimum and maximum coordinates along the x and y axes:
A = { top = max{y_i}, left = min{x_i}, bottom = min{y_i}, right = max{x_i} | (x_i, y_i) ∈ p_m, 0 ≤ m ≤ n }    (2)
The second bounding box is a quadrilateral B whose four corners pass through the centroids of the four outermost polygons. The outermost polygons are determined by the distances from the respective four corners of the bounding box A to the center of the polygons.
The final contact area C is approximated as the average of quadrilateral A and B:
C = ( area(A) + area(B) ) / 2    (3)
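The approximation of Equations (2)–(3) can be sketched in a few lines. This is an illustrative Python sketch with assumed inputs: the contact points and the four corners of quadrilateral B are taken as given, and B's area is computed with the standard shoelace formula.

```python
def contact_area(points, quad_b):
    """Approximate the virtual contact area per Equations (2)-(3).
    points: (x, y) contact coordinates; quad_b: four corners of the
    quadrilateral B (e.g., centroids of the four outermost polygons)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # axis-aligned bounding box A (Equation (2))
    area_a = (max(xs) - min(xs)) * (max(ys) - min(ys))
    # shoelace formula for quadrilateral B
    area_b = 0.0
    for i in range(4):
        x0, y0 = quad_b[i]
        x1, y1 = quad_b[(i + 1) % 4]
        area_b += x0 * y1 - x1 * y0
    area_b = abs(area_b) / 2.0
    return (area_a + area_b) / 2.0   # Equation (3)
```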
As noted above, generally a 3D shape S can be represented as a 3D mesh composed of a series of discrete 2D polygons p_i (0 ≤ i ≤ n−1) with uniform density. In order to efficiently figure out the contacted polygons of a 3D object along a certain direction, a triple S = { p_i | (N_i, A_i, D_i), 0 ≤ i ≤ n−1 } is used to represent a 3D shape, in which N_i represents the normal of polygon p_i, A_i represents the area of polygon p_i, and D_i represents the signed distance between the mass center C and the polygon p_i. In other embodiments, D_i can be the distance between any predefined origin in 3D space and the polygon p_i. The mass center is adopted as the origin for the sake of simplicity.
To transform a 3D object S into a series of triple representations S = { p_i | (N_i, A_i, D_i), 0 ≤ i ≤ n−1 }, the mass center C of the object as well as the normal N_i, the area A_i and the distance D_i for each polygon p_i are calculated. Given a polygon mesh which contains n triangles defined as Δp_iq_ir_i, where p_i, q_i and r_i are the vertices of triangle i, 0 ≤ i ≤ n−1, if the area of triangle i is represented as A_i, then the total area of the convex hull can be denoted as A_H = Σ_{i=0}^{n−1} A_i and the centroid of triangle i is C_i = (p_i + q_i + r_i)/3. The centroid of this polygon mesh can be represented as

C = ( Σ_{i=0}^{n−1} A_i C_i ) / A_H    (4)

where A_i = 0.5 × | p_iq_i × r_iq_i |.
The normal N_i is the cross product of any two edges:

N_i = ( p_iq_i × r_iq_i ) / | p_iq_i × r_iq_i |    (5)

The signed distance D_i is equal to the signed projection of the vector from vertex p_i to the mass center C along the normal N_i:
D_i = p_iC · N_i    (6)
Thus, values for each normal vector N_i and distance D_i are obtained.
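Equations (4)–(6) can be sketched for a triangle mesh as follows. This is an illustrative Python sketch: triangles are given as vertex triples, the mass center is taken as the area-weighted centroid of Equation (4), and degenerate (zero-area) triangles are not handled.

```python
import numpy as np

def triple_representation(triangles):
    """For each triangle (p, q, r), compute the normal N_i, the area A_i and
    the signed distance D_i from the mass center C (Equations (4)-(6))."""
    tris = [tuple(np.asarray(v, float) for v in t) for t in triangles]
    cross = [np.cross(q - p, r - p) for p, q, r in tris]
    areas = [0.5 * np.linalg.norm(c) for c in cross]
    normals = [c / np.linalg.norm(c) for c in cross]
    centroids = [(p + q + r) / 3.0 for p, q, r in tris]
    # mass center: area-weighted centroid of the mesh (Equation (4))
    C = sum(a * c for a, c in zip(areas, centroids)) / sum(areas)
    # signed projection of the vector from p_i to C along N_i (Equation (6))
    dists = [float(np.dot(C - p, n)) for (p, _, _), n in zip(tris, normals)]
    return C, list(zip(normals, areas, dists))
```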
To obtain the orientation along which the virtual contact area is the maximum, the polygons that have the same normal and the same distance to the mass center are used to calculate the virtual contact area. The normal direction associated with the maximum virtual contact area is selected. This obtained normal is also the first principal axis b^u of the 3D object orientation. The C-style pseudocode for this step is summarized in Table 1.
Table 1: Orientation determination algorithm
*1 for ( i = 0; i < n; i++ ) {
*2     TA_i = A_i; D0 = D_i;
*3     for ( j = i+1; j < n; j++ ) {
*4         if ( ( N_j == N_i ) && ( D_j == D0 ) ) {
*5             TA_i += A_j;
*6         }
*7     }
*8 }
*10 TA_k = max { TA_i , 0 <= i <= n-1 };
*11 N_final = N_k;
TA_i is the summed area along the normal of the ith polygon. D0 is an intermediate variable. "*3~*8" compute the summed area TA_i over polygons with the same normal and the same distance. "*10" finds the normal associated with the maximum summed area; its computation complexity is O(n). It can be concluded from the above pseudocode that the total computation complexity is O(2n + 0.5n²), i.e., O(n²). When an object contains very many polygons (e.g., 10^5~10^6), the computation load will be heavy.
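Table 1 translates directly into runnable form. The sketch below is illustrative: exact floating-point equality from the pseudocode is replaced by a small tolerance, and the inner loop starts at j = i+1 so that no area is counted twice.

```python
def orientation_normal(normals, areas, dists, tol=1e-6):
    """O(n^2) version of Table 1: sum the areas of polygons sharing a normal
    and a distance, and return the normal with the maximum summed area."""
    n = len(normals)
    TA = [0.0] * n
    for i in range(n):
        TA[i] = areas[i]
        for j in range(i + 1, n):
            same_n = all(abs(a - b) < tol for a, b in zip(normals[i], normals[j]))
            if same_n and abs(dists[i] - dists[j]) < tol:
                TA[i] += areas[j]
    k = max(range(n), key=lambda i: TA[i])   # "*10"
    return normals[k]                        # "*11"
```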
To accelerate this process, a look-up table-based algorithm can be used with a computation complexity of O(n+C), where C is a constant. As an approximating approach, this method divides the normal space NS = {(x_i, y_i, z_i)} into eight different regions (octants) along the x, y and z axes, and each region is further divided into m×m subspaces with uniform intervals along the x and y axes. This process is similar to a tessellation that divides a unit sphere uniformly along the latitude and longitude, as FIG. 10(a) shows. Each of the octants has m² surface patches. The normals that fall into the same surface patch are regarded as the same normal. Therefore, instead of comparing every two normals of all the polygons, all the normals are classified by traversing each polygon and checking which surface patch it belongs to. In this process, these patches act as a look-up table. This modified algorithm is summarized in Table 2 in the form of C-style pseudocode.
Table 2: An improved algorithm based on a look-up table
*1 Initialize( NormalSpace[8][m][m] );
*2 for ( i = 0; i < n; i++ ) {
*3     row = (int)( N_i.x * m ); col = (int)( N_i.y * m );
*4     D0 = D_i;
*5     Bin = WhichOctant( N_i );
*6     if ( D0 == D[Bin][row][col] ) {
*7         NormalSpace[Bin][row][col] += A_i;
*8     }
*9 }
*10 ( Bin, row, col ) = max{ NormalSpace[8][m][m] };
*11 N_final = ComputeNormal( Bin, row, col );
"*1" builds a look-up table that represents the eight discrete normal-space regions NormalSpace[8][m][m]. m is the dividing number along the x and y axes for each region of the octant. "*2~*9" check in which surface patch a normal lies and sum the areas with the same normal and the same distance to the mass center; the computation complexity is O(n). "*10" searches this look-up table and finds the surface patch with the maximum area; the computation complexity is O(8m²). The region in which this patch is located tells us the signs of the x, y and z values of the normal. It is worth noting that for each region only the x and y values are discretized, as "*3" shows, because the z value can be derived from x, y and the region (as "*11" shows). The total computation complexity is O(n+8m²); it depends on the precision along the x and y axes. If the precision is fixed, the computation complexity is almost the same for all objects regardless of their complexity.
In this tessellation scheme, the discrete patches that constitute the sphere surface are not uniform. However, as the division resolution improves, the divided patches on the sphere become more similar to each other. A division number of 1000 can be used. The scheme is used because it is easy to decide in which cell a particular surface normal lies, and the projection computation involved in the regular-polyhedron-based method is not needed.
To get the next principal axis b^v of an object orientation, search the look-up table again and find the normal that satisfies two conditions: (a) it has the maximum area; and (b) it is orthogonal to the first principal axis. Naturally, the third axis can be obtained by taking the cross product of b^u and b^v:

b^w = b^u × b^v    (7)
To evaluate the validity of the VCA method, it was tested against 3D models from industry and Princeton's benchmark for 3D model retrieval. At the same time, comparisons with other methods (including the Principal Component Analysis (PCA), Extended Gaussian Image (EGI) and Complex Extended Gaussian Image (CEGI) methods) were conducted. In FIG. 4, (a)~(e) show the orientations for a set of similar objects obtained by the method disclosed herein. It can be seen that similar models have similar orientations. However, (f)~(j) show the orientations obtained by the PCA method, and they differ greatly although the shapes are similar.
The methods disclosed can retrieve orientations more similar to human perception than the EGI and CEGI methods. For example, in FIG. 5, the top row shows the results obtained by the method disclosed, while the bottom row shows the results of the EGI and CEGI methods. The reason is that the introduced VCA concept can incorporate these structural shapes into one uniform framework with the help of stability analysis.
In FIG. 11, some examples from different domains are presented to show the generality of this proposed method. Some objects are formed by different parts, such as (a), (e) and (f). The plant object in (d) shows the robustness of this method since the leaves can be regarded as some noise.
On the basis of these experiments, it is seen that:
(1) Similarity: the methods disclosed herein can find orientations of a 3D model that are in good accordance with human perception, and they obtain similar orientations for similar shapes.
(2) Robustness: the methods are not sensitive to small local shapes or noise. For example, if a line of the cash machine in FIG. 11(b) is changed, the orientation does not change.
(3) Generality: these methods can be used to calculate the orientation of 3D models from many domains.
(4) Efficiency: the total computation complexity is O(n+8m²), in which n is the number of polygons contained in an object and m is the division number along the x and y axes.
PROJECTION AND DESCRIPTORS
Turning now to the matter of projecting 2D images of 3D models onto the planes of a bounding box, the aim is to create multiple 2D representations of a given 3D model. One such representation is analogous to a type of representation engineers use to represent 3D objects using 2D views. This type of representation can be referred to as the full view and is one in which all the features, outside as well as hidden, are displayed. Another type of representation can be used to display only the outside features; this type can be referred to as the silhouette view. Yet another type of view, called the contour view, can be used to depict the edges of the shadow cast by a 3D model. At the contour level, there are three different views along the principal axes; at the silhouette level, there are six different views; and at the full level there are three different views along the principal axes. The representations of contour, silhouette and full level can be determined with the aid of a CAD system using predefined coordinate axes, or could be machine determined using the techniques described. Drawings and models in a CAD system typically have axis systems predefined by a user, and if no such axes exist then the disclosed methods can be used to define the axes. For example, a point cloud may have an arbitrary axis system that can be rotated to provide new axes using the techniques described. After the axis system has been rotated the contour, silhouette and full levels can then be determined.
Turning now to a description of the drawing descriptors, there is described a statistics-based 2D shape distribution that is invariant to the scale, rotation, and orientation of 3D models. The problem of 2D drawing retrieval is related to 2D shape recognition, and can be defined as: given a drawing A and a drawing library L = { B_i | 0 ≤ i ≤ n }, how to compute the similarity distance between A and B_i, i.e., D(A, B_i), and find the k-nearest drawings within a certain tolerance ε.
In one embodiment, after steps are taken to pose and project images of the 3D model, using methods such as those described above, the 3D shape-matching problem is transformed into how to measure the similarity between 2D views, which is illustrated by FIG. 12. A two-dimensional shape distribution method is presented herein to measure the similarity between 2D views. It can be regarded as a kind of derivation from the 3D case. In other words, the similarity between views can be obtained by measuring their 2D shape distributions. Like the 3D case, one embodiment of a process to compute the degree of similarity between 2D shapes can be summarized in three steps.
One step is random sampling on view edges. The views are formed by a series of line segments, some of which may overlap with each other. For the sake of convenience, a random sampling principle is adopted: select a line segment from the view randomly, then pick a point on the line segment randomly and save it into an array named S. During this process, the random number generator plays an important role. It must be able to generate random numbers greater than one million because one million samplings are defined. But the system function rand() on the Windows platform can only generate numbers less than 32768, and therefore a new random generator is designed by calling rand() twice: MyRand() = rand() × 32768 + rand().
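The composed generator and the sampling step can be sketched as below. This is an illustrative Python sketch: Python's random module does not share the Windows rand() limit, so the 15-bit composition is reproduced only to mirror the text, and the segment-list format and function names are assumptions.

```python
import random

RAND_MAX = 32767  # classic Windows C runtime rand() upper bound

def my_rand():
    """Compose two rand()-style draws into one large random number,
    mirroring MyRand() = rand() * 32768 + rand() from the text."""
    return random.randint(0, RAND_MAX) * 32768 + random.randint(0, RAND_MAX)

def sample_point(segments):
    """Pick a random segment, then a random point on it (the simple
    uniform-per-segment scheme; a length-weighted variant appears
    later in the document)."""
    (x0, y0), (x1, y1) = segments[my_rand() % len(segments)]
    t = my_rand() / (32768.0 * 32768.0 - 1)   # t in [0, 1]
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
```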
Another step is shape distribution generation. The Euclidean distance between two randomly sampled points is chosen to measure the shape features of polygons because other distance metrics are designed specially for 3D cases. By summing the numbers of point pairs with the same distance, the 2D shape distribution can be generated. FIG. 13 shows the two distributions formed by the views in FIG. 12. From their visual appearance, the two views differ greatly. The next step is to quantify this difference.
A final step is similarity measuring. Because two different models may be of different sizes, a normalization step has to be taken to measure their difference on the basis of one common standard. Generally, two normalization methods are available: (a) align the maximum D2 distance values, and (b) align the average D2 distance values. For the first normalization method, the maximum values of the two shape distributions are adjusted to one common value, which is used to normalize the shape distribution. The other method uses the mean distance value to normalize the shape distribution. To alleviate the influence of high-frequency noise, the second one is adopted as the normalization standard. The similarity between two views can be obtained by calculating the difference between their distributions in the form of a histogram. The Minkowski distance is used due to its simplicity, which can be expressed as
Similarity = Σ_{i=1}^{n} ( s_i − k_i )²    (8)
where n is the number of histogram bins of the shape distribution curve, and s_i and k_i are the probabilities at a certain distance. The 2D shape distribution approach has the same advantages as the 3D case. It is simple and easy to implement, and it also has some unique properties: (1) insensitivity to geometric noise; (2) invariance to translation, rotation and scaling; and (3) no need to find feature correspondences between models.
To measure similarity between models that have multiple ortho-views, an additional step is needed. Its aim is to find the correspondences between the views of two models. If the view generation step is carried out without culling the occluded triangles, then there are only three different views, because the views generated from the positive and negative directions are the same. If the step of culling the occluded triangles is taken, then there are six different views, in which the projections along different directions are not the same because the internal structure is not taken into account. To determine the partner of one view, compare it with all the views of another model and select the most similar one as the corresponding view. In this way, the views from different models can be grouped into a series of pairs. By adding the similarities of these view pairs together, the similarity between models can be obtained. Experiments below show that the method with the step of culling the occluded triangles achieves a better retrieval performance.

Embodiments of two methods for computing the shape similarity between 2D drawings are disclosed. The first approach represents a drawing as a spherical function by transforming it from 2D space into 3D space and then employs a fast spherical harmonics transformation to get a rotation invariant descriptor. The second method represents the shape of a 2D drawing from the statistics perspective as a distance distribution between two random sampled points.
As a robust rotation invariant descriptor, the spherical harmonics representation can be successfully applied to 3D shape matching. It arises on the sphere in the same way that the Fourier exponential function arises on the circle. According to the theory of spherical harmonics, a function f(θ,φ) represented in spherical coordinates can be approximated by a sum of its spherical harmonics Y_l^m(θ,φ):

f(θ,φ) = Σ_{l=0}^{∞} Σ_{m=−l}^{l} a_{l,m} Y_l^m(θ,φ)    (9)

where { a_{l,m} } are the coefficients in the frequency domain, and Y_l^m(θ,φ) are the angular portion of the solution to Laplace's equation, defined as

Y_l^m(θ,φ) = sqrt( (2l+1)(l−m)! / (4π(l+m)!) ) P_l^m(cos θ) e^{imφ}    (10)

where P_l^m(x) is an associated Legendre polynomial. If f(θ,φ) is a spherical function with bandwidth B, then Equation (9) can be rewritten as

f(θ,φ) = Σ_{l=0}^{B−1} f_l(θ,φ),  with  f_l(θ,φ) = Σ_{m=−l}^{l} a_{l,m} Y_l^m(θ,φ)    (11)

where f_l(θ,φ) can be regarded as the component of f(θ,φ) with frequency l. In other words, Equation (11) is an energy representation of the spherical function f(θ,φ). f_l(θ,φ) has a valuable property: rotating a spherical function does not change its L2 norm, i.e., its energy as represented by Equation (12) is a rotation invariant.

|| f_l || = sqrt( Σ_{m=−l}^{l} | a_{l,m} |² )    (12)

Therefore, by applying the spherical harmonics transform to a spherical function representing a 3D shape, we get a set of rotation invariant descriptors for this shape. The similarity between two shapes whose spherical functions are f and g can be measured by Equation (13).
D(f, g) = Σ_{l} ( || f_l || − || g_l || )²    (13)

To make use of the valuable properties of the spherical harmonics, a strategy dubbed 2.5D spherical harmonics representation can extract a series of rotation invariants by transforming a 2D drawing from 2D space into 3D space uniquely. The name "2.5D" arises from the fact that a 2D drawing is represented in a 3D space. The transformation is explained by the following steps.
(1) Given a 2D drawing D (e.g., FIG. 14(a)), compute its axis-aligned bounding box B, as FIG. 14(b) shows, according to Equation (14).
B = { left = min(D(x)), top = min(D(y)), right = max(D(x)), bottom = max(D(y)) }    (14)

where {D(x), D(y)} represents the points composing the 2D drawing.
(2) Determine a sphere S that satisfies the following three conditions:
Its center c is in accordance with the center of bounding box B, i.e., Equation(15).
c = ( (B.left + B.right)/2, (B.top + B.bottom)/2 )    (15)
Its radius r is equal to half the longer diagonal length of bounding box B. The purpose is to ensure that sphere S can enclose 2D drawing D completely. As described later, the spherical radius is also used for normalization. The 2D drawing lies in the equator plane of sphere S. The obtained sphere is shown in FIG. 14(c). For the sake of simplicity, we can position this sphere in a coordinate system xyz: the sphere center locates at the origin and the equator plane lies in the xy plane.
(3) Generate a set of rays uniformly, starting from the sphere center c and lying in the plane xy where the 2D drawing lies, and compute the intersections between these rays and 2D drawing D. The resulting intersection point set {p_i} can be regarded as an approximating representation of 2D drawing D, as FIG. 14(d) shows. Since the intersection points are distributed along certain angles θ with respect to axis x, they can also be represented by θ_i and d_i, i.e., p_i = f(θ_i, d_i), where d_i is the distance between point p_i and the sphere center c. However, along a single θ_i, there might be multiple intersection points. To make use of the valuable property of the spherical harmonics transformation, all intersection points { p_i = f(θ_i, d_i) } are transformed into a spherical function form { p_i = f(θ_i, φ_i, d_i) } by introducing a new variable φ_i. To ensure that each intersection point p_i corresponds to a unique (θ_i, φ_i), a simple transformation can be used, as Equation (16) shows, to determine φ_i:

φ_i = arctan( d_i / r )    (16)
where r is the radius of sphere S. For a given drawing, r is determined uniquely, and for an intersection point p_i, d_i is also uniquely determined. Therefore the corresponding φ_i obtained by Equation (16) is unique, and a 2D drawing is uniquely transformed into a 3D spherical representation, i.e., the correspondence between a 2D drawing and its spherical function is one to one. This process is dubbed a 2.5D transformation, and FIGS. 14(e)~14(g) show the final 3D representation of the drawing in FIG. 14(a) from different perspectives. In fact, the proposed 2.5D representation transforms a 2D drawing by elevating and projecting it onto the surface of a cylinder. FIG. 15 shows an example of this transformation. From this example, notice that the geometric information is represented clearly in 3D space along the surface of a cylinder. To get the rotation invariants as Equation (12) shows, a fast spherical harmonics transformation method can be used in which a spherical function of bandwidth B is sampled on the 2B-many Chebyshev points rather than the B-many Gaussian points. These sampled points form a 2B×2B equiangular grid along the longitude and latitude of a sphere, i.e., the sampling nodes { (θ_i, φ_j) } on this equiangular grid are
θ_i = (i + 0.5)π / (2B),  φ_j = jπ / B,  i, j = 0, 1, 2, …, 2B−1    (17)
According to this sampling requirement, the ray casting process mentioned above should be conducted at a sampling rate of 2B along the longitude direction. After the proposed 2.5D transformation is finished, Equation (18) can be used to decide at which Chebyshev node (i, j) a sample (θ_t, φ_t) locates:

i = ⌊ 2Bθ_t / π ⌋,  j = ⌊ Bφ_t / π ⌋,  i, j = 0, 1, 2, …, 2B−1    (18)
To represent the shape at Chebyshev node (i, j), a simple way is to use the distance d_i. Therefore, a 2D drawing D is represented by a function defined at the Chebyshev nodes, i.e., on a 2B×2B equiangular grid along the longitude and latitude of a sphere:

D = { d_i = f(i, j) | i, j = 0, 1, 2, …, 2B−1 }    (19)
However, different drawings usually have different sizes. If two drawings with the same shape have different sizes, then their {d_i} will be different. Therefore, before the fast spherical harmonics transformation is conducted, a normalization step is needed. A way to normalize a 2D drawing is to normalize the longer or shorter edge of its bounding box to a predefined value (e.g., V). The normalization process is expressed as

scale = V / r,  D = { d_i × scale = f(i, j) | i, j = 0, 1, 2, …, 2B−1 }    (20)

where r is the radius of the sphere mentioned above. A fast spherical harmonics transformation can then be imposed upon the spherical representation of a 2D drawing with a bandwidth B, as Equation (20) shows. For each frequency, a rotation invariant descriptor is obtained according to Equation (12), and the similarity between 2D drawings is measured according to Equation (13). This proposed method avoids one-to-multiple correspondence and the instability caused by shape perturbation, and thus obtains a set of robust rotation invariant signatures for a 2D drawing.
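The lift of Equation (16) can be sketched compactly. This is an illustrative Python sketch assuming the ray casting has already produced (θ_i, d_i) intersection samples; math.atan2 is used for the arctangent.

```python
import math

def lift_to_sphere(intersections, r):
    """2.5D transformation: map each intersection (theta_i, d_i) to a unique
    spherical sample (theta_i, phi_i, d_i), with phi_i = arctan(d_i / r) as
    in Equation (16); r is the radius of the enclosing sphere."""
    return [(theta, math.atan2(d, r), d) for theta, d in intersections]
```

Because d_i ≤ r inside the sphere, φ_i stays within one quadrant, so two intersection points along the same ray always receive distinct spherical coordinates.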
It is known that a small value of B acts as a low-pass filter and may miss some details, while a larger value of B takes small details into account but needs more computational resources. To determine a good balance point, the inverse spherical harmonics transformation can be used to check the precision under different bandwidths. Simple computation shows that when B is equal to 64, the precision is almost 5×10⁻³, which is enough for the purposes of 2D drawing retrieval.
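Given precomputed spherical harmonic coefficients, the invariants of Equation (12) and the distance of Equation (13) reduce to a few lines. This is an illustrative Python sketch; the nested-list coefficient layout (one row of 2l+1 values per frequency l) is an assumption, and computing the coefficients themselves (e.g., with a fast spherical harmonics library) is outside the sketch.

```python
import math

def energy_descriptor(coeffs):
    """Rotation-invariant signature (Equation (12)): for each frequency l,
    the L2 norm of the coefficients { a_{l,m}, m = -l..l }.
    coeffs: list indexed by l of rows of 2l+1 (possibly complex) values."""
    return [math.sqrt(sum(abs(a) ** 2 for a in row)) for row in coeffs]

def sh_distance(f_coeffs, g_coeffs):
    """Similarity distance between two shapes (Equation (13))."""
    ef, eg = energy_descriptor(f_coeffs), energy_descriptor(g_coeffs)
    return sum((a - b) ** 2 for a, b in zip(ef, eg))
```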
In another method to measure the similarity between 3D shapes, a 3D shape has been represented as a signature named shape distribution that is formed by random points sampled uniformly from the shape surface. In this embodiment, a 2D shape distribution analog is derived. Experiments show this derivation is good at computing the similarity between 2D drawings, and it also allows users to emphasize local shapes by adjusting sampling strategy. This derived 2D shape distribution method will be described in detail as follows. A 2D drawing is usually composed of some basic geometric entities, such as lines, circles, and arcs. For later sampling purposes, a discretization process is adopted to transform all entities contained in a drawing into a set of line segments. In this way, a 2D drawing S can be represented as
S = { ((x_i, y_i), (x_{i+1}, y_{i+1})) | 0 ≤ i ≤ n−1 }    (21)

where n is the total number of line segments included in stroke S, and (x_i, y_i) and (x_{i+1}, y_{i+1}) are the two end points of a line segment. Particularly, a scanned drawing can be represented directly by a set of points, and the later sampling process can be replaced with an edge-enhancing procedure (e.g., a Canny edge detector).
To ensure that the sampling process is conducted efficiently and uniformly, a look-up table-based algorithm has been designed.
A first step is to compute the summed length of all line segments included in stroke S. As each line segment is added, the summed length is saved into a table T of size n, where n is the total number of line segments. Table T can be represented by a linear array as Equation (22) shows:

T = { t_i | t_i = Σ_{j=0}^{i} L((x_j, y_j), (x_{j+1}, y_{j+1})), 0 ≤ i ≤ n−1 }    (22)

where L is the Euclidean distance between two points.
A second step is to generate a random real number r between 0 and the total length t_{n−1}, and then use the well-known binary-search algorithm to find the position m where r locates in the table. This found position corresponds to line segment ((x_m, y_m), (x_{m+1}, y_{m+1})). A third step is to generate a random real number l between 0 and 1. According to Equation (23), we get a sample point (x_L, y_L) and save it into an array A:

(x_L, y_L) = ( x_m + l(x_{m+1} − x_m),  y_m + l(y_{m+1} − y_m) )    (23)
Repeating the above second and third steps 2×N times, we get N point pairs that are sampled in an unbiased manner.
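The three steps can be sketched with the standard library's bisect module. This is an illustrative Python sketch; the segment-list format and function names are assumptions, and random.uniform/random.random stand in for the text's custom generator.

```python
import bisect
import math
import random

def build_table(segments):
    """Cumulative-length table T (Equation (22))."""
    total, table = 0.0, []
    for (x0, y0), (x1, y1) in segments:
        total += math.hypot(x1 - x0, y1 - y0)
        table.append(total)
    return table

def sample(segments, table):
    """Steps 2-3: a uniform-by-length random point on the stroke."""
    r = random.uniform(0.0, table[-1])
    m = bisect.bisect_left(table, r)          # binary search for the segment
    (x0, y0), (x1, y1) = segments[m]
    l = random.random()                        # Equation (23)
    return (x0 + l * (x1 - x0), y0 + l * (y1 - y0))
```

Because segments are picked in proportion to their length, long segments contribute proportionally more samples, which is what makes the sampling unbiased.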
In the sampling procedure, two problems have to be considered: sampling density and sampling method. From the perspective of statistics, more samples approximate the original shape more precisely but also need more computing resources (e.g., memory and time). Thus, there is a tradeoff between efficiency and precision. FIG. 16 shows experimental results considering this tradeoff. The horizontal axis represents the sampling density, while the vertical axis represents the time cost of the sampling procedure or the differences under different sampling densities. It can be concluded from the curves that for a 2D drawing, 10^5 sampling point pairs are enough to achieve a good balance between precision and efficiency.
Once enough random point pairs are sampled, the next step is to build the corresponding distance histogram which is described by a shape function. In our 2D drawing retrieval system, we adopt D2, i.e., the Euclidean distance between two points, as the shape function. Since 2D drawings usually have different geometric sizes, a normalization process is needed to account for this difference. A distance histogram can be built and normalized in this way.
First, define a division number h for the distance histogram, i.e., a distance histogram H consists of h parts with uniform distance.
Second, determine a standard value N used for normalization. Generally, there are two simple ways to find such a value, as Equation (24) shows. The first uses the maximum distance among all sampled point pairs as the standard value; the second uses the average distance of all sampled point pairs:

N = max{ D2_i }  or  N = avg{ D2_i }    (24)

where D2_i denotes the distance of the ith sampled point pair.
Third, calculate the distances of all sampled point pairs and count how many pairs fall into each part of distance histogram H(i), where 0 ≤ i ≤ h. By traversing each point pair ((x_i, y_i), (x_{i+1}, y_{i+1})) in array A, the resulting histogram H can be built by Equation (25):

H(k) += 1,  k = ⌊ h × D2((x_i, y_i), (x_{i+1}, y_{i+1})) / N ⌋,  0 ≤ k ≤ h−1    (25)

Some shape histogram examples of 2D drawings are shown in FIG. 17 and FIG. 32.
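The histogram construction of Equations (24)–(25) can be sketched as below. This is an illustrative Python sketch: it uses the average-distance standard value, and the factor of 2 in the bin mapping (so that distances up to twice the average fall inside the histogram) is an assumption, as is returning probabilities rather than raw counts.

```python
import math

def d2_histogram(point_pairs, h=32):
    """Distance histogram H with h uniform bins (Equation (25)), normalized
    by the average pair distance (the second option in Equation (24))."""
    dists = [math.hypot(px - qx, py - qy) for (px, py), (qx, qy) in point_pairs]
    n_std = sum(dists) / len(dists)           # average-distance standard value
    hist = [0] * h
    for d in dists:
        # map d / N into a bin; clamp distances beyond the last bin
        hist[min(int(h * d / (2.0 * n_std)), h - 1)] += 1
    return [c / len(dists) for c in hist]     # probabilities, ready for Eq. (26)
```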
The shape histogram generated by a uniform sampling strategy reflects the global geometric properties of a 2D drawing. In practice, users frequently would like to emphasize local shapes for retrieval purposes. To support such retrieval intent, a biased sampling strategy has also been implemented: users are allowed to specify a higher sampling rate on a desired local shape to emphasize it. For example, two similar drawings and their shape histograms are shown in FIG. 17(a)~17(d) respectively. For the drawing in FIG. 17(a), if users want to emphasize the local shape composed of the rectangle and the big middle circle, they can super-sample it interactively. When the super-sampling rate of the local shape composed of the rectangle and the big middle circle changes from 200% to 500%, the corresponding histogram becomes more similar to the histogram of the shape shown in (b).
In our prototype system, the Minkowski distance L_n is used because of its simplicity. Therefore, for two histograms H_1 and H_2, the similarity Ψ is

Ψ(H_1, H_2) = L_n(H_1, H_2) = ( Σ_{i=0}^{h−1} ( H_1(i) − H_2(i) )^n )^{1/n}    (26)

where h is the dividing number of a histogram. For 2D drawings, Equation (26) can be used to compute their similarity directly. Since a 3D model is represented by three orthogonal drawing-like views, a procedure is needed to find the correspondence between the drawing-like views from different models. To simplify this process, the minimum values over all possible view-pairs can be used to represent their similarity:
Ψ(M_1, M_2) = Σ_{k=0}^{2} MIN{ Ψ(m_{1i}, m_{2j}) | 0 ≤ i ≤ 2, 0 ≤ j ≤ 2 }    (27)

where m_{1i} represents view i of model M_1, m_{2j} represents view j of model M_2, and (m_{1i}, m_{2j}) represents a view-pair between the 3D models. It is worth pointing out that a view-pair cannot be counted twice.
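Equation (27) can be sketched as a greedy pairing. This is an illustrative Python sketch: views are opaque objects compared by a caller-supplied distance function, and each view of the second model is used at most once so that no view-pair is counted twice.

```python
def model_similarity(views1, views2, view_dist):
    """Equation (27): sum, over the views of model 1, the minimum distance
    to an as-yet-unused view of model 2."""
    used, total = set(), 0.0
    for v1 in views1:
        j, d = min(((j, view_dist(v1, v2)) for j, v2 in enumerate(views2)
                    if j not in used), key=lambda t: t[1])
        used.add(j)          # this view of model 2 may not be paired again
        total += d
    return total
```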
Given the approaches described, it is possible to combine them to achieve better performance. To make sure that two different approaches can be applied to the whole 2D drawing space, a weight value to each method can be used along with using their combined confidence to measure their similarity. Given a 2D drawing, its similarity confidence T using the two approaches described can be represented as
T = wsCs +wdCd (28)
where Cs is the similarity obtained by the 2.5D spherical harmonics method, Cd is the similarity obtained by the 2D distance histogram method, and ws and wd are the weight values of the respective methods. A higher weight value means that the corresponding method plays a more important role in differentiating a 2D drawing. In FIG. 18, "Combination" represents the combination of our two proposed methods using equal weights, i.e., (0.5, 0.5). From this precision-recall curve, it can be seen that the combined approach has the best performance.
To determine the best combination of weights for the two proposed methods, a test was performed. Since there is only a single independent weight (the two weights sum to one), the weight of the 2D distance histogram method was varied from 0 to 1 in increments of 0.1. From the experiments it was observed that increasing the weight of the 2D distance histogram method (2DSH) improved the average performance for the entire database up to a point, yielding the best performance at weights (0.3, 0.7), i.e., the weight of the 2D distance histogram method is 0.3, while the weight of the 2.5D spherical harmonics method (2.5DSHT) is 0.7. FIG. 19 illustrates this trend curve for different weight combinations. The horizontal axis represents the weight changes of the 2D distance histogram method. The default weights in our system can be set to (0.3, 0.7) while allowing the user to change the weights for different queries.
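The weighted combination of Equation (28), with the experimentally determined default weights (0.3, 0.7), can be sketched as a small ranking helper; the function name and score dictionary are illustrative, not part of the disclosed system.

```python
def rank_candidates(scores, w_d=0.3, w_s=0.7):
    """Rank candidates by combined confidence T = w_d*C_d + w_s*C_s (Eq. 28).

    'scores' maps a candidate name to (C_d, C_s): the 2D distance histogram
    similarity and the 2.5D spherical harmonics similarity. Defaults are
    the experimentally determined weights (0.3, 0.7).
    """
    combined = {name: w_d * cd + w_s * cs for name, (cd, cs) in scores.items()}
    return sorted(combined, key=combined.get, reverse=True)
```

A user-adjusted query would simply pass different weights, e.g. `rank_candidates(scores, 0.5, 0.5)` for the equal-weight "Combination" curve of FIG. 18.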
The above-mentioned 2D descriptors can be used on any 2D drawing, projection, sketch, etc., to create a searchable database of 2D descriptors associated with drawings or models.
USER INTERFACE
Many applications for this disclosure will be recognized. One example is searching using CAD sketches and 3D models. A CAD sketch is quickly mapped to the data structure presented above and can be utilized as the input to search for 3D models. When a user opens a model in a CAD system, the search system can quickly generate 2D views using the underlying technology, or even using the drafting functionality of the CAD system. As long as the model is represented as 2D views, it can be used for searching in the same way as a sketch created in our free-form sketch interface. FIG. 20 and FIG. 21 illustrate the integration of our search system with Unigraphics. FIG. 22 illustrates the pipeline of taking 3D models as input, converting them into 2D views, and then using 2D algorithms to search for similar 3D shapes. This is an approach from 3D to 2D and back to 3D.
Another example is converting 2D sketches and/or 2D drawings and using 3D shape search technology. The underlying technology can take 2D sketches, images, and/or drawings and convert them to 3D models. The 3D models can then be compared and searched, for example by using the disclosure above. FIG. 23 illustrates the pipeline that takes 2D input, converts it into 3D models, and uses a 3D comparison and search algorithm. It is noted that the search system presented herein is very easily integrated with any CAD system. A user can use either a sketch or a 3D model created by the CAD system to query a database. In this case, the 3D models are quickly converted into 2D views using the methods and techniques described above. One application for this proposed method is a sketch-based user interface, in which the query process is similar to how engineers represent 3D shapes on a piece of paper. FIG. 24 shows the user interface of our prototype system, in which users can express their concepts freely. Emphasis on certain views can be realized by adjusting the weights.
The sketch-based user interface allows users to express their intent freely. People tend to sketch objects with fragmented boundary contours and a few other lines that are not very geometrically accurate. The most frequently chosen views are not characteristic views, but instead ones that are simpler to draw (front, side, and top views). Users can also specify weights to emphasize certain views of the shape. In this way, the similarity expressed in Equation (8) can be modified as
Similarity = Σ_{i=0}^{2} w_i (s_i − k_i)^2,   with Σ_{i=0}^{2} w_i = 1   (29)

where w_i is the weight of view i; the other parameters are the same as in Equation (5). If one view has a higher weight, then the shape it describes will play a more important role in determining the degree of similarity between two models.
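Assuming the weighted squared-difference form suggested by the (garbled) Equation (29), the view-weighted similarity might be sketched as follows; the function name and argument conventions are illustrative.

```python
def weighted_view_similarity(s, k, w):
    """Weighted view similarity in the spirit of Eq. (29):
    sum_i w_i * (s_i - k_i)^2, with the weights summing to one.

    's' and 'k' are the per-view scores of the two models being compared,
    and 'w' holds the user-assigned view weights. Raising w_i makes view i
    dominate the comparison.
    """
    assert abs(sum(w) - 1.0) < 1e-9, "view weights must sum to 1"
    return sum(wi * (si - ki) ** 2 for wi, si, ki in zip(w, s, k))
```

Setting one weight to 1.0 and the rest to 0.0 reduces the measure to a single emphasized view, which is the limiting case of the view-emphasis interaction described above.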
The retrieval process is robust for scratchy hand-drawn input. The similarity measurement is based on a statistical method, in which a large number of points are sampled on the edges of input images. This process is insensitive to noise. Therefore, irregular or scratchy input will not greatly influence the retrieval performance.
One can also search with a rough sketch and refine with a drawing. Usually, it is not feasible for users to draw the desired shape in detail; users would rather hand-draw just a few sketches to retrieve the desired models. To support this, we provide a feedback mechanism for users. There are several ways that users can provide their feedback.
FIG. 25 shows the framework of our implemented sketch user interface and its visual appearance: (a) is a 3D model retrieval example using freehand sketches; and (b) is a 2D drawing retrieval example using freehand sketches. In fact, users are also allowed to retrieve 2D drawings or 3D models using a 2D drawing or a 3D model.
One system provides several feedback mechanisms for users to interactively refine retrieval results. One method is to provide weight value adjustment for the orthogonal views. A 3D model may be described by three orthogonal views. Different views reflect certain shape characteristics from different perspectives. To find similar shapes with certain features, users are allowed to emphasize certain views by adjusting their weights as indicated above. This interaction is illustrated by FIG. 26. In this process, after some coarse results are retrieved, users can refine the results by assigning a higher weight to the views that play a more important role in the desired shape. The larger the weight value, the more important the view. Users can repeat this process to refine the search results.
Another method provides for editing the initial sketch input. In some cases, just adjusting weight values is not enough, because the initial sketch is not sufficient to represent the desired models, and sometimes users make mistakes. Enabling a user to edit the initial sketch input is an efficient way to refine the search results, especially when the search results can be displayed dynamically as the sketch is modified. This interaction is illustrated in FIG. 27. Users can modify the input sketches according to the retrieved results.
Yet another method provides for interacting with the retrieved results. In the two interactions described previously, retrieved results are used to help users check and modify improper input. A third approach can also be used: editing the views of the retrieved 3D models or 2D drawings to improve and refine the retrieval results. As described above, the 2D views of a 3D model can be generated automatically. Therefore, users can input a simple coarse sketch first, find similar 3D models, and use the generated views as another round of sketch input. Based on the generated views, users can edit them according to their shape concepts. Because the generated views are more regular than hand-drawn ones, this kind of feedback interaction is expected to achieve better performance. The feedback process is illustrated by FIG. 28. In contrast with the previous feedback mechanism, this one allows users to edit the views of the retrieved models.
In yet another method, users can provide an initial input and examine the results of the search. A user then can either refine the results given using the techniques described for example, or can provide yet additional input to change the nature of the search. For example a user could jump to another shape after examining the initial search results or one could input a new or changed parameter to obtain a subset of the set of results more closely meeting the user's needs and/or arranged in a hierarchy according to the user's needs. As used herein, the term "parameter" can include, among other data or information, a 2D or 3D representation of all or part of an object. A user could also provide another sketch input for the system to search or cross-reference among prior search results. A user could also provide additional details to the initial sketch, or erase existing features, and conduct the search anew or obtain a subset of the set of results more closely meeting the user's needs and/or arranged in a hierarchy according to the user's needs. For example, the user could add an aperture or erase a line and conduct the search again. Other methods of changing the direction and character of the search are contemplated. With the above feedback mechanisms, users can retrieve 2D drawings or 3D models in a coarse-to-fine way. Generally, it is not easy for users to find proper weight values or modify the sketches just within one time. Users may have to repeat this process several times. To accelerate this process, a dynamic retrieval mechanism is provided, as FIG. 29 shows. A user's behaviors are monitored by the system all the time. During the feedback-based interactions, once an operation is finished, the retrieval results will be dynamically refreshed so that a user can experience the impact of his operation upon the retrieval results. In this way, users can get instant feedback and adjust their operations efficiently. 
The shape can be represented at a global and a local level. The global representation is extracted based on the overall information, without considering the features of the sketch or view. At the local level, features are extracted using feature recognition methods and representations are generated based on the relationships among different features. The following figures show the shape distribution of the overall view and those of features identified from the view.
In order to improve search efficiency, the local features and models are indexed. The index is built on top of a commercial database, such as Oracle, SQL Server, or Sybase. The search algorithm uses the overall representation first and then uses local features to improve the search results. Different feature vectors are extracted and stored in the database. Therefore, after obtaining search results with one feature vector, a user can refine them with another feature vector. In other words, using multi-step search methods improves the effectiveness of the search. In the area of searching, it is also possible to build a searchable database of
3D models and 2D drawings that includes not only the shape descriptors described, but also other characteristics of the models or drawings, such as size, shape, weight, or any other types of performance characteristics like horsepower. In operation, when a user provides input to the system as described above and the system returns results of the search, the system may also return the other characteristics of the models or drawings. It is possible to further refine the results by combining the techniques described above with further refinements of the associated characteristics such as size, weight, etc. In one embodiment, a library is built to contain an arbitrary number of 3D models or 2D drawings. The library can contain a mixture of models and drawings as well as other information. For example, the shape descriptor operators mentioned above can be used to generate a descriptor to uniquely identify each of the models or drawings; these descriptors can then be associated with the particular models or drawings that the descriptors were derived from. A user provides input to the system which can be in the form of a model, drawing or sketch input at which point the system determines a unique descriptor for the input as described above. As will be understood, the term "unique descriptor" or "descriptor" can refer to a single value, or a set of values, that are associated with a given drawing, model, or input. After determining the unique descriptor of the user input, the system can then search the library for a model or drawing descriptor that approximates the input using the methods described above. The system then returns a candidate model or drawing, or a set of candidate models or drawings, that are similar to the descriptor of the user input. The system can further provide methods described above that provide interactivity to assist the user in further refining the search results.
It is also possible to implement the above methods wholly in software, or in a combination of software and hardware.

EXPERIMENTS AND DISCUSSION
The two methods introduced in the preceding sections have been incorporated into a 2D & 3D shape retrieval system called ShapeLab. In order to test the performance of the two methods, we have built a benchmark which includes 2,000 2D drawings from industrial fields. These drawings are classified into 50 clusters, from simple to complex, according to their functions and geometric shapes. In the following, we introduce our implemented retrieval system, ShapeLab, and present some experimental evaluation results. A comparison between the two proposed methods is also given.
As the experiments above demonstrate, the two methods proposed herein are robust enough to compute the similarity between sketches and are insensitive to scratchy input. Therefore, a sketch-based user interface supporting 2D drawing retrieval has been implemented. The retrieval process is similar to the process in which engineers express their shape concepts on a piece of paper. FIG. 30 shows the framework of our implemented sketch user interface and its visual appearance: (a) is a retrieval example based on a drawing query; and (b) is a retrieval example based on a freehand sketch query. In this system, a feedback mechanism is implemented to support a coarse-to-fine retrieval process. Once some drawings are retrieved, users can begin a new round of refinement by selecting a retrieved drawing and modifying it. Since the retrieved drawings are more regular and precise than hand-drawn sketches, this kind of feedback interaction can help users find the desired drawings interactively and efficiently. FIG. 31 shows two retrieval examples using the two proposed methods respectively. The robustness of the proposed methods is tested by analyzing the similarities among similar drawings with certain differences. FIG. 32 shows several similar drawings and their descriptor histograms; these drawings are listed from top to bottom according to their similarity. In FIG. 32, the fourth column shows the 2D shape histograms of the corresponding drawings in the first column. For a scanned drawing, an edge enhancement procedure is conducted to detect the edges included in the drawing image. During the scanning process, noise is unavoidably introduced. From the histograms in FIG. 32, we can notice several phenomena: (1) as the first four drawings show, for similar drawings, the descriptor histograms are similar, i.e., small shape perturbations do not lead to great differences between similar drawings; (2) as the six drawings show, when the difference between drawings increases, the difference between their histograms also increases; and (3) as the histograms of the scanned drawings show, the information missing due to digitization or noise has no readily apparent impact on the final descriptor histograms. From these examples, it can be concluded that the proposed methods are robust against noise and small changes in local shapes. In addition, because they can be applied both to vector drawings and scanned drawings, they also have good generality.
The precision-recall curve is the most common way to evaluate the performance of a retrieval system. Recall measures the ability of a system to retrieve the desired objects, while precision measures the ability of a system to weed out what users do not want. Definitions of precision and recall are illustrated in FIG. 33. To compare the performance between the methods presented herein and other methods, the methods proposed by Chen et al. and Funkhouser et al. were implemented, even though those two methods were originally used to compute the similarity of the silhouettes between two 3D shapes. In addition, to demonstrate the difference between contour-based shape matching and drawing-based shape matching for 2D drawing retrieval, the methods presented herein are also used to extract descriptors of only the contour information of a 2D drawing for retrieval purposes.
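The precision and recall definitions used for these curves can be sketched as a small helper (illustrative only, not part of the ShapeLab code):

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved items that are relevant;
    Recall = fraction of relevant items that were retrieved."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

Sweeping the number of retrieved items from 1 up to the database size and recording (recall, precision) at each cutoff yields the precision-recall curves of FIGS. 34 and 35.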
FIG. 34 shows the precision-recall curves of these methods, in which "2DSH" represents the 2D shape histogram method, "2.5DSHT" represents the 2.5D spherical harmonics transformation method, "2DSH-Contour" represents the performance of "2DSH" when only the contour of a 2D drawing is considered, "2.5DSHT-Contour" represents the performance of "2.5DSHT" when only the contour of a 2D drawing is considered, "LF-Contour" represents the performance of the light-field method proposed by Chen et al. when it is used to retrieve a 2D drawing, and "2DSHT-Contour" represents the performance of the 2D analog of the spherical harmonics method proposed by Funkhouser et al. when it is used to retrieve a 2D drawing.
From this precision-recall curve, it is readily apparent that the four contour-based retrieval methods have the lowest performance. Therefore, it is safe to conclude that the contour alone is not a good way to describe the shape of a 2D drawing. The two proposed methods have almost the same performance on the whole and achieve the best performance among these methods. In a strict sense, the
2.5D spherical harmonics transformation method is better than the 2D shape histogram method. In practice, the 2.5D spherical harmonics transformation method has been found to be good at differentiating drawings with a readily apparent structural shape, such as the retrieval example shown in FIG. 31(b), while the 2D shape histogram method is good at differentiating 2D drawings with similar contours but different inside structure, such as the retrieval example shown in FIG. 31(a). Therefore, in practice, the two methods are provided together so that higher retrieval accuracy is achieved by allowing users to switch between the two methods interactively.
For 3D model retrieval, to compare the performance between our proposed methods and other methods, six other available 3D shape descriptors are implemented: (1) Moment Invariants (MI) [53]; (2) Spherical Harmonics (SH) [5]; (3) Crinkliness and Compactness (CC) [54]; (4) 3D Shape Distribution (3DS) [12]; (5) Convex Hull Histogram (CHH) [55]; and (6) Solid Angle Histogram (SAH) [56]. The final PR curves are shown in FIG. 35.
From the precision-recall curves, it has been found that the 2.5DSHT has the best discrimination ability, while the 2DSH is the second best method. Similar to the 2D drawing retrieval, the 2.5D spherical harmonics transformation method has been found to be good at differentiating models with a readily apparent structural shape, such as table- or chair-like models, while the 2D shape histogram method is good at differentiating models with similar contours but different inside structure. FIG. 36 shows some retrieval examples. The two proposed methods are both rotation-invariant descriptors and provide a compact representation of a 2D drawing. With the two methods, the shape matching problem is reduced to several simple steps, such as sampling, normalization, and distance computation between descriptors, and there is no need to determine a common pose or find feature correspondences between different drawings. Generally, the 2.5D spherical harmonics method needs fewer dimensions (i.e., fewer signatures) than the 2D shape histogram method does. For example, in our retrieval system, the bandwidth is 64 in the 2.5D spherical harmonics method, i.e., the descriptor of a drawing contains 64 signatures. In contrast, the 2D shape histogram contains more than 200 signatures. However, the 2D shape histogram method allows users to emphasize certain local shapes by specifying a high sampling rate upon those shapes, while it is difficult for the 2.5D spherical harmonics method to do this. Other advantages of the two proposed methods are their simplicity and speed. In experiments, the general retrieval time is less than 0.1 seconds, and the indexing process for 1,000 drawings takes less than ten minutes, since the I/O access is time-consuming. The computing was done on a PC with a 2.4 GHz CPU and 512 MB RAM.
Thus, as disclosed above a user can enter two-dimensional input, such as a sketch or drawing, and can search a library of two-dimensional drawings and/or three-dimensional models in order to retrieve the drawing(s) or model(s) that most closely approximate the input. Such input and the library information are compared in two-dimensional space, based on descriptors for two-dimensional drawing or projections. The user can weight or emphasize one projection or feature in searching or refining a search. The user can alter or edit a drawing or model found in searching so as to refine a search, and can choose features of search results so as to narrow the results further.
Referring back generally to FIG. 2(B), one embodiment of such a system is schematically indicated. At box 100, model or drawing information is entered by the user. Box 110 refers to the calculation or computation of an appropriate bounding box for the inputted information, which may include a determination of an appropriate pose as discussed above. At box 120, views of the inputted model or drawing are generated using the bounding box and/or pose, which views may be full views, contour views and/or silhouette views. Box 130 refers to similarity measurement, which may include determination of appropriate descriptors for the inputted information as discussed above and comparison of such descriptors to descriptors for models and/or drawings included in the accessed library. Once such a measurement or comparison is made, appropriate models or drawings are outputted from the library to the user's screen, to disk, to hard copy, or otherwise as the user may desire. The steps may be implemented by hardware and/or software appropriately configured to perform them.
Referring generally to FIG. 3, another embodiment of a system using the methods disclosed herein is schematically shown. Initially, a library 200 having information of three-dimensional models and a library 210 having information of two-dimensional drawings may be provided. These libraries may be provided in one or more memory units or data storage apparatuses. Models from library 200 are analyzed and two-dimensional drawings are generated from them, as depicted at block 220. Generation of the drawings of a particular model may include determination of an appropriate pose of the model, e.g. in an attitude commonly viewed by or drawn by engineers, as depicted at block 222, and generating views of the posed model, e.g. along three orthogonal axes determined by the pose, as depicted at block 224. When drawings are generated of all desired models, the drawings can be collected in a library 230. In a particular embodiment, library 230 can be a part of or associated with library 210 of other two-dimensional drawings, as suggested in FIG. 3.
From the user-input side, an input box is depicted at block 240. Input can include sketches (block 242), two-dimensional drawings (block 244) and/or three-dimensional models (block 246). Sketches or other drawings can be given particular parameters, as indicated in block 250, prior to, at or after input into the system. The inputted information can be edited by the user, if desired. Three-dimensional model input can be posed (if necessary) and two-dimensional views are generated. Block 260 depicts a method of creating descriptors for two-dimensional drawings, such as those methods discussed above. Descriptors can be created for drawings and models from libraries 200 and 210, as well as for two-dimensional drawing(s) inputted by the user. The descriptor(s) of inputted drawing(s) are compared to those of library drawings or views of library models, as depicted in block 270, and the most similar are shown or outputted to the user, as depicted at block 280. The degree of similarity required for output can be determined in advance by the user, as for example by a request to output the forty most similar models or drawings, or can be set in or by the system. The retrieved results can be displayed as the drawings of library 210 or of the pseudo-two-dimensional drawing library 230, as depicted at block 282, or can be displayed as three-dimensional models as depicted at block 284. If the user desires, the set of output results can be further queried for particular features, or by searching further with a weighted or emphasized view as discussed above, or otherwise further analyzed.

Claims

What is claimed is:
1. A method of posing a computer-based representation of a three-dimensional shape, said representation being formed of multiple polygons, comprising: calculating an area value of polygons having similar normals and signed distances; associating a normal with the area value; and selecting the normal with the maximum area value.
2. The method of claim 1, wherein said area value of the polygons is a continuous area.
3. The method of claim 1, wherein said area value is a virtual contact area.
4. The method of claim 1, further comprising orienting said representation so that one of said axis vectors is a vertical axis.
5. The method of claim 1, further comprising: creating a two-dimensional projection of said representation along one of said axis vectors.
6. The method of claim 5, wherein said projection is one of a full view, a contour view and a silhouette view of said representation.
7. The method of claim 1, further comprising: creating a first two-dimensional projection of said representation along a first of said axis vectors, creating a second two-dimensional projection of said representation along a second of said axis vectors, and creating a third two-dimensional projection of said representation along a third of said axis vectors.
8. A method comprising: projecting a representation of a three-dimensional object into three two-dimensional drawings, said drawings including silhouette and contour views.
9. The method of claim 8, further comprising editing one or more of the two-dimensional drawings.
10. The method of claim 8, wherein said representation is used in a computer system for searching for representations similar to said representation.
11. A method comprising: providing at least one two-dimensional drawing of an object to a computer system; creating one or more descriptors for said at least one drawing, said descriptor being adapted for use in a search of two-dimensional drawings.
12. The method of claim 11, wherein said creating step includes expanding the at least one drawing to three dimensions to create a new 2.5D shape and representing the 2.5D shape as a spherical function and employing a spherical harmonic transformation, thereby obtaining a rotation invariant descriptor.
13. The method of claim 12, wherein said drawing is oriented with respect to two axes in a plane, and wherein said expanding a drawing to three dimensions includes: computing a bounding box aligned with said axes, said bounding box having a center and two diagonals, said diagonals having lengths that are either equal or unequal; determining a sphere having as its center the center of said bounding box, said sphere further having a radius of half the longer of said diagonal lengths of said bounding box and having an equator plane that is coplanar with the plane of said drawing; generating a set of rays in said equator plane that originate from said center of said sphere and calculating the points of intersection between said rays and said drawing, said points being identified by coordinates, said rays having a unique angle; calculating a spherical angle unique to each intersection point distributed along a ray, said spherical angle being the arctan of the distance to the intersection point from said center divided by said radius; and lifting each intersection point out of said plane of said drawing by the spherical angle unique to said intersection point.
14. The method of claim 13, wherein said bounding box has a first edge parallel to one of said axes and a second edge parallel to a second of said axes, the method further comprising: normalizing one of said edges by a predetermined value.
15. The method of claim 12, wherein said transformation is a fast spherical transformation, wherein a spherical function of bandwidth B is sampled on Chebyshev points.
16. The method of claim 11, wherein said creating step includes building a distance histogram for said drawing.
17. The method of claim 16, wherein said building step includes transforming said drawing into a set of line segments; sampling points from said segments to derive a shape distribution.
18. The method of claim 17, wherein said sampling step includes summing the length of said segments to obtain a total length value; generating a first random number between zero and said total length value; finding the segment along whose length said first random number falls, said segment having a first endpoint with a first and second coordinate and a second endpoint with a first and second coordinate; generating a second random number between zero and one, said second random number being a real number; creating a sample point having first and second coordinates, said first coordinate of said sample point determined by multiplying the difference between the first coordinates of said endpoints by said second random number and adding the first coordinate of said first endpoint, and said second coordinate of said sample point determined by multiplying the difference between the second coordinates of said endpoints by said second random number and adding the second coordinate of said first endpoint.
19. The method of claim 18, wherein said steps of generating a first random number, finding the segment, generating a second random number, and creating a sample point are repeated at least once, thereby producing a plurality of sample points.
20. The method of claim 17, further comprising normalizing said drawing.
21. The method of claim 17, further comprising: defining a division number for the distance histogram, said division number identifying the number of parts of the distance histogram; determining a standard value for normalization; and calculating the distances between sample point pairs and counting the number of pairs in each part of the distance histogram.
22. The method of claim 21, wherein said determining step includes calculating the maximum distance among all pairs of sampled points, and using said maximum distance as said standard value.
23. The method of claim 21, wherein said determining step includes calculating the average distance between all pairs of sampled points, and using said average distance as said standard value.
24. The method of claim 17, wherein said sampling incorporates a bias that emphasizes at least one particular part of said drawing.
25. A method comprising: inputting a two-dimensional drawing into a computer system having access to a library of representations of objects; creating one or more descriptors for said two-dimensional drawing; searching said library by comparing said one or more descriptors to data of said representations; and outputting a set of representations of objects that are determined to be sufficiently similar to said two-dimensional drawing.
26. The method of claim 25, wherein said representations are two-dimensional representations of objects.
27. The method of claim 25, wherein said representations are three-dimensional representations of objects.
28. The method of claim 27, wherein said data includes one or more descriptors of one or more two-dimensional views of said representations.
29. The method of claim 28, wherein said creating step includes one of using a spherical transform to create said descriptors and using a histogram to create said descriptors.
30. The method of claim 28, wherein said descriptors of said views of said representations are predetermined and stored in said library.
31. The method of claim 28, wherein said descriptors of said views of said representations are calculated after said inputting step.
32. The method of claim 25, further comprising the step of editing said drawing after said inputting step.
33. The method of claim 25, further comprising: entering at least one parameter after said outputting step; searching said set of representations for representations having said parameter; and outputting a subset of said set of representations.
34. The method of claim 33, wherein said parameter is a higher sensitivity of comparison between said one or more descriptors of said drawing and said one or more descriptors of said representations.
35. The method of claim 33, wherein said parameter is a desired feature of said representations.
36. A method comprising: inputting a two-dimensional drawing into a computer system having access to a set of two-dimensional representations of objects; measuring the similarity between said drawing and said representations, said measuring including randomly sampling view edges of said drawing and said representations, generating shape distributions of said drawing and said representations, normalizing said shape distributions, and ascertaining degrees of correspondence between said shape distributions.
37. The method of claim 36 wherein at least some of said two-dimensional representations are projected from three-dimensional representations of said objects.
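Claim 36 ends by "ascertaining degrees of correspondence" between normalized shape distributions but does not fix the measure. One common choice, shown here purely as an assumption, is to normalize each histogram to a probability distribution and score their overlap via the L1 distance:

```python
def distribution_similarity(hist_a, hist_b):
    """Degree of correspondence between two shape distributions.

    Each histogram is normalized to sum to 1, then compared with an
    L1 distance rescaled so identical distributions score 1.0 and
    disjoint ones 0.0. The specific measure is an illustrative
    assumption; the claim leaves it unspecified.
    """
    pa = [v / sum(hist_a) for v in hist_a]
    pb = [v / sum(hist_b) for v in hist_b]
    l1 = sum(abs(a - b) for a, b in zip(pa, pb))
    return 1.0 - l1 / 2.0
```

Normalizing before comparison (as claim 36 requires) means two drawings sampled with different point counts, or drawn at different scales, can still be matched by the shape of their distributions alone.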
38. A method comprising: receiving a set of results from a search conducted using at least one input parameter; inputting a new parameter; searching said set of results with respect to said new parameter; and obtaining a subset of said results.
39. A method comprising: receiving a set of results from a search conducted using at least one input parameter; altering said input parameter; searching a set of results with respect to said altered parameter; and obtaining a subset of said results.
40. The method of claim 39 wherein said altering step includes erasing features from said input parameter.
41. The method of claim 39 wherein said altering step includes adding features to said input parameter.
PCT/US2005/042925 2004-11-29 2005-11-29 Methods for retrieving shapes and drawings WO2006058292A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63146804P 2004-11-29 2004-11-29
US60/631,468 2004-11-29
US66400505P 2005-03-21 2005-03-21
US60/664,005 2005-03-21

Publications (4)

Publication Number Publication Date
WO2006058292A2 WO2006058292A2 (en) 2006-06-01
WO2006058292A9 true WO2006058292A9 (en) 2006-08-03
WO2006058292A3 WO2006058292A3 (en) 2008-04-17
WO2006058292A8 WO2006058292A8 (en) 2008-08-21

Family

ID=36498603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/042925 WO2006058292A2 (en) 2004-11-29 2005-11-29 Methods for retrieving shapes and drawings

Country Status (2)

Country Link
US (2) US7583272B2 (en)
WO (1) WO2006058292A2 (en)

Families Citing this family (83)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7586490B2 (en) * 2004-10-20 2009-09-08 Siemens Aktiengesellschaft Systems and methods for three-dimensional sketching
JP4838079B2 (en) * 2006-09-07 2011-12-14 株式会社リコー Part identification image creating apparatus, program, and computer-readable storage medium
US8122370B2 (en) * 2006-11-27 2012-02-21 Designin Corporation Visual bookmarks for home and landscape design
US8117558B2 (en) * 2006-11-27 2012-02-14 Designin Corporation Converting web content into two-dimensional CAD drawings and three-dimensional CAD models
US8253731B2 (en) 2006-11-27 2012-08-28 Designin Corporation Systems, methods, and computer program products for home and landscape design
US20080126023A1 (en) * 2006-11-27 2008-05-29 Ramsay Hoguet Searching and Matching Related objects, Drawings and Models For Home and Landscape Design
US9519997B1 (en) * 2007-03-09 2016-12-13 Pixar Perfect bounding for optimized evaluation of procedurally-generated scene data
US7936354B2 (en) * 2007-04-27 2011-05-03 Graphisoft R&D Zrt. Virtual trace-multiple view modeling system and method
US8217938B2 (en) * 2007-05-31 2012-07-10 Ford Global Technologies, Llc Method and apparatus for determining item orientation
CN101350016B (en) * 2007-07-20 2010-11-24 富士通株式会社 Device and method for searching three-dimensional model
US9030462B2 (en) * 2007-09-24 2015-05-12 Siemens Corporation Sketching three-dimensional(3D) physical simulations
US20090157649A1 (en) * 2007-12-17 2009-06-18 Panagiotis Papadakis Hybrid Method and System for Content-based 3D Model Search
US20090222446A1 (en) * 2008-03-03 2009-09-03 Emmanuel Goldschmidt Three-dimensional object identification through resonance frequencies analysis
DE102009032333A1 (en) * 2008-07-30 2010-02-18 Luk Lamellen Und Kupplungsbau Beteiligungs Kg Method for testing models
US8600149B2 (en) * 2008-08-25 2013-12-03 Telesecurity Sciences, Inc. Method and system for electronic inspection of baggage and cargo
JP5104743B2 (en) * 2008-12-16 2012-12-19 富士通株式会社 Image search program, image search method, and recording medium
US8260583B2 (en) * 2009-03-12 2012-09-04 Siemens Product Lifecycle Management Software Inc. System and method for identifying wall faces in an object model
US8686992B1 (en) * 2009-03-30 2014-04-01 Google Inc. Methods and systems for 3D shape matching and retrieval
EP2284770A1 (en) * 2009-07-06 2011-02-16 Stavros Perantonis Method and system for content-based search and retrieval of 3D models using panoramic views
US8175734B2 (en) * 2009-10-08 2012-05-08 3D M. T. P. Ltd. Methods and system for enabling printing three-dimensional object models
US8537158B2 (en) * 2009-12-02 2013-09-17 Microsoft Corporation Parallel triangle tessellation
US9449026B2 (en) * 2010-08-31 2016-09-20 Microsoft Technology Licensing, Llc Sketch-based image search
KR101494066B1 (en) 2010-10-05 2015-02-16 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 Generation of depth data based on spatial light pattern
US8836699B2 (en) * 2011-02-04 2014-09-16 Chiung Yu Chen Generation of landmark architecture and sculpture based on chinese characters
DE102011010975A1 (en) * 2011-02-10 2012-08-16 Martin Tank Method and analysis system for geometrical analysis of scan data of oral structures
US8811719B2 (en) 2011-04-29 2014-08-19 Microsoft Corporation Inferring spatial object descriptions from spatial gestures
WO2012149961A1 (en) * 2011-05-03 2012-11-08 Fujitsu Limited Computer - implemented method of simplifying a complex part in a geometric model
US8954431B2 (en) * 2011-06-09 2015-02-10 Xerox Corporation Smart collaborative brainstorming tool
US20130257885A1 (en) * 2012-03-28 2013-10-03 Intel Corporation Low Power Centroid Determination and Texture Footprint Optimization For Decoupled Sampling Based Rendering Pipelines
US20130275431A1 (en) 2012-04-12 2013-10-17 Nainesh Rathod Visual clustering method
US8941645B2 (en) * 2012-05-11 2015-01-27 Dassault Systemes Comparing virtual and real images in a shopping experience
US9953458B2 (en) * 2012-05-22 2018-04-24 Thompson Licensing Sa Method and apparatus for generating shape descriptor of a model
US9824601B2 (en) 2012-06-12 2017-11-21 Dassault Systemes Symbiotic helper
JP2014035656A (en) * 2012-08-09 2014-02-24 Sony Corp Image processing apparatus, image processing method, and program
MY181673A (en) * 2012-12-13 2020-12-31 Mimos Berhad An image processing system and a method for extracting spatial relationship between objects in an image
US9734625B2 (en) * 2013-01-28 2017-08-15 The Boeing Company Panoptic visualization of a three-dimensional representation of a complex system
WO2014121129A1 (en) * 2013-01-31 2014-08-07 Sandboxr, Llc Method and system for 3-d printing product customization
JP5585671B2 (en) * 2013-02-01 2014-09-10 Jfeスチール株式会社 Shape optimization analysis method and apparatus
WO2014130417A1 (en) * 2013-02-19 2014-08-28 Siemens Aktiengesellschaft Multi disciplinary engineering design using image recognition
US9208176B2 (en) 2013-03-12 2015-12-08 International Business Machines Corporation Gesture-based image shape filtering
GB2514320B (en) * 2013-03-15 2020-01-29 3D Industries Ltd A method and system for 3D model database retrieval
US9147125B2 (en) 2013-05-03 2015-09-29 Microsoft Technology Licensing, Llc Hand-drawn sketch recognition
US9355462B2 (en) 2013-05-08 2016-05-31 Caterpillar Inc. Motion estimation system utilizing point cloud registration
US10074121B2 (en) 2013-06-20 2018-09-11 Dassault Systemes Shopper helper
JP6342180B2 (en) * 2014-02-21 2018-06-13 公立大学法人首都大学東京 3D CAD model similarity search method
US11501042B2 (en) 2014-03-24 2022-11-15 Imagars Llc Decisions with big data
US9922359B2 (en) * 2014-04-08 2018-03-20 Imaginestics, Llc Three-dimensional image searching using active label technology
CN105701263A (en) * 2014-11-26 2016-06-22 英业达科技有限公司 Three-dimensional model part comparison system and method
CN104392050A (en) * 2014-11-27 2015-03-04 英业达科技有限公司 Three-dimensional model component comparison system and method thereof
US10115035B2 (en) * 2015-01-08 2018-10-30 Sungkyunkwan University Foundation For Corporation Collaboration Vision system and analytical method for planar surface segmentation
US10435279B2 (en) 2015-03-06 2019-10-08 Walmart Apollo, Llc Shopping space route guidance systems, devices and methods
US20180099846A1 (en) 2015-03-06 2018-04-12 Wal-Mart Stores, Inc. Method and apparatus for transporting a plurality of stacked motorized transport units
WO2016142794A1 (en) * 2015-03-06 2016-09-15 Wal-Mart Stores, Inc Item monitoring system and method
EP3098734A1 (en) * 2015-05-28 2016-11-30 Dassault Systèmes Querying a database with likeness criterion
US9721356B2 (en) 2015-08-18 2017-08-01 Honeywell International Inc. Methods and systems for programatically identifying shapes in graphical artifacts
CA2961938A1 (en) 2016-04-01 2017-10-01 Wal-Mart Stores, Inc. Systems and methods for moving pallets via unmanned motorized unit-guided forklifts
US11468111B2 (en) * 2016-06-01 2022-10-11 Microsoft Technology Licensing, Llc Online perspective search for 3D components
EP3264286B1 (en) 2016-06-28 2020-11-18 Dassault Systèmes Querying a database with morphology criterion
US9858638B1 (en) 2016-08-30 2018-01-02 Alex Simon Blaivas Construction and evolution of invariants to rotational and translational transformations for electronic visual image recognition
US9727945B1 (en) 2016-08-30 2017-08-08 Alex Simon Blaivas Construction and evolution of invariants to rotational and translational transformations for electronic visual image recognition
EP3321817A1 (en) * 2016-11-14 2018-05-16 Dassault Systèmes Querying a database based on a parametric view function
TWI607825B (en) * 2016-11-29 2017-12-11 財團法人工業技術研究院 System and method for optimizing machining process of nc program
US20180268614A1 (en) * 2017-03-16 2018-09-20 General Electric Company Systems and methods for aligning pmi object on a model
US10890540B2 (en) * 2017-03-21 2021-01-12 Asml Netherlands B.V. Object identification and comparison
US11281824B2 (en) 2017-12-13 2022-03-22 Dassault Systemes Simulia Corp. Authoring loading and boundary conditions for simulation scenarios
EP3503030A1 (en) * 2017-12-22 2019-06-26 The Provost, Fellows, Foundation Scholars, & the other members of Board, of the College of the Holy & Undiv. Trinity of Queen Elizabeth, Method and apparatus for generating a three-dimensional model
IL262881B (en) * 2018-11-08 2020-08-31 Neuberger Almog A method for transforming 3-dimensional image data into a 2-dimensional image
JP6900946B2 (en) * 2018-12-25 2021-07-14 横河電機株式会社 Engineering support system and engineering support method
CN109857886B (en) * 2018-12-28 2020-12-15 浙江工业大学 Three-dimensional model retrieval method based on minimum maximum value game theory view approximation
US10957080B2 (en) * 2019-04-02 2021-03-23 Adobe Inc. Automatic illustrator guides
US10909698B2 (en) 2019-05-03 2021-02-02 Procore Technologies, Inc. Pattern matching tool
US11423630B1 (en) * 2019-06-27 2022-08-23 Amazon Technologies, Inc. Three-dimensional body composition from two-dimensional images
CN110827393B (en) * 2019-07-17 2020-07-31 当家移动绿色互联网技术集团有限公司 Image processing method, image processing device, storage medium and electronic equipment
US11232629B1 (en) 2019-08-30 2022-01-25 Amazon Technologies, Inc. Two-dimensional image collection for three-dimensional body composition modeling
CN110569386A (en) * 2019-09-16 2019-12-13 哈尔滨理工大学 Three-dimensional model retrieval method based on hand-drawn sketch integrated descriptor
US11903730B1 (en) 2019-09-25 2024-02-20 Amazon Technologies, Inc. Body fat measurements from a two-dimensional image
US11069131B2 (en) 2019-09-26 2021-07-20 Amazon Technologies, Inc. Predictive personalized three-dimensional body models
US11735306B2 (en) 2019-11-25 2023-08-22 Dentsply Sirona Inc. Method, system and computer readable storage media for creating three-dimensional dental restorations from two dimensional sketches
US11199940B1 (en) * 2020-04-21 2021-12-14 Corel Corporation Three-dimensional operations based on planar projections in graphic user interfaces
US11854146B1 (en) 2021-06-25 2023-12-26 Amazon Technologies, Inc. Three-dimensional body composition from two-dimensional images of a portion of a body
US11887252B1 (en) 2021-08-25 2024-01-30 Amazon Technologies, Inc. Body model composition update from two-dimensional face images
US11861860B2 (en) 2021-09-29 2024-01-02 Amazon Technologies, Inc. Body dimensions from two-dimensional body images
CN115330938B (en) * 2022-08-05 2023-06-20 北京航空航天大学 Method for generating three-dimensional point cloud based on sketch of projection density map sampling

Family Cites Families (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6317953B1 (en) 1981-05-11 2001-11-20 Lmi-Diffracto Vision target based assembly
DE69132486T2 (en) 1990-09-04 2001-05-10 Canon Kk Three-dimensional modeling device and method
US5425109A (en) 1992-10-22 1995-06-13 Mutoh Industries Ltd. System for identifying freehand drawings
US5471541A (en) 1993-11-16 1995-11-28 National Research Council Of Canada System for determining the pose of an object which utilizes range profiles and synethic profiles derived from a model
US5864342A (en) 1995-08-04 1999-01-26 Microsoft Corporation Method and system for rendering graphical objects to image chunks
US6525722B1 (en) 1995-08-04 2003-02-25 Sun Microsystems, Inc. Geometry compression for regular and irregular mesh structures
US5943235A (en) 1995-09-27 1999-08-24 3D Systems, Inc. Rapid prototyping system and method with support region data processing
JP2909616B2 (en) * 1995-10-27 1999-06-23 株式会社超高速ネットワーク・コンピュータ技術研究所 3D shape display method
US5988862A (en) 1996-04-24 1999-11-23 Cyra Technologies, Inc. Integrated system for quickly and accurately imaging and modeling three dimensional objects
JP3807796B2 (en) 1996-10-04 2006-08-09 本田技研工業株式会社 Three-dimensional CAD system and image data conversion method
US6631364B1 (en) * 1997-03-26 2003-10-07 National Research Council Of Canada Method of searching 3-Dimensional images
US6047041A (en) 1997-09-08 2000-04-04 Scientific Measurement System Apparatus and method for comparison
JPH1178384A (en) 1997-09-12 1999-03-23 Peteio:Kk Three-dimensional shape data processing device, plate to be carved, and carving device
US5990897A (en) 1997-09-12 1999-11-23 Hanratty; Patrick J. Methods for automatically generating a three-dimensional geometric solid from two-dimensional view sets including automatic segregation of open, closed and disjoint curves into views using their center of gravity
US6026189A (en) 1997-11-13 2000-02-15 National Research Council Of Canada Method of recognizing objects within two-dimensional and three-dimensional images
US5980084A (en) 1997-11-24 1999-11-09 Sandia Corporation Method and apparatus for automated assembly
US6377865B1 (en) 1998-02-11 2002-04-23 Raindrop Geomagic, Inc. Methods of generating three-dimensional digital models of objects by wrapping point cloud data points
US6628824B1 (en) 1998-03-20 2003-09-30 Ken Belanger Method and apparatus for image identification and comparison
US6898315B2 (en) * 1998-03-23 2005-05-24 Microsoft Corporation Feature extraction for real-time pattern recognition using single curve per pattern analysis
US6411734B1 (en) 1998-04-03 2002-06-25 Cognex Corporation Object image search using validated sub-model poses
US6324299B1 (en) 1998-04-03 2001-11-27 Cognex Corporation Object image search using sub-models
US6362817B1 (en) 1998-05-18 2002-03-26 In3D Corporation System for creating and viewing 3D environments using symbolic descriptors
JP3256180B2 (en) 1998-06-09 2002-02-12 株式会社モノリス Method for encrypting and decrypting three-dimensional shape data
US6246412B1 (en) 1998-06-18 2001-06-12 Microsoft Corporation Interactive construction and refinement of 3D models from multiple panoramic images
US6421048B1 (en) 1998-07-17 2002-07-16 Sensable Technologies, Inc. Systems and methods for interacting with virtual objects in a haptic virtual reality environment
US5937235A (en) * 1998-07-30 1999-08-10 Xerox Corporation Reproduction machine including a developer material cartridge having a non-interfering dual-use sealing device
US6243093B1 (en) 1998-09-14 2001-06-05 Microsoft Corporation Methods, apparatus and data structures for providing a user interface, which exploits spatial memory in three-dimensions, to objects and which visually groups matching objects
US6222551B1 (en) 1999-01-13 2001-04-24 International Business Machines Corporation Methods and apparatus for providing 3D viewpoint selection in a server/client arrangement
US6578017B1 (en) 1999-02-26 2003-06-10 Information Decision Technologies, Llc Method to aid object detection in images by incorporating contextual information
US6614926B1 (en) 1999-06-29 2003-09-02 Cognex Corporation Methods and apparatuses for generating from an image a model of an object
US20040243483A1 (en) 1999-07-30 2004-12-02 Web2Cad Ag Mechanical engineering web portal
US6556704B1 (en) 1999-08-25 2003-04-29 Eastman Kodak Company Method for forming a depth image from digital image data
DE19944516B4 (en) 1999-09-16 2006-08-17 Brainlab Ag Three-dimensional shape detection with camera images
US6549201B1 (en) 1999-11-23 2003-04-15 Center For Advanced Science And Technology Incubation, Ltd. Method for constructing a 3D polygonal surface from a 2D silhouette by using computer, apparatus thereof and storage medium
US6771260B1 (en) 1999-12-13 2004-08-03 Amada Company, Limited Sketcher
US6441816B1 (en) * 1999-12-29 2002-08-27 Intel Corporation Method for modeling and rendering complex surfaces using local height maps
JP3887136B2 (en) 2000-01-14 2007-02-28 株式会社日立製作所 Object display method in virtual three-dimensional space and medium recording display program
US6611725B1 (en) 2000-02-03 2003-08-26 Solidworks Corporation Computer drawing system
FR2806158B1 (en) 2000-03-07 2002-05-17 Commissariat Energie Atomique METHOD FOR DETERMINING THE POSITION OR ORIENTATION OF AN OBJECT USING A MAGNETIC FIELD AND CORRESPONDING DEVICE
JP4341135B2 (en) 2000-03-10 2009-10-07 コニカミノルタホールディングス株式会社 Object recognition device
US6801882B1 (en) 2000-03-22 2004-10-05 General Electric Company Methods and systems for generating profile curves of solid models
JP3513562B2 (en) 2000-04-20 2004-03-31 インターナショナル・ビジネス・マシーンズ・コーポレーション Shape analysis system, three-dimensional shape model difference detection system, similar shape search system, shape analysis method, and storage medium
US6708071B1 (en) 2000-08-15 2004-03-16 Vought Aircraft Industries, Inc. Method and system for defining and verifying a part
US20020067354A1 (en) 2000-10-02 2002-06-06 Beom-Soo Oh Method and system for reconstructing 3D objects from free-hand line drawing
US6771276B1 (en) 2000-10-27 2004-08-03 Macromedia, Inc. Two-dimensional drawing environment utilizing perspective grids
US6914618B2 (en) 2000-11-02 2005-07-05 Sun Microsystems, Inc. Methods and systems for producing A 3-D rotational image from A 2-D image
US6636211B2 (en) 2000-12-15 2003-10-21 Dassault Systemes CAD/CAM feature tree with manipulatable 3D miniatures
US20020091679A1 (en) 2001-01-09 2002-07-11 Wright James E. System for searching collections of linked objects
JP3817689B2 (en) 2001-02-08 2006-09-06 株式会社日立製作所 CAD information management system and CAD information management method
US6804416B1 (en) 2001-03-16 2004-10-12 Cognex Corporation Method and system for aligning geometric object models with images
US6741248B2 (en) 2001-04-04 2004-05-25 Mitsubishi Electric Research Laboratories, Inc. Rendering geometric features of scenes and models by individual polygons
US6831643B2 (en) 2001-04-16 2004-12-14 Lucent Technologies Inc. Method and system for reconstructing 3D interactive walkthroughs of real-world environments
JP4635392B2 (en) 2001-08-09 2011-02-23 コニカミノルタホールディングス株式会社 3D object surface shape modeling apparatus and program
JP2005502111A (en) * 2001-08-31 2005-01-20 ソリッドワークス コーポレイション Simultaneous use of 2D and 3D modeling data
US6744434B2 (en) 2001-11-30 2004-06-01 Caterpillar Inc Cuts removal system for triangulated CAD Models
US20040037463A1 (en) 2002-01-28 2004-02-26 Calhoun Christopher L. Recognizing multi-stroke symbols
JP2004005364A (en) 2002-04-03 2004-01-08 Fuji Photo Film Co Ltd Similar image retrieval system
WO2003088085A1 (en) * 2002-04-04 2003-10-23 Arizona Board Of Regents Three-dimensional digital library system
US6928331B2 (en) 2002-04-10 2005-08-09 David Robert Wishengrad Orientation method
US7761265B2 (en) 2002-05-06 2010-07-20 Drexel University Method for comparing solid models
US7469242B2 (en) 2002-05-23 2008-12-23 The Boeing Company Central based computer network of solid models and associated data with search capability
US6831642B2 (en) * 2002-07-19 2004-12-14 Silicon Graphics, Inc. Method and system for forming an object proxy
JP2004164503A (en) 2002-11-15 2004-06-10 Olympus Corp Three-dimensional model retrieval method, three-dimensional model retrieval device, three-dimensional model retrieval program and three-dimensional model retrieval system
KR100513735B1 (en) 2002-12-05 2005-09-08 삼성전자주식회사 Method of perceptual 3D shape description and method and apparatus for searching 3D graphics model database using the same
EP1590762A4 (en) 2003-01-16 2007-07-25 Intelitrac Inc A method and or system to perform automated facial recognition and comparison using multiple 2d facial images parsed from a captured 3d facial image
EP1586020A2 (en) 2003-01-25 2005-10-19 Purdue Research Foundation Methods, systems, and data structures for performing searches on three dimensional objects
US20040156064A1 (en) * 2003-02-07 2004-08-12 Kevin Owen Printing methods and apparatus
WO2004075022A2 (en) 2003-02-20 2004-09-02 Schickler John F Parts search system
US7343039B2 (en) * 2003-06-13 2008-03-11 Microsoft Corporation System and process for generating representations of objects using a directional histogram model and matrix descriptor
WO2005027039A2 (en) 2003-09-08 2005-03-24 Laser Projection Technologies, Inc. 3d projection with image recording
KR100550329B1 (en) 2003-11-15 2006-02-08 한국전자통신연구원 An Apparatus and Method for Protein Structure Comparison and Search Using 3 Dimensional Edge Histogram

Also Published As

Publication number Publication date
US8982147B2 (en) 2015-03-17
US20060114252A1 (en) 2006-06-01
US7583272B2 (en) 2009-09-01
WO2006058292A3 (en) 2008-04-17
WO2006058292A8 (en) 2008-08-21
US20100076959A1 (en) 2010-03-25
WO2006058292A2 (en) 2006-06-01

Similar Documents

Publication Publication Date Title
US7583272B2 (en) Methods for retrieving shapes and drawings
Papadakis et al. PANORAMA: A 3D shape descriptor based on panoramic views for unsupervised 3D object retrieval
Ansary et al. A bayesian 3-d search engine using adaptive views clustering
Bimbo et al. Content-based retrieval of 3D models
Kazmi et al. A survey of 2D and 3D shape descriptors
Assfalg et al. Content-based retrieval of 3-D objects using spin image signatures
Lian et al. Visual similarity based 3D shape retrieval using bag-of-features
US7343039B2 (en) System and process for generating representations of objects using a directional histogram model and matrix descriptor
US20090157649A1 (en) Hybrid Method and System for Content-based 3D Model Search
Sfikas et al. Partial matching of 3D cultural heritage objects using panoramic views
Guo et al. 3D object recognition from cluttered and occluded scenes with a compact local feature
Zhang et al. Survey on 3D shape descriptors
Akgül et al. Density-based 3D shape descriptors
Lin et al. Three-dimensional CAD model matching with anisotropic diffusion maps
Zhou 3D urban modeling from city-scale aerial LiDAR data
Liu et al. Thickness histogram and statistical harmonic representation for 3D model retrieval
Leng et al. A 3D shape retrieval framework for 3D smart cities
Schmitt et al. A 3D shape descriptor based on depth complexity and thickness histograms
Mahmoudi et al. 3D objects retrieval using curvature scale space and Zernike moments
CN109272013A (en) Method for measuring similarity based on study
Ramezani et al. 3D Models' retrieval system design based on Poisson's histogram of 2D selective views
Rustamov Augmented planar reflective symmetry transform
Ohishi et al. Densely sampled local visual features on 3d mesh for retrieval
Lin et al. Matching of 3D CAD models with density-based approaches: An experimental evaluation of the invariance, bin size and noise robustness
Wen et al. Lightweight Web3D modeling by finding and reusing repeated components

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05852290

Country of ref document: EP

Kind code of ref document: A2