Publication number | US8589369 B2 |

Publication type | Grant |

Application number | US 13/360,483 |

Publication date | Nov 19, 2013 |

Filing date | Jan 27, 2012 |

Priority date | Mar 5, 2001 |

Fee status | Paid |

Also published as | CN1462399A, EP1366437A1, US7283987, US7797296, US8121987, US9069809, US20020169784, US20080033989, US20100287144, US20120124010, US20140081930, WO2002071270A1 |

Publication number | 13360483, 360483, US 8589369 B2, US 8589369B2, US-B2-8589369, US8589369 B2, US8589369B2 |

Inventors | Sang K. Cha, Ki-Hong Kim, Keun-Joo Kwon |

Original assignee | SAP AG |

Patent citations (35), Non-patent citations (19), Cited by (3), Classifications (15), Legal events (2) | |

External links: USPTO, USPTO assignment, Espacenet | |

US 8589369 B2

Abstract

The apparatuses and methods described herein may operate to identify, from an index structure stored in memory, a reference minimum bounding shape that encloses at least one minimum bounding shape. Each of the at least one minimum bounding shape may correspond to a data object associated with a leaf node of the index structure. Coordinates of a point of the at least one minimum bounding shape may be associated with a set of first values to produce a relative representation of the at least one minimum bounding shape. The set of first values may be calculated relative to coordinates of a reference point of the reference minimum bounding shape such that each of the set of first values comprises a first number of significant bits fewer than a second number of significant bits representing a second value associated with a corresponding one of absolute coordinates of the point.

Claims (20)

1. A method, comprising:

identifying, from an index structure stored in memory, a reference minimum bounding shape that encloses at least one minimum bounding shape, each of the at least one minimum bounding shape corresponding to a data object associated with a leaf node of the index structure; and

associating, using one or more processors, coordinates of a point of the at least one minimum bounding shape with a set of first values to produce a relative representation of the at least one minimum bounding shape, the set of first values being calculated relative to coordinates of a reference point of the reference minimum bounding shape such that each first value comprises a first number of significant bits fewer than a second number of significant bits representing a second value associated with a corresponding one of absolute coordinates of the point.

2. The method of claim 1, further comprising:

compressing the relative representation using a finite level of quantization to produce a quantized representation of the at least one minimum bounding shape.

3. The method of claim 2, wherein the compressing comprises:

choosing the finite level of quantization from a set of quantization levels.

4. The method of claim 2, further comprising:

storing the quantized representation in at least one node of the index structure.

5. The method of claim 2, further comprising:

responsive to a query, searching the index structure using the quantized representation.

6. The method of claim 5, wherein the searching comprises:

transforming a minimum bounding shape corresponding to the query into another quantized representation.

7. The method of claim 6, wherein the searching comprises:

comparing the quantized representation with the other quantized representation to determine whether there is an overlap.

8. The method of claim 7, wherein the searching comprises:

determining that an entry corresponding to the quantized representation matches the query responsive to the overlap being identified.

9. The method of claim 2, further comprising:

recalculating the quantized representation responsive to a change in the at least one minimum bounding shape being identified.

10. The method of claim 1, wherein the data object is stored in main memory or on a disk.

11. The method of claim 1, wherein the index structure comprises at least one of an R-tree, an R*-tree, an R+-tree, or a Hilbert R-tree.

12. A system, comprising:

memory to store an index structure; and

one or more processors to execute a compression engine, the compression engine configured to:

identify, from the index structure, a reference minimum bounding shape that encloses at least one minimum bounding shape, each of the at least one minimum bounding shape corresponding to a data object associated with a leaf node of the index structure; and

associate coordinates of a point of the at least one minimum bounding shape with a set of first values to produce a relative representation of the at least one minimum bounding shape, the set of first values being calculated relative to coordinates of a reference point of the reference minimum bounding shape such that each first value comprises a first number of significant bits fewer than a second number of significant bits representing a second value associated with a corresponding one of absolute coordinates of the point.

13. The system of claim 12, wherein the compression engine is configured to compress the relative representation using a finite level of quantization to produce a quantized representation of the at least one minimum bounding shape.

14. The system of claim 13, wherein at least one internal node of the index structure comprises a plurality of entries including a first entry, a second entry, and a third entry, the first entry storing the reference minimum bounding shape, the second entry storing a pointer to a child node, and the third entry storing the quantized representation.

15. The system of claim 14, wherein the pointer is the only pointer stored in the at least one internal node.

16. The system of claim 13, wherein at least one non-root node of the index structure comprises a plurality of entries including a first entry and a second entry, the first entry storing a pointer to a child node, and the second entry storing the quantized representation.

17. The system of claim 16, wherein the reference minimum bounding shape is stored only in a root node of the index structure.

18. The system of claim 13, wherein the reference minimum bounding shape or the quantized representation is stored only in non-leaf nodes of the index structure.

19. A non-transitory computer-readable storage device storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:

identifying, from an index structure stored in memory, a reference minimum bounding shape that encloses at least one minimum bounding shape, each of the at least one minimum bounding shape corresponding to a data object associated with a leaf node of the index structure; and

associating coordinates of a point of the at least one minimum bounding shape with a set of first values to produce a relative representation of the at least one minimum bounding shape, the set of first values being calculated relative to coordinates of a reference point of the reference minimum bounding shape such that each first value comprises a first number of significant bits fewer than a second number of significant bits representing a second value associated with a corresponding one of absolute coordinates of the point.

20. The non-transitory computer-readable storage device of claim 19, wherein the operations comprise:

compressing the relative representation using a finite level of quantization to produce a quantized representation of the at least one minimum bounding shape.

Description

The present application is a continuation of U.S. patent application Ser. No. 12/847,475, filed Jul. 30, 2010, now issued as U.S. Pat. No. 8,121,987, which is a continuation of U.S. patent application Ser. No. 11/867,115, filed Oct. 4, 2007, now issued as U.S. Pat. No. 7,797,296, which is a continuation of U.S. application Ser. No. 10/087,360, filed Mar. 1, 2002, now issued as U.S. Pat. No. 7,283,987, and claims the benefit of U.S. Provisional Application Ser. No. 60/272,828, filed Mar. 5, 2001, entitled "COMPRESSION SCHEME FOR IMPROVING INDEX CACHE BEHAVIOR IN MAIN-MEMORY DATABASE," all of which applications are incorporated herein by reference in their entireties.

I. Technical Field

Various embodiments of the invention relate generally to database systems. More particularly, various embodiments of the invention relate to a compression scheme for improving index cache behavior in main-memory database systems.

II. Description of the Related Art

With server DRAM modules priced at less than $2,000/GB, many of the database tables and indexes can now fit in the main memory of modern computer systems. It is predicted that it will be common to have terabytes of main memory for a database within ten years or so.

With such a large amount of memory, the traditional bottleneck of disk access almost disappears, especially for search transactions. Instead, memory access becomes a new bottleneck. A recent study with commercial DBMSs shows that half of the execution time is spent on memory access when the whole database resides in memory.

Since speed in DRAM chips has been traded off for capacity, the gap between CPU speed and DRAM speed has grown significantly during the past decade. In today's computer systems, each memory access costs tens of processor cycles. To overcome this gap, modern processors adopt up to several megabytes of SRAM as the cache, which can be accessed in just one or two processor cycles.

Recognizing the widening gap between CPU speed and DRAM speed, prior work emphasized the importance of cache behavior in the design of main memory indexes. It was shown that cache-conscious search trees ("CSS-trees") perform lookups much faster than binary search trees and T-trees in a read-only environment. B+-trees and their variants were also shown to exhibit reasonably good cache behavior.

For example, CSB+-trees ("Cache Sensitive B+-trees") store child nodes contiguously in memory to eliminate most child pointers in a node, keeping only the first one. The location of the i-th child node is computed from that of the first child. By providing more room for keys in the node, this pointer elimination approach effectively doubles the fanout of a B+-tree. Given a node size on the order of the cache block size, the fanout doubling reduces the height of the B+-tree, which in turn leads to a smaller number of cache misses during tree traversal.
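The address arithmetic behind this pointer elimination can be sketched as follows; the contiguous layout is described above, while the 128-byte node size is an illustrative assumption rather than a detail from the text.

```python
# CSB+-tree-style child lookup: the children of a node are laid out
# contiguously, so the i-th child's address follows from the first
# child's address alone, with no per-child pointers stored.
NODE_SIZE = 128  # illustrative: a node spanning two 64-byte cache blocks

def child_address(first_child_addr: int, i: int) -> int:
    """Compute the location of the i-th child from that of the first child."""
    return first_child_addr + i * NODE_SIZE
```

Only one child pointer per node needs to be stored; the remaining child locations are derived arithmetically, freeing node space for keys.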

Note that such a pointer elimination technique does not provide much benefit in disk-based indexes, where the fanout is typically on the order of a few hundred and doubling the fanout does not lead to an immediate reduction in the tree height.

However, the pointer elimination technique cannot be directly applied to multidimensional index structures such as the R-tree, which have numerous application domains such as spatio-temporal databases, data warehouses, and directory servers. The data objects stored in an R-tree are approximated by so-called minimum bounding rectangles ("MBRs") in the multidimensional index space, where each MBR is the minimal hyper-rectangle (i.e., a 2-dimensional or higher-dimensional rectangle or box) enclosing the corresponding data object. Those skilled in the art would appreciate that the MBR may be extended to a multi-dimensional shape including boxes or pyramids.

Typically, MBRs are much larger than pointers. Thus, pointer elimination alone cannot widen the index tree enough to reduce the tree height significantly. For example, when a 16-byte MBR is used for the two-dimensional key, the simple elimination of a 4-byte pointer provides at most 25% more room for the keys, and this increase is not big enough to make any significant difference in the tree height or the cache behavior. Therefore, there is a need for a scheme for improving cache behavior in accessing multidimensional indexes of a main-memory database.

FIGS. 1B and 1C are illustrations of the QRMBR technique of the present invention, according to various embodiments.

FIGS. 2B and 2C are illustrations of the data structure of the CR-tree, according to various embodiments.

FIGS. 14B and 14C are graphs showing the ratio of false hits incurred by quantization, according to various embodiments.

FIGS. 16B and 16C are graphs showing the search time with varying quantization levels, according to various embodiments.

Recognizing that MBR keys occupy most of the index data in multidimensional indexes such as R-trees, various embodiments of the present invention achieve inexpensive compression of MBR keys to improve the index cache behavior. In one embodiment, for example, a novel tree structure called the "CR-tree" (cache-conscious R-tree) is proposed, where child nodes are grouped into a parent node so that each node occupies only a small portion of the data space of its parent node. In the CR-tree, an MBR is represented relative to its parent MBR so that the coordinates of the resultant relative MBR have fewer significant bits, with many leading 0's. To further reduce the number of bits per MBR, the CR-tree also cuts off trailing insignificant bits by quantization.

In various embodiments, the analytical results and the experimental results agree in showing that the compression technique can reduce the MBR size to less than one fourth of the uncompressed size, thereby increasing the fanout by more than 150%. A potential problem with the proposed technique is that the information loss from quantization may incur false hits, which have to be filtered out through a subsequent refinement step. However, requiring this refinement step is not itself a problem, because such a step is needed in most multidimensional indexes, and the number of false hits can be kept negligibly small by choosing the quantization level properly, so that the cost of filtering out false hits is paid off by the significant saving in cache misses.

Various embodiments also include several alternative designs of the CR-tree, including whether to use the pointer elimination technique introduced in the CSB+-tree, whether to apply the proposed compression technique to leaf nodes, the choice of quantization levels, and the choice of node size. The experimental results show that all the resultant CR-tree variants significantly outperform the R-tree in terms of both search performance and space requirement. The basic CR-tree, which uses only the proposed technique, performs search operations faster than the R-tree while performing update operations similarly to the R-tree and using less memory space. Compared with the basic CR-tree, most CR-tree variants use less memory space at some algorithmic overhead.

In various embodiments, for example, a reference minimum bounding shape that encloses at least one minimum bounding shape may be identified from an index structure stored in memory. Each of the at least one minimum bounding shape may correspond to a data object associated with a leaf node of the index structure. Coordinates of a point of the at least one minimum bounding shape may be associated with a set of first values to produce a relative representation of the at least one minimum bounding shape. The set of first values may be calculated relative to coordinates of a reference point of the reference minimum bounding shape such that each of the set of first values comprises a first number of significant bits fewer than a second number of significant bits representing a second value associated with a corresponding one of absolute coordinates of the point.

In various embodiments, for example, the relative representation may be compressed using a finite level of quantization to produce a quantized representation of the at least one minimum bounding shape. Also disclosed are methods, systems and non-transitory computer-readable storage devices for accomplishing the same scheme as described above.

Various embodiments may be based on making the R-tree cache-conscious by compressing MBRs. An R-tree is a height-balanced tree structure designed specifically for indexing multi-dimensional data objects in a database. It stores the two- or higher-dimensional minimum bounding rectangle ("MBR") of a data object as the key in the leaf pages. Various embodiments of the present invention are also applicable to a variant of the R-tree called the R*-tree, which improves the search performance by using a better heuristic for redistributing entries and by dynamically reorganizing the tree during insertion. Those skilled in the art would appreciate that various embodiments of the present invention are readily applicable to other variants of the R-tree, such as the R+-tree, the Hilbert R-tree, or the X-tree.

FIG. 1A shows MBRs R0~R3. FIG. 1B shows R1~R3 represented relative to the lower left corner of R0; these relative coordinates have fewer significant bits than the absolute coordinates. FIG. 1C shows R1~R3 quantized into 16 levels, or four bits, by cutting off trailing insignificant bits. The resultant MBR is called a quantized relative MBR ("QRMBR"). Note that QRMBRs can be slightly larger than the original MBRs.
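The two-step construction above (relative representation, then quantization into 16 levels) can be sketched numerically; the coordinates of R0 and R1 below are invented for illustration.

```python
import math

def to_relative(mbr, ref):
    # Represent mbr = (xl, yl, xh, yh) relative to ref's lower left corner.
    return (mbr[0] - ref[0], mbr[1] - ref[1],
            mbr[2] - ref[0], mbr[3] - ref[1])

def quantize(rmbr, ref, levels=16):
    # Cut off trailing bits: floor the lower coordinates and ceil the upper
    # ones, so the quantized MBR may grow slightly but never shrinks.
    wx, wy = ref[2] - ref[0], ref[3] - ref[1]
    lo = lambda v, w: min(levels - 1, math.floor(v / w * levels))
    hi = lambda v, w: min(levels, math.ceil(v / w * levels))
    return (lo(rmbr[0], wx), lo(rmbr[1], wy),
            hi(rmbr[2], wx), hi(rmbr[3], wy))

# Invented example: R0 is the reference MBR enclosing R1.
R0 = (100.0, 100.0, 260.0, 180.0)
R1 = (110.0, 130.0, 150.0, 170.0)
qrmbr = quantize(to_relative(R1, R0), R0)  # each value fits in four bits
```

With 16 levels, each quantized coordinate needs only four bits, versus 32 bits for the absolute coordinate it replaces.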

One example embodiment of the present invention is an index tree, called CR-tree (“cache-conscious R-tree”), a R-tree variant that uses QRMBRs as index keys. The number of quantization levels may be the same for all the nodes in a CR-tree.

Each CR-tree node contains a flag **201** indicating whether it is a leaf node or a non-leaf node (internal node), the number of stored entries **202**, the reference MBR **203** that tightly encloses its entire child MBRs, and a number of entries such as **204**. The reference MBR is used to calculate the QRMBRs stored in the node.

Each internal node entry is of the form (QRMBR **211**, ptr **212**), where QRMBR **211** is a quantized relative representation of the child node MBR, and ptr **212** is the address of a child node.

Each leaf node entry is of the form (QRMBR **221**, ptr **222**), where QRMBR **221** is a quantized relative representation of the object MBR and ptr **222** refers to a data object. Typically, each of the x and y coordinates is quantized into 256 levels, or one byte.

For example, one goal of various embodiments of the present invention is to reduce the index search time in main memory databases, especially when using multidimensional indexes. In disk-based indexes, the disk access cost is almost independent of the node size for moderately sized nodes, but the memory access cost is nearly proportional to the node size. While disk-based indexes are designed such that the number of disk accesses is minimized, main memory indexes need to be designed such that the amount of accessed index data, or c·N_{node access}, is minimized, where c denotes the node size in cache blocks and N_{node access} denotes the number of accessed nodes.

In main memory indexes, the search time mainly consists of the key comparison time and the memory access time incurred by cache misses. If a cache miss occurs, the CPU has to wait until the missing data are cached. A cache miss can occur for three reasons: missing data, missing instructions, and missing TLB (translation look-aside buffer) entries, which are needed to map a virtual memory address to a physical address. Therefore, the goal is expressed as minimizing

T_{index search} ≅ T_{key compare} + T_{data cache} + T_{TLB cache}

where T_{key compare} is the time spent comparing keys that are cached, T_{data cache} is the time spent caching data, and T_{TLB cache} is the time spent caching TLB entries. For the purpose of illustration, the caching time for missing instructions is omitted because the number of instruction misses depends mostly on the compiler used, and the caching time is hard to control.

Let c be the size of a node in cache blocks, and let N_{node access} be the number of nodes accessed in processing a query. Let C_{key compare} be the key comparison cost per cache block and C_{cache miss} be the cost of replacing a cache block. Let C_{TLB miss} be the cost of handling a single TLB miss. When the size of a node is smaller than that of a memory page, each access to a node incurs at most one TLB miss. For the purpose of illustration, it is assumed that nodes have been allocated randomly and that no node and no TLB entry are cached initially. Then,

T_{index search} ≅ (c·C_{key compare} + c·C_{cache miss} + C_{TLB miss})·N_{node access}

Since C_{cache miss} and C_{TLB miss} are constant for a given platform, it is possible to control three parameters: c, C_{key compare}, and N_{node access}. Among them, C_{key compare} is not expected to decrease noticeably because the key comparison is generally very simple. In addition, C_{TLB miss} and C_{cache miss} typically have similar values. Therefore, the index search time mostly depends on c·N_{node access}.

It is observed that the amount of accessed index data can best be reduced by compressing index entries. c·N_{node access} can be minimized in three ways: changing the node size such that c·N_{node access} becomes minimal, packing more entries into a fixed-size node, and clustering index entries into nodes efficiently. The second is often termed compression and the third clustering.

In the one-dimensional case, the optimal node size is equal to the cache block size. In one-dimensional trees like the B+-tree, since exactly one internal node is accessed for each height even for a range query, the number of visited internal nodes decreases logarithmically in the node size. On the other hand, the number of visited leaf nodes decreases linearly with the node size, and c increases linearly with the node size. Therefore, c·N_{node access} increases with the node size, and thus it is minimal when c is one.

In multidimensional indexes, more than one internal node of the same height can be accessed even for an exact match query, and the number of accessed nodes of the same height decreases as the node size increases. Since this decrease is combined with the log-scale decrease of tree height, the combined decrease rate of node accesses can exceed the linear increase rate of c. It is shown analytically that the optimal node size depends on several factors such as the query selectivity and the cardinality (the number of entries in the index structure).

Compressing index entries is equivalent to increasing the node size without increasing c. In other words, it reduces N_{node access }while keeping c fixed. Thus, it is highly desirable. Compression has been addressed frequently in disk-based indexes because it can reduce the tree height, but there is little dedicated work, especially in multidimensional indexes. The following analysis shows why compression is not important in disk-based indexes but is important in main memory indexes.

Suppose that tree A can pack f entries on average in a node and tree B can pack 2f entries in a node using a good compression scheme. Then, their expected heights are log_{f} N and log_{2f} N, respectively. Thus, the height of B is 1/log_{2} f + 1 (= log_{f} N / log_{2f} N) times smaller than that of A. In disk-based indexes, the typical size of a node varies from 4 KB to 64 KB. Assuming that the node size is 8 KB and nodes are 70% full, f is 716 (≅ 8192×0.7/8) for a B+-tree index and about 286 (≅ 8192×0.7/20) for a two-dimensional R-tree. Thus, 1/log_{2} f is typically around 0.1. On the other hand, the size of a node is small in main memory indexes. With a node occupying two cache blocks, or 128 bytes, f is about 11 for a B+-tree and about 4 for a two-dimensional R-tree. Thus, 1/log_{2} f is 0.29 for the B+-tree and 0.5 for the R-tree. In summary, node compression can reduce the height of main memory indexes significantly because the size of nodes is small.
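The arithmetic in the preceding paragraph can be reproduced directly; the node sizes, fill factor, and entry sizes are the ones assumed above.

```python
import math

def height_ratio(f):
    # Height-reduction factor when compression doubles the fanout:
    # log_f(N) / log_2f(N) = 1 + 1/log2(f), independent of N.
    return 1 + 1 / math.log2(f)

# Disk-based index: 8 KB nodes, 70% full, 8-byte B+-tree entries,
# 20-byte two-dimensional R-tree entries.
f_disk_btree = int(8192 * 0.7 / 8)    # about 716
f_disk_rtree = int(8192 * 0.7 / 20)   # about 286

# Main-memory index: 128-byte nodes (two cache blocks) give f of
# about 11 (B+-tree) and about 4 (2-d R-tree).
gain_disk = height_ratio(f_disk_btree)  # ~1.1: doubling fanout barely helps
gain_mem_btree = height_ratio(11)       # ~1.29
gain_mem_rtree = height_ratio(4)        # 1.5
```

The small in-memory fanouts are exactly why compression pays off: halving the effective entry size shortens the tree by up to a third or a half, rather than by a tenth.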

Clustering has been studied extensively in disk-based index structures. In terms of clustering, the B+-tree is optimal in one-dimensional space, but no optimal clustering scheme is known for the multidimensional case. Instead, many heuristic schemes have been studied in various multidimensional index structures. Various embodiments of the present invention may be used with most of these clustering schemes.

MBR Compression

There are two desirable properties for an MBR compression scheme. One is the property of overlap check without decompression: whether two MBRs overlap can be determined directly from the corresponding compressed MBRs, without decompressing them. A basic R-tree operation is to check whether each MBR in a node overlaps a given query rectangle. Therefore, when storing compressed MBRs in a node, this property allows the operation to be performed by compressing the query rectangle once instead of decompressing all the compressed MBRs in the node.

The other property is simplicity. Compression and decompression should be computationally simple and should be performed using only already-cached data. Conventional lossless compression algorithms, such as the one used in the GNU gzip program, are expensive in terms of both computation and memory access because most of them maintain an entropy-based mapping table and look up the table for compression and decompression. Although they may be useful for disk-based indexes, they are not adequate for main memory indexes.

RMBR Technique

One way to compress MBRs is to represent keys relative to a reference MBR within a node. If the coordinates of an MBR are represented relative to the lower left corner of its parent MBR, the resultant relative coordinates have many leading 0's. In the relative representation of MBR ("RMBR"), cutting off these leading 0's makes it possible to effectively reduce the MBR size.

Let P and C be MBRs, each represented by its lower left and upper right coordinates (xl, yl, xh, yh), and let P enclose C. Then, the relative representation of C with respect to P has coordinates relative to the lower left corner of P:

RMBR_{P}(C) = (C.xl − P.xl, C.yl − P.yl, C.xh − P.xl, C.yh − P.yl)

However, the following simple analysis shows that the RMBR technique can save only about 32 bits per MBR. For simplicity, it is assumed that the coordinates of MBR are uniformly distributed in their domain and that R-tree nodes of the same height have square-like MBRs roughly of the same size. Without loss of generality, it is assumed that the domain of x coordinates has the unit length and consists of 2^{32 }different values equally spaced.

Let f be the average fanout of leaf nodes, and let N be the total number of data objects. Then, there are roughly N/f leaf nodes, whose MBRs have an area of f/N and a side length of √(f/N) along each axis. Since there are 2^{32} different values in the unit interval along each axis, there are 2^{32}·√(f/N) different values in an interval of length √(f/N). Therefore, it is possible to save 32 − log_{2}(2^{32}·√(f/N)) bits, or log_{2}√(N/f) bits, for each x coordinate value. When N is one million and f is 11, about 8.2 bits are saved. Multiplying by 4, it is possible to save about 32 bits per MBR. Note that the number of saved bits does not depend on the original number of bits as long as the former is smaller than the latter.
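The bit-saving estimate above can be checked numerically; this is a sketch of the same analysis, under the same uniformity assumptions.

```python
import math

def bits_saved_per_coordinate(N, f):
    # log2 sqrt(N/f): bits saved per x coordinate when N objects are split
    # into N/f leaf nodes with square-like MBRs of side sqrt(f/N).
    return math.log2(math.sqrt(N / f))

saved = bits_saved_per_coordinate(N=1_000_000, f=11)
per_mbr = 4 * saved  # four coordinate values per two-dimensional MBR
```

With N = 1,000,000 and f = 11, `saved` is about 8.2 bits, and `per_mbr` about 33 bits, matching the "about 32 bits per MBR" figure above.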

It is possible to extend this analysis such that the number of saved bits is parameterized further by the dimensionality d. The extended result is

(1/d)·log_{2}(N/f)  (1)

Formula (1) increases logarithmically in N, decreases logarithmically in f, and decreases inversely with the dimensionality d. Therefore, the number of saved bits mainly depends on the dimensionality. In one-dimensional space, the relative representation technique can save almost 16 bits for each scalar, but it becomes useless as the dimensionality increases.

QRMBR Technique

In addition to the RMBR technique, quantization may be performed as an additional step for further compression. In the quantized RMBR ("QRMBR") technique, the quantization step cuts off trailing insignificant bits from an RMBR, whereas the RMBR technique cuts off leading non-discriminating bits from an MBR. It is shown below that quantizing an RMBR does not affect the correctness of index search, and that the small overhead of quantization is justified by a significant saving in cache misses.

Let I be a reference MBR, and let l be a desired number of quantization levels. Then, the corresponding quantized relative representation of an MBR C is defined as

QRMBR_{I,l}(C) = (φ_{I.xl,I.xh,l}(C.xl), φ_{I.yl,I.yh,l}(C.yl), Φ_{I.xl,I.xh,l}(C.xh), Φ_{I.yl,I.yh,l}(C.yh)),

where φ_{a,b,l} maps a coordinate to the index of its quantization cell over [a, b] rounding down, and Φ_{a,b,l} is the corresponding function rounding up.
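A plausible implementation of the quantization functions φ_{a,b,l} and Φ_{a,b,l} used in the proof of Lemma 1 below can be sketched as follows, assuming l uniform cells over a reference interval [a, b]; the floor/ceiling split matches the properties the proof relies on (both monotonically non-decreasing, with φ(r) ≤ Φ(r)).

```python
import math

def phi(r, a, b, l):
    # Lower-coordinate quantizer: floor into one of l uniform cells over
    # [a, b], clamped to 0..l-1. Monotonically non-decreasing in r.
    return min(l - 1, max(0, math.floor((r - a) * l / (b - a))))

def Phi(r, a, b, l):
    # Upper-coordinate quantizer: ceiling counterpart; Phi(r) >= phi(r).
    return min(l, max(0, math.ceil((r - a) * l / (b - a))))

def qrmbr(C, I, l):
    # Quantized relative representation of MBR C w.r.t. reference MBR I;
    # both are (xl, yl, xh, yh) tuples.
    return (phi(C[0], I[0], I[2], l), phi(C[1], I[1], I[3], l),
            Phi(C[2], I[0], I[2], l), Phi(C[3], I[1], I[3], l))
```

Flooring the lower coordinates and ceiling the upper ones guarantees the quantized rectangle encloses the original, which is what makes the overlap test conservative.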

The following lemma shows that the QRMBR technique satisfies the first of the two desirable properties. Therefore, the computational overhead of the QRMBR technique is the cost of compressing the query rectangle into a QRMBR for each visited node. In the present implementation, compressing an MBR into a QRMBR consumes at most about 60 instructions, which corresponds to less than 120 ns on a 400 MHz processor because of pipelining. In addition, it incurs no memory access as long as the query MBR and the MBR of the node being accessed are cached.

Lemma 1:

Let A and B be MBRs. For any MBR I and integer l, it holds that if QRMBR_{I,l}(A) and QRMBR_{I,l}(B) do not overlap, A and B also do not overlap.

Proof:

It is proved by proving the contrapositive: if A and B overlap, then QRMBR_{I,l}(A) and QRMBR_{I,l}(B) overlap. By definition, two rectangles overlap if and only if they share at least one point. Thus, A and B share at least one point; let (x, y) denote this point. Then the following holds:

A.xl ≤ x ≤ A.xh, A.yl ≤ y ≤ A.yh

B.xl ≤ x ≤ B.xh, B.yl ≤ y ≤ B.yh

For simplicity, the subscripts a, b, and l are omitted from the quantization functions φ and Φ. Since φ and Φ are monotonically non-decreasing functions and φ(r) ≤ Φ(r) for any r ∈ R,

φ(A.xl) ≤ φ(x) ≤ Φ(x) ≤ Φ(A.xh), φ(A.yl) ≤ φ(y) ≤ Φ(y) ≤ Φ(A.yh)

φ(B.xl) ≤ φ(x) ≤ Φ(x) ≤ Φ(B.xh), φ(B.yl) ≤ φ(y) ≤ Φ(y) ≤ Φ(B.yh)

Thus, QRMBR_{I,l}(A) and QRMBR_{I,l}(B) share at least the point (φ(x), φ(y)). Hence, they overlap, which completes the proof.

Since it is generally not possible to recover the original coordinates of an MBR from its QRMBR, there is the possibility of incorrectly determining the overlap relationship between two MBRs. However, Lemma 1 guarantees that there is no possibility of saying two actually overlapping MBRs do not overlap. Thus, the QRMBR technique does not miss a data object that satisfies a query.

However, there is still a possibility that two actually non-overlapping MBRs may overlap. This means that the result of index search may contain false hits that have to be filtered out through a subsequent refinement step. This refinement step is needed for most multidimensional index structures because it is often the case that MBRs are not exact keys of data objects. Thus, requiring the refinement step itself is not an overhead, but the number of false hits can be. The number of false hits can be made negligibly small, such as fewer than one percent, by choosing the quantization level properly.
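The no-false-dismissal guarantee, and the occasional false hit that the refinement step removes, can both be demonstrated with a randomized check; the uniform-grid quantizer below is a simplified stand-in for the QRMBR computation, and the box sizes and seed are arbitrary.

```python
import math, random

def quantize_box(C, I, l=256):
    # Simplified QRMBR-style quantizer: floor lower / ceil upper coordinates
    # on an l-cell grid over the reference MBR I = (xl, yl, xh, yh).
    s = lambda v, a, b: (v - a) * l / (b - a)
    return (min(l - 1, math.floor(s(C[0], I[0], I[2]))),
            min(l - 1, math.floor(s(C[1], I[1], I[3]))),
            min(l, math.ceil(s(C[2], I[0], I[2]))),
            min(l, math.ceil(s(C[3], I[1], I[3]))))

def overlaps(a, b):
    # Two boxes overlap iff their extents intersect on both axes.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

I = (0.0, 0.0, 1.0, 1.0)
random.seed(7)
false_hits = 0
for _ in range(10_000):
    A, B = [(x, y, min(1.0, x + 0.05), min(1.0, y + 0.05))
            for x, y in [(random.random(), random.random()) for _ in range(2)]]
    qA, qB = quantize_box(A, I), quantize_box(B, I)
    if overlaps(A, B):
        assert overlaps(qA, qB)   # Lemma 1: never a false dismissal
    elif overlaps(qA, qB):
        false_hits += 1           # false hit, removed by the refinement step
```

The assertion never fires, while `false_hits` stays a small fraction of the trials; raising `l` shrinks it further, which is the quantization-level trade-off described above.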

CR-Tree According to Various Embodiments of the Present Invention

Search Procedure:

Given a CR-tree and a query rectangle Q, find all index records whose QRMBRs overlap Q.

1. Push the root node to the initially empty stack S (step **301**).

2. If S is empty (step **302**), return the result set (step **303**) and stop (step **304**).

3. If S is not empty (step **302**), pop a node N from S (step **305**) and set R to be QRMBR_{N.MBR,l}(Q) (step **306**).

4. If N is not a leaf node (step **307**), check each entry E to determine whether E.QRMBR overlaps R. If so, push E.ptr to S (step **308**).

5. If N is a leaf node (step **307**), check all entries E to determine whether E.QRMBR overlaps R. If so, add E.ptr to the result set (step **309**).

6. Repeat from step 2.
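The six steps above can be sketched as follows; the `Node` layout and the `quantize_query` helper are simplified stand-ins for the structures described earlier, not the patent's exact layout.

```python
import math
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class Node:
    is_leaf: bool
    mbr: Tuple[float, float, float, float]   # reference MBR of this node
    entries: List[Tuple[tuple, Any]]         # (QRMBR, ptr) pairs

def quantize_query(Q, ref, l=256):
    # Step 3: compress the query rectangle once per visited node,
    # using the node's reference MBR (floor lower / ceil upper).
    wx, wy = ref[2] - ref[0], ref[3] - ref[1]
    lo = lambda v, w, o: min(l - 1, max(0, math.floor((v - o) * l / w)))
    hi = lambda v, w, o: min(l, max(0, math.ceil((v - o) * l / w)))
    return (lo(Q[0], wx, ref[0]), lo(Q[1], wy, ref[1]),
            hi(Q[2], wx, ref[0]), hi(Q[3], wy, ref[1]))

def q_overlaps(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def search(root, Q):
    result, stack = [], [root]              # step 1
    while stack:                            # steps 2-3
        n = stack.pop()
        R = quantize_query(Q, n.mbr)
        for qr, ptr in n.entries:           # steps 4-5
            if q_overlaps(qr, R):
                (result if n.is_leaf else stack).append(ptr)
    return result                           # step 6 loops; step 2 returns
```

The returned pointers are candidates only; false hits introduced by quantization are filtered out in the subsequent refinement step.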

An entry for a child node with MBR C and address p is inserted into a node N by creating the entry (QRMBR_{N.MBR,l}(C), p) and appending it to N. If N.MBR has been enlarged, recalculate all the QRMBRs in N by accessing their actual MBRs, and invoke the AdjustTree procedure passing N.

To insert a new data object (step **315**), the CR-tree descends from the root by choosing the child node that needs the least enlargement to enclose the new key of the object MBR. If the node's MBR encloses the new key (step **316**), a relative key is calculated for the new entry (step **317**). If the node's MBR does not enclose the new key (step **316**), the node's MBR must be enlarged to enclose the new key (step **318**).

When visiting an internal node to choose one of its children, the object MBR is first transformed into the QRMBR using the node MBR as the reference MBR. Then, the enlargement is calculated between a pair of QRMBRs. Relative keys are calculated for all entries (step **319**). When a leaf node is reached, the node MBR is first adjusted such that it encloses the object MBR. Then, an index entry for the data object is created in the node. If the node MBR has been adjusted, the QRMBRs in the node are recalculated because their reference MBR has been changed.

If the node overflows (step **312**), it is split (step **313**) and the split propagates up the tree.

While descending the tree (step **332**), the insert key is made relative to the current node's MBR (step **335**), and the child node that needs the minimum enlargement to enclose the relative insert key is selected (step **336**). The process is repeated until a leaf node is reached (step **333**).
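The least-enlargement descent rule can be sketched as follows (illustrative names, not from the patent). The point worth noting is that enlargement is computed directly on integer QRMBRs, which is what keeps the per-entry cost low.

```python
def enlargement(mbr, key):
    """Area increase if mbr is enlarged to enclose key (both are
    (xlo, ylo, xhi, yhi) rectangles, e.g. integer QRMBRs)."""
    xlo, ylo = min(mbr[0], key[0]), min(mbr[1], key[1])
    xhi, yhi = max(mbr[2], key[2]), max(mbr[3], key[3])
    return (xhi - xlo) * (yhi - ylo) - (mbr[2] - mbr[0]) * (mbr[3] - mbr[1])

def choose_child(entries, rel_key):
    """Select the (qrmbr, child) entry whose QRMBR needs the least
    enlargement to enclose the already-relativized insert key."""
    return min(entries, key=lambda e: enlargement(e[0], rel_key))
```

A fuller implementation would also break ties by the smaller resulting area, as R-tree variants commonly do; that refinement is omitted here.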

The pair of farthest entries is chosen as seeds (step **341**). Two nodes are made, and each seed is assigned to one of them (step **342**). Any of the remaining entries is picked (step **344**) and assigned to the node that requires the least enlargement of its MBR to include the chosen entry (step **347**). If one node has (M-m) entries (step **345**), where M is the maximum number of entries in a node and m is a predefined minimum number of entries in a node, all the remaining entries are assigned to the other node (step **346**). This step ensures that each node has at least m entries. When all the entries have been assigned (step **343**), the MBR of each node is obtained, and the relative keys in each node are calculated (step **348**).
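The split procedure above can be sketched as follows. This is a hypothetical sketch (helper names and the center-distance seed metric are assumptions, not taken from the patent); it picks the farthest pair as seeds, distributes the rest by least enlargement, and forces the remainder into one group once the other reaches M − m entries.

```python
def split_entries(entries, m):
    """Distribute (mbr, payload) entries into two groups: farthest pair as
    seeds (step 341), least-enlargement assignment (step 347), and a cap of
    M - m entries per group so each group keeps at least m (steps 345-346)."""
    def union(a, b):
        return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    def dist(a, b):  # squared center distance, used to pick the seeds
        return (((a[0] + a[2]) - (b[0] + b[2])) / 2) ** 2 + \
               (((a[1] + a[3]) - (b[1] + b[3])) / 2) ** 2
    M = len(entries)
    i, j = max(((i, j) for i in range(M) for j in range(i + 1, M)),
               key=lambda p: dist(entries[p[0]][0], entries[p[1]][0]))
    groups = [[entries[i]], [entries[j]]]
    mbrs = [entries[i][0], entries[j][0]]
    for k, e in enumerate(entries):
        if k in (i, j):
            continue
        if len(groups[0]) >= M - m:       # steps 345-346: cap reached
            g = 1
        elif len(groups[1]) >= M - m:
            g = 0
        else:                             # step 347: least MBR enlargement
            g = min((0, 1), key=lambda t: area(union(mbrs[t], e[0])) - area(mbrs[t]))
        groups[g].append(e)
        mbrs[g] = union(mbrs[g], e[0])
    return groups, mbrs                   # step 348 would relativize keys here
```

The final recalculation of relative keys (step **348**) would quantize each group's entries against its new MBR, as in the insertion path.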

If the node under split is the root, a new root is made and the two split nodes are added to it as its children (step **349**). The MBRs and relative keys of the root are recalculated (step **351**). If the node under split is not the root and the parent node is full (step **352**), the parent node is split (step **353**). If the parent node is not full (step **352**) and the parent node's MBR encloses the new key (step **355**), relative keys are recalculated only for the two new entries (step **376**). If the parent node's MBR does not enclose the new key (step **355**), the parent node's MBR is enlarged to enclose the new key (step **357**), the relative keys in the node are recalculated for all entries (step **358**), and the tree is adjusted (step **359**).

The AdjustTree procedure stops if the enlarged node is the root (step **371**). If the enlarged node is not the root and the parent node's MBR encloses the node's MBR (step **373**), the parent node's entries are updated and the relative keys are recalculated (step **374**). If the parent node's MBR does not enclose the node's MBR (step **373**), the parent node's MBR is enlarged to enclose the enlarged node's MBR (step **375**). The parent node's entries are updated, all relative keys of the parent node are recalculated (step **377**), and the parent node is set to be an enlarged node (step **378**).

To delete a data object, the tree is descended to the node containing it (step **386**). If the node is a leaf node (step **387**), each entry within the node is compared with the delete object (step **389**). If an entry matches the delete object (step **390**), the entry is deleted from the leaf node (step **391**). If the deleted entry's key touches the leaf node's MBR (step **393**), each entry's relative key is recalculated (step **394**), and the tree is condensed (step **395**).

First, the parent's entry for a shrunk node is found (step **401**). If the entry's key does not touch the parent node's MBR (step **402**), the shrunk node's entry in the parent node is updated (step **409**) and the procedure stops (step **410**). If the entry's key touches the parent node's MBR (step **402**), the entry's key and the shrunk node's MBR are compared (step **403**). If the parent node cannot be shrunk (step **404**), the shrunk node's entry in the parent node is simply updated (step **405**). If the parent node can be shrunk (step **404**), the shrunk node's entry in the parent node is updated (step **405**), the parent node's MBR is recalculated (step **406**), and the relative key of each entry in the parent is also recalculated (step **407**). The parent node is then set as a shrunk node (step **408**).

Those skilled in the art will appreciate that any of the deletion algorithms used in the R-tree and the R*-tree may also be used with slight modification.

Bulk Loading

Bulk loading into a CR-tree is no different from that into other R-tree variants. As long as QRMBRs are correctly maintained, existing bottom-up loading algorithms can be used directly.

CR-Tree Variants

FIGS. **4**A, **4**B, and **4**C show three variants of the CR-tree according to various embodiments of the present invention, namely, the PE CR-tree, the SE CR-tree, and the FF CR-tree.

The figure shows the structure of a node in the PE CR-tree. It includes a field **421** for indicating whether the node is a leaf node or a non-leaf (internal) node, a field **422** for indicating the number of entries in the node, a field **423** for storing a reference MBR, a field **424** for storing a pointer to a child node, and fields such as **425** for storing QRMBRs.

The PE CR-tree widens the CR-tree significantly by eliminating most of the pointers, which occupy a relatively large fraction of each entry because the key size of the CR-tree is smaller than that of the R-tree. If the QRMBR size is four bytes, this extension doubles the fanout of internal nodes when the pointer is 4 bytes.
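The fanout-doubling arithmetic can be checked in a few lines. The 128-byte node and 8-byte node header below are assumed values chosen for illustration, not figures from the text:

```python
def fanout(node_bytes, entry_bytes, header_bytes=8):
    """Number of entries that fit in one node (header size is an assumption)."""
    return (node_bytes - header_bytes) // entry_bytes

cr_internal = fanout(128, 4 + 4)   # 4-byte QRMBR + 4-byte child pointer
pe_internal = fanout(128, 4)       # QRMBR only: child pointers eliminated
```

Halving the entry size from 8 bytes to 4 bytes exactly doubles the number of entries per internal node, matching the claim above.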

It is noted that the pointers to data objects stored in leaf nodes can rarely be eliminated. When the average fanout of both internal and leaf nodes is 10, the number of internal nodes is about one ninth of the number of leaf nodes. Therefore, the overall increase in fanout is only about 10%.

On the other hand, since the pointer elimination technique works by storing the child nodes with the same parent consecutively, splitting a node becomes expensive. The new node created by a split has to be stored consecutively with its siblings, and this often requires allocating new space and moving the siblings into it.

The figure shows the structure of a node other than the root node. It includes a field **431** for indicating whether the node is a leaf or non-leaf (internal) node, a field **432** for indicating the number of entries in the node, a field **433** for storing a pointer to a child node, and fields such as **434** for storing QRMBRs. Note that the reference MBR is not present in any node except the root node.

The SE CR-tree allows the fanout of internal nodes to increase by four and that of leaf nodes by two when the MBR size is 16 bytes and the QRMBR size is 4 bytes. This increase in fanout could be larger than the increase obtained in the PE CR-tree when the size of a node is as small as one or two cache blocks.

The figure shows the structure of a leaf node, where the object's MBR **441** is stored in the original, non-relative format together with a pointer to the object **442**. The FF CR-tree is useful when the subsequent refinement step is extremely expensive. For example, when complex boundary polygons of administrative districts are indexed by their MBRs, the refinement step of comparing the given query shape with the actual shape of data objects obtained by searching an index can be expensive.

Table 1 shows the space requirements for the various embodiments of index structures according to various embodiments of the present invention, where N is the number of leaf node entries and S is the size of a node in bytes. It is assumed that the size of an MBR is 16 bytes, the size of a QRMBR is 4 bytes, and the size of a pointer is 4 bytes. The typical index sizes are calculated for N of 1,000,000 and S of 128, assuming that the nodes are 70% full. Note that the PE R-tree is an extension of the R-tree obtained by applying the pointer elimination technique. The internal node space is calculated by dividing the leaf space by the average fanout of internal nodes minus one. This analysis shows that the PE CR-tree is not so different from the CR-tree in terms of the space requirement, and the PE R-tree is no different from the R-tree.
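The space estimate can be reproduced approximately in code. The sketch below follows the stated method (leaf space NS/0.7F, internal space equal to the leaf space divided by the average internal fanout minus one); the 8-byte node header is an assumed value, so the absolute numbers differ slightly from Table 1, but the relative comparison holds:

```python
def index_size(N, S, entry_bytes, header_bytes=8, fill=0.7):
    """Estimate total index size in bytes.  F is the per-node fanout;
    leaf space is N*S/(fill*F), and internal space is the leaf space
    divided by (average internal fanout - 1), as in the analysis above."""
    F = (S - header_bytes) // entry_bytes
    leaf = N * S / (fill * F)
    internal = leaf / (fill * F - 1)
    return leaf + internal

# R-tree entry: 16-byte MBR + 4-byte pointer; CR-tree entry: 4-byte QRMBR + pointer
r_total  = index_size(1_000_000, 128, entry_bytes=16 + 4)
cr_total = index_size(1_000_000, 128, entry_bytes=4 + 4)
```

Under these assumptions the CR-tree needs well under half the space of the R-tree, consistent with the roughly 38 MB versus 18 MB figures in Table 1.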

TABLE 1

| Tree type | Maximum fanout (internal) | Maximum fanout (leaf) | Node space (internal) | Node space (leaf) | Typical index size |
|---|---|---|---|---|---|
| R-tree | m | m | NS/0.7m(0.7m − 1) | NS/0.7m | 38.15 MB |
| PE R-tree | 1.25m | m | NS/0.7m(0.875m − 1) | NS/0.7m | 35.90 MB |
| CR-tree | 2.5m − 4 | 2.5m − 4 | NS/(1.75m − 2.8)(1.75m − 1.8) | NS/(1.75m − 2.8) | 17.68 MB |
| PE CR-tree | 5m − 5 | 2.5m − 4 | NS/(1.75m − 2.8)(3.5m − 2.5) | NS/(1.75m − 2.8) | 16.71 MB |
| SE CR-tree | 5m − 1 | 2.5m − 2 | NS/1.75m(3.5m − 0.7) | NS/(1.75m − 1.4) | 14.07 MB |
| FF CR-tree | 2.5m − 4 | m | NS/0.7m(1.75m − 2.8) | NS/0.7m | 32.84 MB |

Analytical Results

Without loss of generality, a data domain of unit hyper-square is assumed. For simplicity, it is assumed that data objects are uniformly distributed in the domain and that the query MBRs are hyper-squares. It is further assumed that R-tree nodes of the same height have square-like MBRs of roughly the same size, as in other analytical work. Mathematica 3.0 was used to perform the numerical computation needed to compare the analytical results visually.

Let h denote the height or level of a node, assuming that the height of leaf nodes is one, and let f denote the average fanout. Let M_{h} denote the number of nodes with the height of h. From the above assumption,

M_{h} = N/f^{h},

where N is the number of data objects.

Let a_{h} denote the average area that a node of height h covers. Then, a_{h} is 1/M_{h}. Using the Minkowski sum technique, the probability that a node of height h overlaps a given query rectangle is

(a_{h}^{1/d} + s^{1/d})^{d},

where s denotes the size of the query rectangle and d the dimensionality. Then, the number of height-h nodes that overlap the query rectangle is

M_{h}(a_{h}^{1/d} + s^{1/d})^{d}.

By summing this equation from the leaf to the root, the total number of nodes accessed in R-trees is

Σ_{h=1}^{H} M_{h}(a_{h}^{1/d} + s^{1/d})^{d},   (2)

where H denotes the height of the root node.

On the other hand, the CR-tree compares QRMBRs in order to determine whether to visit a node or not while the R-tree compares MBRs. Since a QRMBR is larger than its original MBR by the length of a quantization cell on average, the number of node accesses increases a bit in the CR-tree.

Let l denote the number of quantization levels. Then, each node has l^{d} quantization cells, and the side length of each cell is

a_{h}^{1/d}/l,

where h denotes the height of the node. Since whether to visit a child node is determined by comparing the QRMBR of the query rectangle and the stored QRMBR of the child node, each of which is extended by up to one quantization cell of the enclosing node along each axis, the probability of visiting a node of height h is

(a_{h}^{1/d} + s^{1/d} + 2a_{h+1}^{1/d}/l)^{d}.

By multiplying by M_{h} and summing from the leaf to the root, the total number of nodes accessed in CR-trees is

Σ_{h=1}^{H} M_{h}(a_{h}^{1/d} + s^{1/d} + 2a_{h+1}^{1/d}/l)^{d}.   (3)

The analytical result shows that the number of accessed nodes decreases as the node size increases. The decrease is initially large, but it becomes smaller as the node size grows. For all the node sizes and all three dimensionalities considered, the CR-tree outperforms the R-tree by more than a factor of two.
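For illustration, the node-access comparison can be evaluated numerically. The sketch below uses the standard uniform-data Minkowski-sum cost model described above; the fanouts, object count, and query size are assumed parameter values, not figures from the text:

```python
import math

def node_accesses(N, f, s, d=2, l=None):
    """Estimate the number of nodes accessed for a query of size s.

    M_h = N/f**h nodes exist at height h, each covering area a_h = 1/M_h.
    With quantization into l levels, each compared QRMBR is extended by one
    quantization cell of the parent node along each axis; l=None models a
    plain R-tree with exact MBR comparisons."""
    total = 1.0                                 # the root is always read
    H = max(1, round(math.log(N / f, f)))       # tree height below the root
    for h in range(1, H + 1):
        M_h = N / f ** h
        a_h = 1 / M_h
        ext = 0.0
        if l is not None:
            a_parent = min(1.0, f / M_h)        # area covered by the parent
            ext = 2 * a_parent ** (1 / d) / l
        p = min(1.0, (a_h ** (1 / d) + s ** (1 / d) + ext) ** d)
        total += M_h * p
    return total
```

With an R-tree fanout of 6 and a CR-tree fanout of 15 (roughly what 128-byte nodes would give), the model predicts well under half as many node accesses for the CR-tree despite the quantization extension, in line with the statement above.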

Number of Cache Misses

The number of cache misses can easily be calculated by multiplying equations (2) and (3) by the number of cache misses that a single node access incurs. To obtain the results, equations (2) and (3) were multiplied by S/64, where S is the node size in bytes and 64 bytes is the cache block size.

Although the optimal one-dimensional node size in terms of the number of cache misses is shown to be the cache block size mentioned above, the optimal node size in the multidimensional case is larger than one cache block, because widening a node reduces the number of visited nodes faster than it adds cache misses per node.

Ratio of False Hits By Quantization

Each quantization cell of a leaf node has the area of f/(l^{d}N) and the side length of

(f/N)^{1/d}/l

along each axis, where f denotes the average fanout of leaf nodes. Since the QRMBRs of the query MBR and the object MBR are each extended by up to one cell along each axis, the probability that they overlap is, in two dimensions,

(√s + √a + 2√(f/(l^{2}N)))^{2},

where a denotes the size of the object MBR. Therefore, the probability that a false hit occurs is

(√s + √a + 2√(f/(l^{2}N)))^{2} − (√s + √a)^{2}.

Dividing by

(√s + √a)^{2},

the probability that the actual MBRs overlap, the ratio of false hits incurred by quantization to actual answers is

(1 + 2√(f/(l^{2}N))/(√s + √a))^{2} − 1.
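The false-hit ratio is easy to evaluate numerically. The parameter values below (fanout 20, one million objects, a 0.01% query over 0.001-side objects) are assumed for illustration; they show why 8-bit quantization keeps false hits under one percent while 4-bit quantization does not:

```python
import math

def false_hit_ratio(f, N, l, s, a):
    """Ratio of quantization-induced false hits to actual answers:
    (1 + 2*sqrt(f/(l^2 N)) / (sqrt(s) + sqrt(a)))^2 - 1, where f is the
    leaf fanout, N the number of objects, l the number of quantization
    levels, s the query-rectangle area, and a the average object-MBR area."""
    cell = math.sqrt(f / (l ** 2 * N))   # side length of a leaf quantization cell
    return (1 + 2 * cell / (math.sqrt(s) + math.sqrt(a))) ** 2 - 1

ratio_8bit = false_hit_ratio(f=20, N=1_000_000, l=256, s=0.0001, a=1e-6)
ratio_4bit = false_hit_ratio(f=20, N=1_000_000, l=16, s=0.0001, a=1e-6)
```

Under these assumptions the 8-bit ratio is well below 1%, while the 4-bit ratio exceeds 5%, consistent with the experimental observations reported below.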

Experimental Results

To confirm the merits of the CR-tree according to various embodiments of the present invention, a series of experiments were conducted on a SUN UltraSPARC platform (400 MHz CPU with 8 MB L2 cache) running Solaris 2.7.

Six index structures were implemented: the original R-tree, the PE R-tree, the CR-tree, the PE CR-tree, the SE CR-tree, and the FF CR-tree. A bulk-loading algorithm was also implemented. The node size was varied from 64 bytes to 1024 bytes. 16-byte MBRs were used, and the QRMBR size was varied from 2 bytes to 8 bytes. Unless specified otherwise, the default size of QRMBRs is 4 bytes, and the nodes are 70% full.

Two synthetic data sets were generated, each consisting of one million small rectangles located in the unit square. One is uniformly distributed in the unit square, while the other has a Gaussian distribution around the center point (0.5, 0.5) with a standard deviation of 0.25. The average side length of the rectangles is 0.001.

Search Performance

The search performances of various index trees according to various embodiments of the present invention were compared in terms of the wall-clock time spent processing a two-dimensional region query. 10,000 different query rectangles of the same size were generated, whose center points are uniformly distributed. The size of the query rectangles was varied from 0.01% of the data space to 1%. Since the data space is the unit square, the query selectivity is roughly the same as the size of a query rectangle.

The CR-tree, the PE CR-tree, and the SE CR-tree form the fastest group. The R-tree and the PE R-tree form the slowest group. The FF CR-tree lies between the two groups.

Although the SE CR-tree is wider than both the CR-tree and the PE CR-tree, it performs worse. This is because the SE CR-tree calculates the reference MBR of a node from the matching entry in its parent node. In the present implementation, this calculation involves about 40 instructions and 16 bytes of memory write.

Update Performance

To measure the update performance, 100,000 data objects were inserted into trees bulk-loaded with the 1M uniform data set, and then 100,000 randomly selected data objects were removed from the trees.

When descending a tree for insertion, the child node that needs the least enlargement is selected. Since the enlargement calculation consumes about 30 instructions in the present implementation, it is more expensive than a cache miss in the CR-tree and its variants. Since a single cache block contains about 5.6 QRMBRs in the CR-tree, the enlargement calculation cost is about 168 instructions per cache block, whereas a cache miss consumes about 80~100 processor cycles on a 400 MHz UltraSPARC II. On the other hand, since insertion accesses only one node for each height, the number of accessed nodes decreases logarithmically with the fanout, but the number of enlargement calculations for each node increases linearly with the fanout. Thus, the total number of enlargement calculations increases with the fanout.

The PE R-tree performs slightly worse than the R-tree because it increases the fanout by less than 25%. Since the fanout of the CR-tree is about 150% larger than that of the R-tree, it performs worse than the R-tree for a given node size. Since the fanout of the PE CR-tree is about 400% larger than that of the R-tree, it performs significantly worse than the R-tree for a given node size. On the other hand, when the fanout is the same, the ranking of the CR-tree is determined by the saving in cache misses and the overhead of updating QRMBRs when the node MBR grows or shrinks.

Impact of Quantization Levels

To assess the effect of the quantization level, the ratio of false hits incurred by quantization, the quantization error in terms of the MBR size, and the search time were measured for three different quantization levels: 2^{4}, 2^{8}, and 2^{16}. These correspond to QRMBRs of 2 bytes, 4 bytes, and 8 bytes, respectively. The experiment used the trees bulk-loaded with the 1M uniform data set.

FIGS. **14**B and **14**C show the ratios of false hits measured for different quantization levels. It was shown above that the false hit ratio can be estimated by (1 + 2√(f/(l^{2}N))/(√s + √a))^{2} − 1. The false hit ratio increases with the fanout or the size of a node, and decreases with the quantization level and the query selectivity. The measured results agree with the analytical results. When quantized into 16 bits, the search result is almost equal to the exact answer for a query. When quantized into 8 bits, the search result contains at most 1% more data objects than the exact answer. Quantizing into 4 bits can be used only when the query selectivity is high.

The side length of a quantization cell is √(f/(l^{2}N)) for leaf nodes, and a QRMBR extends its original MBR by the cell length along each axis. Thus, the size of a QRMBR increases with the fanout or the node size, and decreases with the quantization level. The measured results agree with the analytical results. When quantized into 16 bits, the size of the MBR increases by less than 0.01%. When quantized into 8 bits, the size of the MBR can increase by 1˜7% depending on the node size, but this does not lead to the same increase in the size of the search result, as shown in FIGS. **15**B and **15**C.

FIGS. **16**B and **16**C show the search time measured with varying quantization levels. These figures show that a coarse quantization can increase the search time when the query selectivity is low and the node size is large, because of a significant number of false hits. In sum, it is possible to quantize into fewer bits as the query selectivity and the cardinality grow, but it is necessary to quantize into more bits as the node size grows.

It is also observed that the cache hit ratio increases with the node size. This can be explained by the cache replacement policy of processors. Since memory blocks are mapped to cache blocks circularly (for example, by the physical address modulo the cache size), a cached memory block is not replaced by consecutive memory blocks. As the node size increases, the portion of accesses to consecutive memory blocks increases, and thus the cache hit ratio increases.
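The circular-mapping behavior can be demonstrated with a tiny direct-mapped cache simulation (a hypothetical sketch; the 64-byte block and the cache sizes below are assumed parameters):

```python
def direct_mapped_hits(addresses, cache_blocks, block_bytes=64):
    """Simulate a direct-mapped cache where memory block b maps to slot
    b % cache_blocks, as described above.  Returns the number of hits.
    Consecutive blocks land in distinct slots, so a sequential scan of a
    node smaller than the cache never evicts its own blocks."""
    slots, hits = {}, 0
    for addr in addresses:
        block = addr // block_bytes
        slot = block % cache_blocks
        if slots.get(slot) == block:
            hits += 1
        else:
            slots[slot] = block   # miss: fetch and replace
    return hits

# Scanning one 1024-byte node (16 blocks) twice:
node_scan = [b for _ in range(2) for b in range(0, 1024, 64)]
```

With a 512-block cache the second pass hits on all 16 blocks; with a tiny 8-block cache the 16 blocks wrap around and evict each other, so every access misses.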

Concurrency Control

To keep the performance improvement of the CR-tree significant, a matching optimization of index concurrency control schemes is needed.

Since the conventional hash-based lock and unlock operation is too expensive for main memory databases, a faster latch and unlatch operation has been proposed. By allocating latch data structures statically and making them directly addressable without a hash, the latch and unlatch operation uses about 20 CISC (IBM 370) instructions, which may correspond to about a hundred RISC instructions. However, the latch and unlatch operation is still too expensive for concurrency control of main memory index trees: the present experiment with the CSB+-tree and the well-known lock coupling technique shows that each node is locked for only about 40 processor cycles.

To prevent locking operations from incurring additional cache misses, the data structures for locking need to be kept within the index nodes themselves. To make this possible, the data structure for locking should be as small as possible. For example, the present proposal uses only one byte of each node.

Even if a lock conflict occurs, it will be resolved typically within tens of or hundreds of clock cycles. Therefore, spinning for the lock may be employed instead of blocking, which incurs a context switch consuming up to thousands of instructions.

In addition to making a locking operation cheap, it is desirable to reduce the number of locking operations. This is possible by favoring searches in lookup-intensive applications such as directory servers. For example, a search operation can lock the entire tree if no update is in progress, or lock nothing at all by using a versioning technique.

Recovery

Since various embodiments of the present invention reduce the index size by almost 60%, the checkpointing and post-crash restart processes may be accelerated, for example, by reducing the disk access time.

In main memory database systems, the durability of transactions is achieved through logging and occasional checkpointing. Checkpointing in main memory databases is the process of saving a snapshot of the memory-resident database onto a disk. The post-crash restart process consists of loading the latest snapshot and replaying the log records generated after the latest checkpoint. Therefore, it is clear that the disk access time during checkpointing and restart decreases as indexes shrink in size. For example, the disk access time for the CSB+-tree decreases by 10% compared with the B+-tree, and the disk access time for the CR-tree decreases to less than half compared with the R-tree.

While the invention has been described with reference to various embodiments, it is not intended to be limited to those embodiments. It will be appreciated by those of ordinary skill in the art that many modifications can be made to the structure and form of the described embodiments without departing from the spirit and scope of the invention.
