WO2013100302A1 - Patch method using memory and temporary memory and patch server and client using the same - Google Patents


Info

Publication number
WO2013100302A1
WO2013100302A1 (PCT/KR2012/006613)
Authority
WO
WIPO (PCT)
Prior art keywords
patch
memory
data
file
size
Prior art date
Application number
PCT/KR2012/006613
Other languages
French (fr)
Inventor
Sung Gook Jang
Kwang Hee Yoo
Joo Hyun Sung
Hye Jin Jin
Yoon Hyung Lee
Original Assignee
Neowiz Games Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Neowiz Games Co., Ltd. filed Critical Neowiz Games Co., Ltd.
Publication of WO2013100302A1 publication Critical patent/WO2013100302A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/65 Updates
    • G06F8/658 Incremental updates; Differential updates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs

Definitions

  • the present invention relates generally to a patch technology and, more particularly, to a patch method using memory and temporary memory which is capable of patching a large amount of data more rapidly and reliably, and a patch server and client using the patch method.
  • a conventional patch technique includes a patch method using information about the version of a patch. For example, there is a method of a patch client accessing a patch server, comparing a current patch version with the patch version of the patch server, and, if patching is necessary, downloading and storing corresponding content.
  • the conventional patch method is problematic in that the resources of the patch server and the patch client are used inefficiently and a bottleneck occurs on the server, because patch redundancy may occur if there is an error in the patch version information or if patching is only partially performed.
  • an object of the present invention is to apply a patch more rapidly and efficiently by maximizing the utilization of the resources of a patch client using an improved patch algorithm.
  • Another object of the present invention is to patch a large patch file rapidly and reliably.
  • Still another object of the present invention is to apply a patch more rapidly using an optimized patch algorithm depending on the size of the data to be patched.
  • Yet another object of the present invention is to apply a patch in a resource-efficient and error-tolerant manner by modifying only the erroneous part, rather than the entire file to be patched, if an error occurs during the patch process.
  • the present invention provides a patch method, the patch method being performed in a patch client, the patch client being connectable to a patch server and including a storage device and memory, the patch method including the steps of (a) accessing the patch server and receiving patch data from the patch server; (b) calculating an available space of the memory; (c) if a size of the patch data is smaller than or equal to the available space of the memory, performing patching using the available space of the memory; and (d) if the size of the patch data is greater than the available space of the memory, allocating temporary memory of a capacity, corresponding to the size of the patch data, to the storage device, and performing patching using the allocated temporary memory.
  • the present invention provides a patch method, the patch method being performed in a patch client, the patch client being connectable to a patch server and including a storage device and memory, the patch method including the steps of (a) accessing the patch server and receiving patch data, including a plurality of files to be patched, from the patch server; (b) calculating an available space of the memory; (c) if at least one of the plurality of files to be patched is smaller than the available space of the memory, patching the at least one file using the available space of the memory; and (d) if at least one of the plurality of files to be patched is greater than the available space of the memory, patching the at least one file using temporary memory allocated to the storage device.
  • the present invention provides a patch server, the patch server being connected to a patch client and providing patch data, the patch server including memory; a hash generation unit configured to generate at least one hash value for received data; and a control unit configured to load an original file and a patch file into the memory, to control the hash generation unit so that the hash generation unit compares the loaded original file with the loaded patch file and generates at least one hash value for a difference, to generate a patch table including the generated hash value, and to generate the patch data including the generated patch table.
  • the present invention provides a patch client, the patch client being able to use memory and a storage device, and accessing a patch server, receiving patch data and performing patching, the patch client including a control unit for comparing a size of the received patch data with an available space of the memory, performing the patch using the memory if the available space of the memory is equal to or greater than the size of the received patch data, and allocating temporary memory of a capacity, corresponding to the size of the patch data, to the storage device and then performing patching using the allocated temporary memory if the available space of the memory is smaller than the size of the received patch data.
  • FIG. 1 is a diagram showing the configuration of an embodiment of a patch system to which a disclosed technology may be applied;
  • FIG. 2 is a diagram showing the construction of an embodiment of the patch server of FIG. 1;
  • FIG. 3 is a reference diagram illustrating a process of generating patch data that is performed in the control unit of FIG. 2;
  • FIG. 4 is a diagram showing the construction of an embodiment of a patch client according to the disclosed technology;
  • FIG. 5 is a reference diagram illustrating an embodiment of patching that is performed using memory;
  • FIG. 6 is a reference diagram illustrating an embodiment of patching that is performed using temporary memory;
  • FIG. 7 is a flowchart illustrating an embodiment of a patch method that is performed by the patch client of FIG. 4;
  • FIG. 8 is a flowchart illustrating another embodiment of a patch method that is performed by the patch client of FIG. 4;
  • FIG. 9 is a flowchart illustrating still another embodiment of a patch method that is performed by the patch client of FIG. 4.
  • FIG. 10 is a flowchart illustrating yet another embodiment of a patch method that is performed by the patch client of FIG. 4.
  • the terms first, second, etc. are each used to distinguish one element from another, and the elements should not be limited by these terms.
  • the first element may be referred to as the second element or, similarly, the second element may be referred to as the first element without departing from the scope and technical spirit of the present invention.
  • singular expressions may include plural expressions. It should be understood that in this application, the term include(s), comprise(s) or have(has) implies the inclusion of features, numbers, steps, operations, components, parts, or combinations thereof mentioned in the specification, but does not imply the exclusion of one or more of any other features, numbers, steps, operations, components, parts, or combinations thereof.
  • step symbols (e.g., (a), (b), and (c)) are used for convenience of description, and the symbols do not describe the sequence of the steps.
  • the steps may be performed in a sequence different from a described sequence unless a specific sequence is clearly described in the context. That is, the steps may be performed in the described sequence, may be performed substantially at the same time, and may be performed in the reverse sequence.
  • an original file refers to a file before a patch is applied.
  • a patch file is a file on which patching has been performed.
  • Patch data is data necessary to perform patching, and may include a patch file or information about the differences between a patch file and an original file depending on the embodiment.
  • FIG. 1 is a diagram showing the configuration of an embodiment of a patch system to which a disclosed technology may be applied.
  • the patch system may include a patch server 100 and a patch client 200.
  • the patch server 100 may generate patch data, and provide the patch data to the patch client 200 when the patch client 200 accesses the patch server 100.
  • the patch server 100 refers to a server function that performs patching while operating in conjunction with the patch client 200, and is not limited to a specific implementation.
  • the patch server 100 may be implemented as a single server or a server farm, or may be implemented as one function of a general purpose server. The detailed construction or function of the patch server 100 will be described later with reference to FIG. 2.
  • the patch client 200 may access the patch server 100 over a network, and perform patching on a terminal on which the patch client 200 is being run by performing a specific patch process, which will be described later.
  • the patch client 200 refers to a terminal function that performs patching while operating in conjunction with the patch server 100, and is not limited to a specific implementation.
  • the patch client 200 may be implemented as software that is executed in a terminal, or as logic-designed hardware. The detailed construction or function of the patch client will be described later with reference to FIGS. 4 to 6.
  • the patch server 100 and the patch client 200 may be connected over a network.
  • the network is not limited to a specific standard-based network.
  • the patch client 200 may be connected to the patch server 100 over a wired or wireless network or a combination of wired and wireless networks.
  • for example, a wired network or a Wi-Fi communication network may be used as the network.
  • if the patch client 200 is a smartphone or a tablet PC, the network may be configured via a 3G or 4G mobile communication network.
  • FIG. 2 is a diagram showing the construction of an embodiment of the patch server 100 of FIG. 1.
  • the patch server 100 may include a communication unit 110, memory 120, a hash generation unit 130, a patch data storage unit 140, and a control unit 150.
  • the patch server 100 may further include a client management unit 160.
  • the communication unit 110 may establish or maintain a communication line with the patch client 200 in response to a request from the control unit 150.
  • the memory 120 may include a storage space necessary to generate patch data.
  • the memory 120 may be partitioned into specific spaces for storing different data under the control of the control unit 150.
  • the hash generation unit 130 may generate a hash value for received data.
  • the hash generation unit 130 may generate a hash value for original data or patch data under the control of the control unit 150, and may provide the generated hash value to the control unit 150.
  • the patch data storage unit 140 may store patch data.
  • the patch data is data necessary to perform patching, and may include a patch file or information about the differences between a patch file and an original file depending on the embodiment.
  • the patch data storage unit 140 may store patch data provided by the control unit 150 or information about patch data (e.g., information about the version of a patch, the total size of patch data, and the size of each file in the case of patch data including a plurality of files).
  • the control unit 150 may generally control the elements of the patch server 100, thereby causing the patch server 100 to perform patching.
  • the control unit 150 will be described in more detail below with respect to each function thereof.
  • the control unit 150 may compare an original file with a patch file, and generate patch data.
  • the control unit 150 may load a patch file and an original file into the memory 120, and request the hash generation unit 130 to compare the original file with the patch file and generate a hash value for the differences between them.
  • the control unit 150 may generate a patch table, including the generated hash value and an index associated with the generated hash value, generate patch data including the patch table and the patch file, and store the generated patch data in the patch data storage unit 140.
  • the control unit 150 may also generate patch data using a hash value for the patch file. That is, the control unit 150 may generate a patch table, including a hash value for the entire patch file and an index for the hash value, and generate patch data including the patch table and the patch file.
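The table-building step above can be sketched in Python. This is a rough illustration, not the patent's implementation: the block granularity, the use of SHA-256, and the dictionary field names (`index`, `hash`, `address`, mirroring the columns of the patch table 300 in FIG. 3) are all assumptions.

```python
import hashlib

# Assumed block granularity; the text does not fix a block size.
BLOCK_SIZE = 4096

def build_patch_table(original: bytes, patched: bytes, block_size: int = BLOCK_SIZE):
    """Compare two files block by block and record, for each differing block,
    its index, a hash of the new content, and the offset at which that
    content is stored in the patch payload."""
    table = []
    payload = bytearray()
    n_blocks = max(len(original), len(patched)) // block_size + 1
    for i in range(n_blocks):
        o = original[i * block_size:(i + 1) * block_size]
        p = patched[i * block_size:(i + 1) * block_size]
        if o != p:
            table.append({
                "index": i,
                "hash": hashlib.sha256(p).hexdigest(),
                "address": len(payload),  # offset of this block in the payload
            })
            payload.extend(p)
    return table, bytes(payload)
```

Only the differing blocks and their hashes travel to the client, which is what keeps the patch data small relative to the full patch file.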
  • the control unit 150 may provide the patch data to the patch client 200.
  • the control unit 150 may control the communication unit 110 so that a communication line with the patch client 200 is maintained, determine whether patching is necessary based on information about the patch data stored in the patch data storage unit 140, and then provide the patch data to the patch client 200.
  • the control unit 150 may request information about the connected patch client 200 from the client management unit 160, determine the patch data based on the information received in response to the request, and then provide the patch data to the patch client 200.
  • the client management unit 160 may manage the patch client 200. In an embodiment, the client management unit 160 may perform authentication on the patch client 200. In an embodiment, the client management unit 160 may manage information about patching performed on the patch client 200 (e.g., patch time and information about the version of a patch).
  • FIG. 3 is a reference diagram schematically illustrating an example of a process of generating a patch table that is performed by the control unit 150 of FIG. 2.
  • an original file and a patch file are separately loaded into the memory 120.
  • the control unit 150 may perform the comparison more rapidly because the original and patch files have been loaded into the memory 120.
  • the 4-digit binary numbers of the original and patch files refer to addresses in the memory 120.
  • the 2-digit binary numbers of the content of the original and patch files refer to the indices of corresponding data.
  • each of the original and patch files includes four pieces of data. The four pieces of data may be four different files, or parts of at least one file.
  • the control unit 150 may compare the original file with the patch file. That is, the control unit 150 may compare the content of each of the pieces of data of the original file loaded into the memory 120 with the content of each of the pieces of data of the patch file loaded into the memory 120. As shown in the drawing, the control unit 150 may determine that the content of index 01 and the content of index 11 are different from each other, and configure a patch table 300.
  • the patch table 300 may include indices 310, hash values 320, and patch file addresses 330.
  • the hash values 320 may refer to the hash values of the patch file corresponding to the indices 01 and 11, and the patch file addresses 330 may refer to addresses where the content of the patch file is actually stored.
  • the patch file addresses 330 may refer to the addresses or locations of the patch data storage unit 140 where the patch file is stored.
  • FIG. 3 illustrates an example in which an original file is compared with a patch file and patch data is generated based on the content of the patch file corresponding to differences and hash values for the content.
  • the patch client 200 may overwrite the patch file on the original file, calculate hash values, compare the calculated hash values with the hash values of patch data, and then perform patching.
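The overwrite-then-verify behavior just described can be sketched as follows. This is a hypothetical illustration: the table layout (`index`, `hash`, `address` entries, as in FIG. 3) and the SHA-256 hash are assumptions, not details fixed by the text.

```python
import hashlib

def apply_and_verify(original: bytes, table, payload: bytes, block_size: int) -> bytes:
    """Overwrite the differing blocks of the original with content taken from
    the patch payload, then check each patched block against the hash recorded
    in the patch table."""
    result = bytearray(original)
    for entry in table:
        start = entry["index"] * block_size
        block = payload[entry["address"]:entry["address"] + block_size]
        # grow the buffer if the patch extends past the end of the original
        if start + len(block) > len(result):
            result.extend(b"\x00" * (start + len(block) - len(result)))
        result[start:start + len(block)] = block
        if hashlib.sha256(bytes(result[start:start + len(block)])).hexdigest() != entry["hash"]:
            raise ValueError(f"patch verification failed at block {entry['index']}")
    return bytes(result)
```

A failed hash check identifies exactly which block went wrong, which is what allows only the erroneous part to be patched again rather than the whole file.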
  • FIG. 4 is a diagram showing the construction of an embodiment of the patch client 200 according to the disclosed technology
  • FIGS. 5 and 6 are reference diagrams illustrating embodiments of patch schemes that are performed by the patch client 200.
  • the patch client 200 may include a communication unit 210, memory 220, temporary memory 230, and a control unit 240.
  • the patch client 200 may further include at least one of a patch data storage unit 250 and a hash generation unit 260.
  • the communication unit 210 may establish or maintain a communication line with the patch server 100 at the request of the control unit 240.
  • the memory 220 is a memory device for providing a storage space.
  • the temporary memory 230 is allocated using a storage device (e.g., an HDD, a RAID array, or an SSD).
  • the temporary memory 230 performs the same function as the memory 220 in that it provides a storage space, but its performance is lower than that of the memory 220 because I/O processing occurs during the loading of data.
  • the control unit 240 controls the elements of the patch client 200 so that patching is performed on a terminal.
  • the control unit 240 may perform patching using the memory 220.
  • the control unit 240 may perform patching by partitioning the available space of the memory 220 into at least two regions and loading an original file and patch data into the respective regions. For example, the control unit 240 may perform patching by allocating three regions in the memory 220 and loading (or generating) an original file, a patch file, and patch data in the respective regions.
  • FIG. 5 is a reference diagram illustrating this embodiment. Referring to FIG. 5, the control unit 240 may allocate three regions to the memory 220, and use the three regions as a region 510 for an original file, a region 520 for a patch file, and a region 530 for patch data.
  • the control unit 240 may load an original file to be patched onto the original file region 510, load patch data received from the patch server 100 onto the patch data region 530, and then generate a patch file in the patch file region 520.
  • the control unit 240 may analyze the patch table of the patch data region 530, determine that content corresponding to indices 01 and 11 is targeted for patching, determine that the corresponding patch content is stored at addresses 11001 and 11010, and generate a patch file in the patch file region 520 based on the data loaded onto the original file region 510 and the data loaded onto the patch data region 530. That is, the control unit 240 may read data, corresponding to indices not included in the patch table, from the original file, and generate the patch file based on the read data.
  • the control unit 240 may search patch data for data corresponding to indices included in the patch table, and generate the patch file by writing the found patch data into the patch file.
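The three-region generation scheme of FIG. 5 can be sketched as below: blocks whose indices are absent from the patch table are copied from the original-file region, and blocks whose indices are present are taken from the patch-data region. The table field names and block-based layout are assumptions carried over from the FIG. 3 description, not an implementation given in the text.

```python
def generate_patch_file(original: bytes, table: list, payload: bytes, block_size: int) -> bytes:
    """Build the patch file block by block in a third region: unchanged
    indices come from the original file, changed indices come from the
    patch-data payload at the recorded address."""
    changed = {e["index"]: e["address"] for e in table}
    out = bytearray()
    n_blocks = (len(original) + block_size - 1) // block_size
    for i in range(n_blocks):
        if i in changed:
            addr = changed[i]
            out.extend(payload[addr:addr + block_size])
        else:
            out.extend(original[i * block_size:(i + 1) * block_size])
    return bytes(out)
```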
  • the control unit 240 may generate the original file, the patch file, and the patch data, and load them into the memory 220 using different methods.
  • the original file may be loaded into the memory 220 using a memory pool method,
  • the patch file may be loaded into the memory 220 using a buffer-overlapped pool method, and
  • the patch data may be loaded into the memory 220 using a file-mapped memory method.
  • the reason for this is that if all of the data were loaded into the memory 220 using the same method, an error occurring in a specific loading process could affect the other loading processes.
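The isolation idea can be sketched in Python, with the caveat that "memory pool", "buffer-overlapped pool", and "file-mapped memory" are not standard Python facilities: here an ordinary buffered read stands in for the pooled load and `mmap` stands in for the file-mapped load, purely to show two independent loading paths.

```python
import mmap
import os
import tempfile

def load_for_patching(original_path: str, patch_data_path: str):
    """Load the original file into an ordinary in-memory buffer (standing in
    for a memory-pool load) while mapping the patch data with mmap (standing
    in for a file-mapped load). Because the two paths are independent, a
    failure in one does not disturb the buffer built by the other."""
    with open(original_path, "rb") as f:
        original = bytearray(f.read())
    patch_fh = open(patch_data_path, "rb")
    patch_map = mmap.mmap(patch_fh.fileno(), 0, access=mmap.ACCESS_READ)
    return original, patch_map, patch_fh
```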
  • the control unit 240 may vary the offset in different ways when reading the original file, the patch file, and the patch data from the memory 220, or when writing them to the memory 220, according to the embodiment disclosed in FIG. 5.
  • the control unit 240 may use a random offset shift method.
  • the control unit 240 may use a sequential offset increase method. If the offset is varied in different ways as described above, the other processes are not affected by an error that occurs in a specific process, and the operating efficiency of the processor may be increased.
  • the control unit 240 may also perform patching using the temporary memory 230.
  • FIG. 6 is a reference diagram illustrating an embodiment of the patch using the temporary memory 230. The embodiment shown in FIG. 6 corresponds to an example of patching that varies the order of the original file data.
  • the control unit 240 may measure in advance the size of the part of an original file to be changed by comparing the original file with the patch data, and allocate a space of the corresponding size in the temporary memory 230.
  • referring to the example shown in FIG. 6, the control unit 240 determines that three blocks of the original file need to be changed as a result of analyzing the patch table of the patch data, requests a space corresponding to the size of the three blocks, and allocates that space in the temporary memory 230.
  • the control unit 240 may change the data of the original file using the allocated temporary memory 230, and generate a patch file based on the changed data.
  • the control unit 240 may load the changed data onto the temporary memory 230 using the output buffer memory pool method.
  • the output buffer memory pool method makes recovery from a patch error easy and fast because information about the failed patching is automatically recorded in a file when the patching fails.
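The measure-then-allocate step can be sketched as follows, using a temporary file on disk as a stand-in for the temporary memory 230. The per-block sizing (number of changed table entries times the block size) is an assumption consistent with the three-block example above.

```python
import os
import tempfile

def allocate_temp_patch_space(table: list, block_size: int) -> str:
    """Measure the size of the part of the original file that must change
    (one block per patch-table entry) and reserve a temporary file of exactly
    that capacity on the storage device."""
    needed = len(table) * block_size
    fd, path = tempfile.mkstemp(prefix="patch_tmp_")
    os.ftruncate(fd, needed)  # reserve the measured capacity up front
    os.close(fd)
    return path
```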
  • the control unit 240 may determine the size of the data to be patched, and perform patching using at least one of the memory 220 and the temporary memory 230. That is, the control unit 240 may perform patching using a combination of the embodiments shown in FIGS. 5 and 6. More particularly, when patch data is received, the control unit 240 may check the size of the data to be patched and the currently available size of the memory 220. If the size of the data is equal to or smaller than the currently available size of the memory 220, the control unit 240 may perform patching using the memory 220. In contrast, if the size of the data is larger than the currently available size of the memory 220, the control unit 240 may perform patching using the temporary memory 230. This embodiment will be described in more detail below with reference to FIG. 9.
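The size-based branch just described reduces to a small decision function; a per-file variant (as in the second patch method above, where each file in the patch data is routed individually) is included as well. Both are minimal sketches, with function names of my own choosing.

```python
def choose_patch_strategy(patch_size: int, memory_available: int) -> str:
    """Patch in main memory when the data fits in the available space,
    otherwise fall back to temporary memory on the storage device."""
    return "memory" if patch_size <= memory_available else "temporary_memory"

def partition_files(file_sizes: list, memory_available: int):
    """Per-file variant: route each file to memory or temporary memory
    according to its own size."""
    in_memory = [s for s in file_sizes if s <= memory_available]
    in_temp = [s for s in file_sizes if s > memory_available]
    return in_memory, in_temp
```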
  • the patch data storage unit 250 may temporarily store patch data received from the patch server 100. For example, if patch data is downloaded from the patch server 100, the control unit 240 may store the patch data in the patch data storage unit 250, and perform patching using the memory 220 and the temporary memory 230 based on the stored patch data.
  • the patch data storage unit 250 may store information about patching that was performed. For example, after performing patching, the control unit 240 may generate information about the patching (e.g., information about the version of a patch, a patch date, and a patch capacity) and provide the generated information to the patch data storage unit 250. The patch data storage unit 250 may store the generated information.
  • the hash generation unit 260 may generate hash values for the received data.
  • the control unit 240 may control the hash generation unit 260 so that the hash generation unit 260 generates hash values for patched data in order to check whether patching has been correctly performed after the patching has been completed.
  • the control unit 240 may compare the hash values, generated by the hash generation unit 260, with hash values included in a patch table, and determine whether the patching has been correctly performed.
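The post-patch verification step can be sketched as below. The SHA-256 hash and the table field names are assumptions; the point is that recomputed hashes are compared per block against the patch table, so the check reports exactly which parts failed.

```python
import hashlib

def verify_patched_blocks(patched: bytes, table: list, block_size: int) -> list:
    """Recompute a hash for every patched block and compare it with the hash
    recorded in the patch table; return the indices that failed, so that only
    the erroneous parts need to be patched again."""
    failed = []
    for entry in table:
        start = entry["index"] * block_size
        digest = hashlib.sha256(patched[start:start + block_size]).hexdigest()
        if digest != entry["hash"]:
            failed.append(entry["index"])
    return failed
```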
  • FIG. 7 is a flowchart illustrating an embodiment of a patch method that is performed by the patch client 200 of FIG. 4.
  • the embodiment of the patch method disclosed in FIG. 7 relates to an embodiment of patching that is performed using the memory 220.
  • the control unit 240 may receive data from the patch server 100 at step S710, and allocate the memory 220 for patching at step S720.
  • for example, the control unit 240 may allocate the memory 220 after partitioning it into three regions for an original file, patch data, and a patch file.
  • the control unit 240 may load the original file and the patch data onto the regions of the memory 220 at step S730.
  • the control unit 240 may load the original file and the patch data into the memory 220 using different methods. For example, if the original file has been loaded into the memory 220 using the memory pool method, the patch data may be loaded into the memory 220 using the file-mapped memory method. If data is loaded using different methods as described above, the entire loading process does not need to be repeated, and the propagation of errors can be prevented, even when an error occurs in a specific loading method.
  • the control unit 240 checks whether the loading has been successfully performed at step S740. If an error has occurred in specific data (No at step S740), the control unit 240 may perform a process of loading only the corresponding data. Although, in the flowchart of FIG. 7, the memory 220 is illustrated as being reallocated at step S720 and the erroneous data as being reloaded at step S730, in an embodiment it may be possible to reload only the data without reallocating the memory 220.
  • the control unit 240 may generate a patch file based on the original file and the patch data loaded onto the memory 220 at step S750. For example, the control unit 240 may analyze the patch table of the patch data, divide the original file into one or more parts whose content will be moved without change and one or more parts on which patching will be performed based on the patch data, and generate the patch file in the region allocated to the patch file based on the division. Here, the control unit 240 may generate the patch file in pieces in succession or all at once.
  • the control unit 240 may sequentially read the corresponding content from the original file or the patch data, over a range from the memory unit allocated to the start point of the patch file to the memory unit allocated to the end point of the patch file, thereby generating the patch file.
  • alternatively, the control unit 240 may check the original file for one or more parts whose content will be moved without change, read those parts from the original file, write them into at least part of the patch file at once, and write the one or more parts to which the content of the patch file will be moved into at least part of the patch file at once.
  • the control unit 240 may check the resulting patch file for an error at step S760.
  • the control unit 240 may check the generated patch file for errors using the patch table included in the patch data. For example, assuming that hash values for the patch data are included in the patch table, the control unit 240 may run an error check by calculating hash values for the patched part of the patch file and comparing the hash values of the patched part with the hash values of the patch table.
  • if there is no error, the control unit 240 terminates the patch process. If there is an error (Yes at step S770), the control unit 240 may perform an error handling process at step S780. For example, if the data allocated to the memory 220 has been lost, the control unit 240 may repeat the series of steps S720 to S760, reallocating the memory 220 and reloading the data, in order to patch the parts having different hash values. As another example, if the original file and the patch data remain in the memory 220, the control unit 240 may perform the series of steps S750 and S760, rewriting only the part associated with the error based on the original file and the patch data and performing the error check.
  • the control unit 240 may partition the memory 220 into a region for an original file and a region for patch data, read content, corresponding to one or more parts that need to be changed in the original file, from the patch data, and change the original file by overwriting the read content, thereby generating a patch file.
  • FIG. 8 is a flowchart illustrating another embodiment of a patch method that is performed by the patch client of FIG. 4.
  • the embodiment of the patch method disclosed in FIG. 8 relates to an embodiment of the patch that is performed using the temporary memory 230.
  • the control unit 240 may receive data from the patch server 100 at step S810, and measure the size of the data to be patched (i.e., the data that needs to be changed in an original file) at step S820. In an embodiment, the control unit 240 may measure the size of the data based on the patch table of the patch data.
  • the control unit 240 may allocate the temporary memory 230 of a capacity corresponding to the size of the data to be changed at step S830, and load the data to be changed into the allocated temporary memory at step S840.
  • the control unit 240 may allocate the temporary memory 230, and load the data to be changed into the allocated temporary memory using the output buffer memory pool method.
  • with the output buffer memory pool method, when patching fails, information about the failed patch is automatically recorded in a file. Accordingly, the series of error processing steps S870 to S890 may be simple.
  • the control unit 240 checks whether the loading was successful at step S850. If an error has occurred (No at step S850), the control unit 240 may perform the step of loading the data again. Although, in the flowchart of FIG. 8, the temporary memory 230 is illustrated as being reallocated at step S830 and the data to be changed is illustrated as being reloaded at step S840, only the data to be changed may be reloaded without allocating the temporary memory 230 in an embodiment.
  • the control unit 240 may generate a patch file based on the data loaded into the temporary memory 230 at step S860, and runs an error check on the generated patch file at step S870. If an error has occurred (Yes at step S880), the control unit 240 may perform an error handling process at step S890.
  • FIG. 9 is a flowchart illustrating still another embodiment of a patch method that is performed by the patch client 200 of FIG. 4.
  • The embodiment shown in FIG. 9 is an embodiment of a patch method that is performed based on a combination of the embodiments of FIGS. 7 and 8.
  • The control unit 240 may receive patch data from the patch server 100 and check the patch data at step S910. For example, the control unit 240 may receive the patch data and then check its overall size.
  • The control unit 240 may calculate the currently available space of the memory 220 at step S920. Thereafter, the control unit 240 compares the available space of the memory 220 with the size of the patch data at step S930. If the available space of the memory 220 is sufficient, patching is performed using the memory 220. In contrast, if the available space of the memory 220 is insufficient, patching is performed using the temporary memory 230.
  • The control unit 240 may take into consideration the memory space to be returned when calculating the available space of the memory 220. This may be represented by the following Equation 1:
  • Memory_total = Memory_enable + (Memory_return * t) (1)
  • The available memory space Memory_total that may be used when patching may be the sum of the currently available memory space Memory_enable and the memory space to be returned within a specific time, Memory_return * t.
  • The memory space to be returned within the specific time may be represented by multiplying the memory space expected to be returned within the specific time, Memory_return, by a specific probability value t.
  • For example, the control unit 240 may set the memory space expected to be returned within 1 minute as the memory space expected to be returned, Memory_return.
  • The probability value t may be inversely proportional to the expected return time. For example, the probability value t of a memory space Memory_return A that is expected to be returned within 10 seconds may be higher than that of a memory space Memory_return B that is expected to be returned within 40 seconds.
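Equation 1 can be illustrated with a short calculation. The sizes and probability values below are invented for the example; the patent does not fix concrete values of t.

```python
def available_memory(memory_enable, expected_returns):
    """Memory_total = Memory_enable + sum(Memory_return * t)  (Equation 1).

    `expected_returns` is a list of (memory_return, t) pairs, where t is a
    probability weight that is higher for shorter expected return times.
    """
    return memory_enable + sum(size * t for size, t in expected_returns)

# Example: 100 MB free now, 50 MB expected back within 10 s (t = 0.9),
# and 30 MB expected back within 40 s (t = 0.5).
total = available_memory(100, [(50, 0.9), (30, 0.5)])  # -> 160.0 MB
```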
  • The control unit 240 may perform patching using the memory 220 at steps S940 to S970.
  • Since the steps of performing patching using the memory 220 have been described above with reference to FIG. 7, a detailed description thereof is omitted.
  • The control unit 240 may perform patching using the temporary memory 230 at steps S941 to S971.
  • Since the steps of performing patching using the temporary memory 230 were described above with reference to FIG. 8, a detailed description thereof is omitted.
  • The embodiment shown in FIG. 9 is performed based on a combination of the patch method of FIG. 7 using the memory 220 and the patch method of FIG. 8 using the temporary memory 230.
  • In the embodiment shown in FIG. 9, the patch method using the memory 220 is attempted first. That is, if all of the patch data can be allocated to the memory 220, patching is performed rapidly using the memory 220. If the capacity of the memory 220 is smaller than the total size of the patch data, patching is performed using the temporary memory 230. Accordingly, a large amount of patch data may be patched more rapidly.
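The decision of step S930 can be sketched as a simple dispatcher. The sizes are in bytes, and the returned labels are placeholders for invoking the FIG. 7 or FIG. 8 procedure, which this sketch does not implement.

```python
def choose_patch_path(patch_data, memory_available):
    """Select the patch path as in FIG. 9, step S930.

    If all of the patch data fits in the available memory space, patch using
    the memory (fast path); otherwise fall back to temporary memory on disk.
    """
    if len(patch_data) <= memory_available:
        return "memory"            # steps S940 to S970 (FIG. 7)
    return "temporary_memory"      # steps S941 to S971 (FIG. 8)
```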
  • FIG. 10 is a flowchart illustrating yet another embodiment of a patch method that is performed in the patch client 200 of FIG. 4.
  • The embodiment of FIG. 10 corresponds to another embodiment of a patch method that is performed based on a combination of the embodiments of FIGS. 7 and 8, and relates to an embodiment in which, when the patch data includes a plurality of files to be patched, each file is patched according to an optimized method. That is, in the embodiment of FIG. 9, whether to use the memory 220 or the temporary memory 230 is determined based on the entirety of the patch data.
  • In contrast, in the embodiment of FIG. 10, patching is performed on patch data including a plurality of files using either the memory 220 or the temporary memory 230 for each file.
  • The control unit 240 may receive patch data from the patch server 100, and check the patch data at step S1010. In an embodiment, after receiving patch data including a plurality of files to be patched, the control unit 240 may check the size of each of the plurality of files. In another embodiment, the patch data may include information about the sizes of the plurality of files, and the control unit 240 may check the size of each of the plurality of files based on the information.
  • The control unit 240 may calculate the currently available space of the memory 220 at step S1020. In this case, the control unit 240 may compare the available space of the memory 220 with the size of each of the files included in the patch data. Based on this comparison, the control unit 240 may patch a file having a size equal to or smaller than the available capacity of the memory 220 using the memory 220, and patch a file having a size greater than the available capacity of the memory 220 using the temporary memory 230.
  • Since the calculation of the available space of the memory 220 may be performed as in FIG. 9, a detailed description thereof is omitted.
  • The control unit 240 checks whether there are files each having a size equal to or smaller than the available space of the memory 220 at step S1030. If there are such files (Yes at step S1030), the control unit 240 may patch those files of the patch data using the memory 220 at steps S1040 to S1070. Here, the steps of patching these files using the memory 220 may be performed for each file. If the available space of the memory 220 is greater than the combined size of two or more of the files, the control unit 240 may patch the two or more files at the same time. Since a detailed method of performing patching using the memory 220 has been described above with reference to FIG. 7, a detailed description thereof is omitted.
  • The control unit 240 may patch the remaining files of the patch data, each having a size greater than the available space of the memory 220, using the temporary memory 230 at steps S1041 to S1071. For example, the control unit 240 may sum up the sizes of the remaining files of the patch data and allocate the temporary memory 230 corresponding to the sum. Since a detailed method of performing patching using the allocated temporary memory 230 has been described above with reference to FIG. 8, a detailed description thereof is omitted.
  • Patch data including a plurality of files is patched using at least one of the memory 220 and the temporary memory 230. Accordingly, patching may be performed more rapidly.
  • The control unit 240 may perform patching using the memory 220 and the temporary memory 230 in parallel, depending on its computation performance. That is, the control unit 240 may patch some files, each having a size smaller than the available space of the memory 220, using the memory 220 and, at the same time, patch the remaining files using the temporary memory 230. If patching is performed in parallel, the time it takes to perform patching can be considerably reduced.
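The per-file split of step S1030 and the parallel execution described above can be sketched as follows. The two inner functions are placeholders for the FIG. 7 and FIG. 8 procedures, and the use of two threads is one possible realization, not the patented design.

```python
from concurrent.futures import ThreadPoolExecutor

def patch_files(files, memory_available):
    """Split files between the memory path and the temporary-memory path
    (FIG. 10, step S1030) and patch the two groups in parallel."""
    small = {n: d for n, d in files.items() if len(d) <= memory_available}
    large = {n: d for n, d in files.items() if len(d) > memory_available}

    def memory_pass(group):       # stands in for steps S1040 to S1070
        return {n: "patched-in-memory" for n in group}

    def temp_pass(group):         # stands in for steps S1041 to S1071
        return {n: "patched-in-temp" for n in group}

    # run both passes at the same time, as in the parallel embodiment
    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(memory_pass, small)
        b = pool.submit(temp_pass, large)
        return {**a.result(), **b.result()}
```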
  • Patching can be performed more rapidly and efficiently by maximizing the utilization of the resources of the patch client.
  • A patch file having a high capacity can be patched rapidly and reliably.
  • Patching can be performed more rapidly using an optimized patch algorithm that depends on the size of the data to be patched.
  • Patching can be performed in a resource-efficient and error-tolerant manner by modifying only an erroneous part, rather than the entire file to be patched, if there is an error in a patch process.

Abstract

Disclosed herein is a patch method. The patch method is performed in a patch client. The patch client can be connected to a patch server, and includes a storage device and memory. The patch method includes the steps of (a) accessing the patch server and receiving patch data from the patch server; (b) calculating an available space of the memory; (c) if a size of the patch data is smaller than or equal to the available space of the memory, performing patching using the available space of the memory; and (d) if the size of the patch data is greater than the available space of the memory, allocating temporary memory of a capacity, corresponding to the size of the patch data, to the storage device, and performing patching using the allocated temporary memory.

Description

PATCH METHOD USING MEMORY AND TEMPORARY MEMORY AND PATCH SERVER AND CLIENT USING THE SAME
The present invention relates generally to a patch technology and, more particularly, to a patch method using memory and temporary memory which is capable of patching a large amount of data more rapidly and reliably, and a patch server and client using the patch method.
With the development of the computing environment, hardware performance has increased and software running on the hardware has become able to use many resources. Accordingly, the capacity of software tends to gradually increase. Furthermore, with the development of the network environment, the transmission and distribution of software over a network have become popular. In particular, the content of software frequently changes or expands for a variety of reasons, such as the release of new versions of software. The changed or expanded content is provided to a user so that the software is updated, which is called patching or updating.
A conventional patch technique includes a patch method using information about the version of a patch. For example, there is a method in which a patch client accesses a patch server, compares its current patch version with the patch version of the patch server and, if patching is necessary, downloads and stores the corresponding content. This conventional patch method is problematic in that the resources of the patch server and the patch client are used inefficiently and a bottleneck occurs on the server, because redundant patching may occur if the version information is erroneous or the patching is only partially performed.
In order to solve this problem, a technique was developed in which information about the changed content of a file to be patched is provided and patching is performed based on the information. In this technique, however, an algorithm itself stores patch data in a single piece of memory at once or in succession and then performs patching. Accordingly, although this technique slightly reduces patch data compared to a conventional patch technique, the algorithm for allocating memory and performing patching has not been improved. For this reason, there are problems in that all data to be patched should be downloaded again if an error occurs while a file to be patched is being downloaded, a patch process is slow when a large-size file is being patched, and hardware resources are inefficiently used. Furthermore, there is a problem in that a heavy burden is imposed on a patch server because of the inefficient operation of a patch client.
Accordingly, the present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to apply a patch more rapidly and efficiently by maximizing the utilization of the resources of a patch client using an improved patch algorithm.
Another object of the present invention is to patch a patch file having a high capacity rapidly and reliably.
Still another object of the present invention is to apply a patch more rapidly using an optimized patch algorithm depending on the size of the data to be patched.
Yet another object of the present invention is to apply a patch in a resource-efficient and error-tolerant manner by modifying only an erroneous part, rather than the entire file to be patched, if an error occurs during the patch process.
In order to achieve the above objects, the present invention provides a patch method, the patch method being performed in a patch client, the patch client being connectable to a patch server and including a storage device and memory, the patch method including the steps of (a) accessing the patch server and receiving patch data from the patch server; (b) calculating an available space of the memory; (c) if a size of the patch data is smaller than or equal to the available space of the memory, performing patching using the available space of the memory; and (d) if the size of the patch data is greater than the available space of the memory, allocating temporary memory of a capacity, corresponding to the size of the patch data, to the storage device, and performing patching using the allocated temporary memory.
In order to achieve the above objects, the present invention provides a patch method, the patch method being performed in a patch client, the patch client being connectable to a patch server and including a storage device and memory, the patch method including the steps of (a) accessing the patch server and receiving patch data, including a plurality of files to be patched, from the patch server; (b) calculating an available space of the memory; (c) if at least one of the plurality of files to be patched is smaller than the available space of the memory, patching the at least one file using the available space of the memory; and (d) if at least one of the plurality of files to be patched is greater than the available space of the memory, patching the at least one file using temporary memory allocated to the storage device.
In order to achieve the above objects, the present invention provides a patch server, the patch server being connected to a patch client and providing patch data, the patch server including memory; a hash generation unit configured to generate at least one hash value for received data; and a control unit configured to load an original file and a patch file into the memory, to control the hash generation unit so that the hash generation unit compares the loaded original file with the loaded patch file and generates at least one hash value for a difference, to generate a patch table including the generated hash value, and to generate the patch data including the generated patch table.
In order to achieve the above objects, the present invention provides a patch client, the patch client being able to use memory and a storage device, and accessing a patch server, receiving patch data and performing patching, the patch client including a control unit for comparing a size of the received patch data with an available space of the memory, performing the patch using the memory if the available space of the memory is equal to or greater than the size of the received patch data, and allocating temporary memory of a capacity, corresponding to the size of the patch data, to the storage device and then performing patching using the allocated temporary memory if the available space of the memory is smaller than the size of the received patch data.
The above and other objects, features and advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a diagram showing the configuration of an embodiment of a patch system to which a disclosed technology may be applied;
FIG. 2 is a diagram showing the construction of an embodiment of the patch server of FIG. 1;
FIG. 3 is a reference diagram illustrating a process of generating patch data that is performed in the control unit of FIG. 2;
FIG. 4 is a diagram showing the construction of an embodiment of a patch client according to the disclosed technology;
FIG. 5 is a reference diagram illustrating an embodiment of patching that is performed using memory;
FIG. 6 is a reference diagram illustrating an embodiment of patching that is performed using temporary memory;
FIG. 7 is a flowchart illustrating an embodiment of a patch method that is performed by the patch client of FIG. 4;
FIG. 8 is a flowchart illustrating another embodiment of a patch method that is performed by the patch client of FIG. 4;
FIG. 9 is a flowchart illustrating still another embodiment of a patch method that is performed by the patch client of FIG. 4; and
FIG. 10 is a flowchart illustrating yet another embodiment of a patch method that is performed by the patch client of FIG. 4.
A description of the disclosed technology is merely illustrative for the purpose of a structural or functional description. The scope of the disclosed technology should not be construed as being limited to the following embodiments. That is, the embodiments may be modified in various ways and the scope of the disclosed technology should be construed as covering equivalents which can also realize the technical spirit of the embodiments.
Meanwhile, the meanings of the terms described in this application should be understood, as follows.
The terms first, second, etc. are each used to distinguish an element from other elements, and the elements should not be limited by these terms. For example, the first element may be referred to as the second element or, similarly, the second element may be referred to as the first element without departing from the scope and technical spirit of the present invention.
When an element is described as being "coupled to" or "connected to" another element, it should be appreciated that the former element may be directly coupled or connected to the latter element, but they may be coupled or connected together via one or more intervening elements. In contrast, when an element is described as being "directly coupled to" or "directly connected to" another element, it should be appreciated that they are coupled or connected together without the intervention of some other element. Meanwhile, other expressions describing the relationships between elements, such as "between ~" and "directly between ~", or "adjacent to ~" and "directly adjacent to ~", should be interpreted in the same manner.
Unless otherwise defined explicitly, singular expressions may include plural expressions. It should be understood that in this application, the term "include(s)", "comprise(s)" or "have(has)" implies the inclusion of features, numbers, steps, operations, components, parts, or combinations thereof mentioned in the specification, but does not imply the exclusion of one or more of any other features, numbers, steps, operations, components, parts, or combinations thereof.
In steps, symbols (e.g., a, b, and c) are used for convenience of description, and the symbols do not describe the sequence of the steps. The steps may be performed in a sequence different from a described sequence unless a specific sequence is clearly described in the context. That is, the steps may be performed in the described sequence, may be performed substantially at the same time, and may be performed in the reverse sequence.
Unless defined otherwise, all terms used herein have the same meanings as generally understood by those having ordinary knowledge in the technical field to which the present invention pertains. Terms defined in commonly used dictionaries should be interpreted as having meanings consistent with meanings construed in the context of the related art, and should not be interpreted as having ideal or excessively formal meanings unless defined explicitly in this application.
In the following description, an original file refers to a file before a patch is applied, and a patch file is a file on which patching has been performed. Patch data is the data necessary to perform patching, and may include a patch file or information about the differences between a patch file and an original file, depending on the embodiment.
FIG. 1 is a diagram showing the configuration of an embodiment of a patch system to which a disclosed technology may be applied.
Referring to FIG. 1, the patch system may include a patch server 100 and a patch client 200.
The patch server 100 may generate patch data, and provide the patch data to the patch client 200 when the patch client 200 accesses the patch server 100. Here, the patch server 100 refers to a server function that performs patching while operating in conjunction with the patch client 200, and is not limited to a specific implementation. For example, the patch server 100 may be implemented as a single server or a server farm, or may be implemented as one function of a general purpose server. The detailed construction or function of the patch server 100 will be described later with reference to FIG. 2.
The patch client 200 may access the patch server 100 over a network, and perform patching on a terminal on which the patch client 200 is being run by performing a specific patch process, which will be described later. Here, the patch client 200 refers to a terminal function that performs patching while operating in conjunction with the patch server 100, and is not limited to a specific implementation. For example, the patch client 200 may be implemented as software that is executed in a terminal, or as logic-designed hardware. The detailed construction or function of the patch client will be described later with reference to FIGS. 4 to 6.
The patch server 100 and the patch client 200 may be connected over a network. Here, the network is not limited to a specific standard-based network. The patch client 200 may be connected to the patch server 100 over a wired or wireless network or a combination of wired and wireless networks. For example, if the patch client 200 is run on a personal computer (PC), a wired or Wi-Fi communication network may be used as the network. Alternatively, if the patch client 200 is a smart phone or a tablet PC, the network may be configured via a mobile communication network (e.g., 3G or 4G).
FIG. 2 is a diagram showing the construction of an embodiment of the patch server 100 of FIG. 1.
Referring to FIG. 2, the patch server 100 may include a communication unit 110, memory 120, a hash generation unit 130, a patch data storage unit 140, and a control unit 150. In an embodiment, the patch server 100 may further include a client management unit 160.
The communication unit 110 may establish or maintain a communication line with the patch client 200 in response to a request from the control unit 150.
The memory 120 may include a storage space necessary to generate patch data. The memory 120 may be partitioned into specific spaces for storing different data under the control of the control unit 150.
The hash generation unit 130 may generate a hash value for received data. The hash generation unit 130 may generate a hash value for original data or patch data under the control of the control unit 150, and may provide the generated hash value to the control unit 150.
The patch data storage unit 140 may store patch data. As described above, the patch data is data necessary to perform patching, and may include a patch file or information about the differences between a patch file and an original file depending on the embodiment. The patch data storage unit 140 may store patch data provided by the control unit 150 or information about patch data (e.g., information about the version of a patch, the total size of patch data, and the size of each file in the case of patch data including a plurality of files).
The control unit 150 may generally control the elements of the patch server 100, thereby causing the patch server 100 to perform patching. The control unit 150 will be described in more detail below with respect to each function thereof.
The control unit 150 may compare an original file with a patch file, and generate patch data.
In an embodiment, the control unit 150 may load a patch file and an original file onto the memory 120, and request the hash generation unit 130 to compare the original file with the patch file and to generate a hash value for the differences therebetween. The control unit 150 may generate a patch table, including the generated hash value and an index associated with the generated hash value, generate patch data including the patch table and the patch file, and store the generated patch data in the patch data storage unit 140.
In another embodiment, the control unit 150 may generate patch data using a hash value for the patch file. That is, the control unit 150 may generate a patch table, including a hash value for the entire patch file and an index for the hash value, and generate patch data including the patch table and the patch file.
The control unit 150 may provide the patch data to the patch client 200.
In an embodiment, the control unit 150 may control the communication unit 110 so that a communication line with the patch client 200 is maintained, determine whether patching is necessary based on information about patch data stored in the patch data storage unit 140, and then provide the patch data to the patch client 200.
In another embodiment, when the communication unit 110 establishes a communication line with the patch client 200 under the control of the control unit 150, the control unit 150 may request information about the patch client 200 connected to the client management unit 160, determine patch data based on the information received in response to the request, and then provide patch data to the patch client 200.
The client management unit 160 may manage the patch client 200. In an embodiment, the client management unit 160 may perform authentication on the patch client 200. In an embodiment, the client management unit 160 may manage information about patching performed on the patch client 200 (e.g., patch time and information about the version of a patch).
FIG. 3 is a reference diagram schematically illustrating an example of a process of generating a patch table that is performed by the control unit 150 of FIG. 2.
Referring to FIG. 3, an original file and a patch file are separately loaded into the memory 120. The control unit 150 may perform the comparison more rapidly because the original and patch files have been loaded into the memory 120. The 4-digit binary numbers of the original and patch files refer to addresses in the memory 120. The 2-digit binary numbers of the content of the original and patch files refer to the indices of the corresponding data. In the illustrated example, each of the original and patch files includes four pieces of data. The four pieces of data may be four different files, or parts of at least one file.
The control unit 150 may compare the original file with the patch file. That is, the control unit 150 may compare the content of each of the pieces of data of the original file loaded into the memory 120 with the content of each of the pieces of data of the patch file loaded into the memory 120. As shown in the drawing, the control unit 150 may determine that the content of index 01 and the content of index 11 are different from each other, and configure a patch table 300. The patch table 300 may include indices 310, hash values 320, and patch file addresses 330. The hash values 320 may refer to the hash values of the patch file corresponding to the indices 01 and 11, and the patch file addresses 330 may refer to addresses where the content of the patch file is actually stored. For example, the patch file addresses 330 may refer to the addresses or locations of the patch data storage unit 140 where the patch file is stored.
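The construction of the patch table 300 can be sketched as follows. This is an illustration only: the choice of SHA-1, the block layout, and the address scheme are assumptions, since the patent does not fix a hash function or storage format.

```python
import hashlib

def build_patch_table(original_blocks, patch_blocks, base_address=0):
    """Compare original and patch blocks index by index and record, for each
    difference, the index 310, a hash value 320 of the new content, and the
    patch file address 330 where the patch content is stored (FIG. 3)."""
    table = []
    address = base_address
    for index, (old, new) in enumerate(zip(original_blocks, patch_blocks)):
        if old != new:  # content differs: this index is targeted for patching
            table.append({
                "index": index,
                "hash": hashlib.sha1(new).hexdigest(),
                "address": address,
            })
            address += len(new)  # next patch block is stored right after
    return table
```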
The embodiment shown in FIG. 3 illustrates an example in which an original file is compared with a patch file and patch data is generated based on the content of the patch file corresponding to differences and hash values for the content. In this example, the patch client 200 may overwrite the patch file on the original file, calculate hash values, compare the calculated hash values with the hash values of patch data, and then perform patching.
FIG. 4 is a diagram showing the construction of an embodiment of the patch client 200 according to the disclosed technology, and FIGS. 5 and 6 are reference diagrams illustrating embodiments of patch schemes that are performed by the patch client 200.
Referring to FIG. 4, the patch client 200 may include a communication unit 210, memory 220, temporary memory 230, and a control unit 240. In an embodiment, the patch client 200 may further include at least one of a patch data storage unit 250 and a hash generation unit 260.
The communication unit 210 may establish or maintain a communication line with the patch server 100 at the request of the control unit 240.
The memory 220 is a memory device for providing a storage space.
The temporary memory 230 is allocated using a storage device (e.g., an HDD, a RAID array, or an SSD). The temporary memory 230 performs the same function as the memory 220 in that it provides a storage space, but its performance is lower than that of the memory 220 because I/O processing occurs during the loading of data.
The control unit 240 controls the elements of the patch client 200 so that patching is performed on a terminal.
The control unit 240 may perform patching using the memory 220.
In an embodiment, the control unit 240 may perform patching by partitioning the available space of the memory 220 into at least two regions and loading an original file and patch data onto the respective regions. For example, the control unit 240 may perform patching by allocating three regions to the memory 220 and loading (or generating) an original file, a patch file, and patch data onto (or in) the respective regions. FIG. 5 is a reference diagram illustrating this embodiment. Referring to FIG. 5, the control unit 240 may allocate three regions to the memory 220, and use the three regions as a region 510 for an original file, a region 520 for a patch file, and a region 530 for patch data. The control unit 240 may load an original file to be patched onto the original file region 510, load patch data received from the patch server 100 onto the patch data region 530, and then generate a patch file in the patch file region 520. For example, the control unit 240 may analyze the patch table of the patch data region 530, determine that content corresponding to indices 01 and 11 is targeted for patching, determine that the corresponding patch content is stored at addresses 11001 and 11010, and generate a patch file in the patch file region 520 based on the data loaded onto the original file region 510 and the data loaded onto the patch data region 530. That is, the control unit 240 may read data, corresponding to indices not included in the patch table, from the original file, and generate the patch file based on the read data. The control unit 240 may search patch data for data corresponding to indices included in the patch table, and generate the patch file by writing the found patch data into the patch file.
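The reconstruction step described for FIG. 5 can be sketched as follows. The dictionary-based layout of the patch data region is an assumption made for illustration; it plays the role of the patch table plus the stored patch content.

```python
def generate_patch_file(original_region, patch_data_region):
    """Build the patch file region 520 from the original file region 510 and
    the patch data region 530, as described for FIG. 5.

    `original_region` is a list of data blocks indexed by position;
    `patch_data_region` maps a patched index to its replacement block.
    """
    patch_file_region = []
    for index, block in enumerate(original_region):
        if index in patch_data_region:
            # index appears in the patch table: take content from patch data
            patch_file_region.append(patch_data_region[index])
        else:
            # index not in the patch table: copy the original data as-is
            patch_file_region.append(block)
    return patch_file_region
```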
Here, the control unit 240 may generate the original file, the patch file, and the patch data, and load them into the memory 220 using different methods. For example, the original file may be loaded into the memory 220 using a memory pool method, the patch file may be loaded into the memory 220 using a buffer-overlapped pool method, and the patch data may be loaded into the memory 220 using a memory file-mapped method. The reason for this is that if all of the data were loaded into the memory 220 using the same method, an error that occurs in one loading process could affect the other loading processes.
Furthermore, the control unit 240 may vary an offset in different ways when a process of reading the original file, the patch file and the patch data from the memory 220 or a process of writing the original file, the patch file and the patch data onto the memory 220 is performed according to the embodiment disclosed in FIG. 5. For example, when reading the original file, the control unit 240 may use a random offset shift method. When writing the patch file, the control unit 240 may use a sequential offset increase method. Furthermore, when reading the patch data, the control unit 240 may use a sequential offset increase method. If an offset varies in different ways as described above, the other processes are not affected by an error that occurs in a specific process and the operating efficiency of a processor may be increased.
In another embodiment, the control unit 240 may perform patching using the temporary memory 230. FIG. 6 is a reference diagram illustrating an embodiment of patching using the temporary memory 230; the example shown corresponds to patching that changes the order of the original file data. Referring to FIG. 6, the control unit 240 may measure in advance the size of the part of an original file to be changed by comparing the original file with patch data, and allocate space in the temporary memory 230 corresponding to the measured size. In the example shown in FIG. 6, the control unit 240 determines, by analyzing the patch table of the patch data, that three blocks of the original file need to be changed, requests a space corresponding to the size of the three blocks from the temporary memory 230, and receives an allocation of the requested size. The control unit 240 may change the data of the original file using the allocated temporary memory 230, and generate a patch file based on the changed data. Here, the control unit 240 may load the changed data into the temporary memory 230 using the output buffer memory pool method. The output buffer memory pool method makes recovery from a patch error easy and fast because information about failed patching is automatically recorded in a file when patching fails. The patch method using the temporary memory 230 shown in FIG. 6 is efficient when there is a large amount of data to be patched. In the patch method described with reference to FIG. 5, loading may be performed more rapidly, and the entire patch process therefore becomes fast, because the memory 220 is used. However, because the actual capacity of the memory 220 is limited, the patch method of FIG. 5 may not be usable when the amount of data to be patched exceeds that capacity. In contrast, in the patch method shown in FIG. 6, a data file having a large size (e.g., a patch file of 1 gigabyte or more) can be patched easily because the temporary memory 230 is allocated on a storage device before patching is performed.
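A rough sketch of this temporary-memory path, under the assumption that the temporary memory behaves like a disk-backed temporary file sized to the changed blocks only (the block size and table format are illustrative, not from the disclosure):

```python
import tempfile

def patch_via_temporary_memory(original: bytes, patch_table: dict[int, bytes],
                               block_size: int = 4) -> bytes:
    """Stage only the changed blocks in a disk-backed temporary file, then
    merge them back into the original when producing the patch file."""
    changed = sorted(patch_table)  # block indices that must change
    # Allocate temporary storage sized to the changed blocks only.
    with tempfile.TemporaryFile() as tmp:
        for block_no in changed:
            tmp.write(patch_table[block_no])
        tmp.seek(0)
        staged = {block_no: tmp.read(block_size) for block_no in changed}
    out = bytearray(original)
    for block_no, content in staged.items():
        out[block_no * block_size:(block_no + 1) * block_size] = content
    return bytes(out)

print(patch_via_temporary_memory(b"AAAABBBBCCCC", {2: b"ZZZZ"}))  # b'AAAABBBBZZZZ'
```

Because the staging area lives on the storage device rather than in RAM, its size is bounded by disk capacity, which is the property the disclosure relies on for large patch files.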
In yet another embodiment, the control unit 240 may determine the size of data to be patched, and perform patching using at least one of the memory 220 and the temporary memory 230. That is, the control unit 240 may perform patching using a combination of the embodiments shown in FIGS. 5 and 6. More particularly, when patch data is received, the control unit 240 may check the size of data to be patched and check the currently available size of the memory 220. If the size of the data is equal to or smaller than the currently available size of the memory 220, the control unit 240 may perform patching using the memory 220. In contrast, if the size of the data is larger than the currently available size of the memory 220, the control unit 240 may perform patching using the temporary memory 230. This embodiment will be described in more detail below with reference to FIG. 9.
The patch data storage unit 250 may temporarily store patch data received from the patch server 100. For example, if patch data is downloaded from the patch server 100, the control unit 240 may store the patch data in the patch data storage unit 250, and perform patching using the memory 220 and the temporary memory 230 based on the stored patch data.
In an embodiment, the patch data storage unit 250 may store information about patching that was performed. For example, after performing patching, the control unit 240 may generate information about the patching (e.g., information about the version of a patch, a patch date, and a patch capacity) and provide the generated information to the patch data storage unit 250. The patch data storage unit 250 may store the generated information.
The hash generation unit 260 may generate hash values for the received data. For example, the control unit 240 may control the hash generation unit 260 so that the hash generation unit 260 generates hash values for patched data in order to check whether patching has been correctly performed after the patching has been completed. The control unit 240 may compare the hash values, generated by the hash generation unit 260, with hash values included in a patch table, and determine whether the patching has been correctly performed.
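The hash check described above might look like the following sketch. SHA-256 and the per-block hashing granularity are assumptions for illustration; the disclosure does not name a particular hash function.

```python
import hashlib

def block_hashes(data: bytes, block_size: int = 4) -> list[str]:
    """Hash each fixed-size block of the patched data."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def patch_verified(patched: bytes, table_hashes: list[str]) -> bool:
    """Compare freshly computed hashes with those carried in the patch table."""
    return block_hashes(patched) == table_hashes

data = b"AAAAXXXX"
expected = block_hashes(data)                 # hashes as the patch table would carry them
print(patch_verified(data, expected))         # True: patching was applied correctly
print(patch_verified(b"AAAAZZZZ", expected))  # False: patched content does not match
```

A mismatch on any block would trigger the error handling described below, ideally limited to the mismatching blocks.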
FIG. 7 is a flowchart illustrating an embodiment of a patch method that is performed by the patch client 200 of FIG. 4. The embodiment of the patch method disclosed in FIG. 7 relates to an embodiment of patching that is performed using the memory 220.
The control unit 240 may receive data from the patch server 100 at step S710, and allocate the memory 220 for patching at step S720. Here, the control unit 240 may partition the memory 220 into three regions, one each for an original file, patch data, and a patch file, and then allocate the regions.
The control unit 240 may load the original file and the patch data onto the corresponding regions of the memory 220 at step S730. Here, the control unit 240 may load the original file and the patch data into the memory 220 using different methods. For example, if the original file is loaded into the memory 220 using the memory pool method, the patch data may be loaded using the memory file-mapped method. When data is loaded using different methods in this way, an error that occurs in one loading method does not require the entire loading process to be repeated, and the propagation of the error can be prevented.
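The idea of loading the two inputs through independent mechanisms can be illustrated as follows. Here a plain buffered read stands in for the memory pool method and Python's `mmap` stands in for the memory file-mapped method; these are approximations of the patent's terms, not their exact meaning.

```python
import mmap
import os
import tempfile

def load_with_pool(path: str) -> bytes:
    """'Memory pool' style stand-in: read the file into an in-process buffer."""
    with open(path, "rb") as f:
        return f.read()

def load_with_file_mapping(path: str) -> mmap.mmap:
    """'Memory file-mapped' style stand-in: map the file via mmap."""
    f = open(path, "rb")
    return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

# Demo: the two loads go through independent code paths, so a failure in one
# mechanism does not invalidate data already loaded through the other.
fd, path = tempfile.mkstemp()
os.write(fd, b"original-bytes")
os.close(fd)
pooled = load_with_pool(path)
mapped = load_with_file_mapping(path)
print(pooled == mapped[:])  # True: same content, different loading mechanisms
mapped.close()
os.remove(path)
```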
The control unit 240 checks whether the loading has been successfully performed at step S740. If an error has occurred in specific data (No at step S740), the control unit 240 may perform a process of loading only the corresponding data. Although in the flowchart of FIG. 7 the memory 220 is illustrated as being reallocated at step S720 and the erroneous data as being reloaded at step S730, in an embodiment it may be possible to reload only the data without reallocating the memory 220.
If the loading has been successfully performed, the control unit 240 may generate a patch file based on the original file and the patch data loaded into the memory 220 at step S750. For example, the control unit 240 may analyze the patch table of the patch data, divide the original file into one or more parts whose content will be moved without change and one or more parts on which patching will be performed based on the patch data, and generate the patch file in the region allocated to it based on this division. Here, the control unit 240 may generate the patch file piece by piece or all at once. That is, the control unit 240 may sequentially read the corresponding content from the original file or the patch data over the range from the memory unit allocated to the start point of the patch file to the memory unit allocated to its end point, thereby generating the patch file. In an embodiment, the control unit 240 may identify the parts of the original file whose content will be moved without change, read those parts from the original file and write them into the corresponding part of the patch file at once, and then write the parts taken from the patch data into the remaining part of the patch file at once.
The control unit 240 may check the resulting patch file for an error at step S760. Here, the control unit 240 may check the generated patch file for errors using the patch table included in the patch data. For example, assuming that hash values for the patch data are included in the patch table, the control unit 240 may run an error check by calculating hash values for the patched part of the patch file and comparing the hash values of the patched part with the hash values of the patch table.
If there is no error (No at step S770), the control unit 240 terminates the patch process. If there is an error (Yes at step S770), the control unit 240 may perform an error handling process at step S780. For example, if the data allocated to the memory 220 has been lost, the control unit 240 may repeat the series of steps S720 to S760, reallocating the memory 220 and reloading the data, in order to patch the parts having mismatched hash values. As another example, if the original file and the patch data remain in the memory 220, the control unit 240 may perform only steps S750 and S760, rewriting just the part associated with the error based on the original file and the patch data and running the error check again.
Although the example in which the memory 220 is partitioned into three regions has been described above with reference to FIG. 7, the memory 220 may instead be partitioned into two regions. That is, the control unit 240 may partition the memory 220 into a region for an original file and a region for patch data, read the content corresponding to the parts that need to be changed in the original file from the patch data, and overwrite the original file with the read content, thereby generating a patch file.
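The two-region variant reduces to overwriting the listed blocks in place, which can be sketched as below (the block size and table format are illustrative assumptions):

```python
def patch_in_place(original: bytearray, patch_table: dict[int, bytes],
                   block_size: int = 4) -> bytearray:
    """Overwrite only the blocks listed in the patch table; the untouched
    blocks of the original become part of the patch file as-is."""
    for block_no, content in patch_table.items():
        start = block_no * block_size
        original[start:start + block_size] = content
    return original

buf = bytearray(b"AAAABBBBCCCC")
print(bytes(patch_in_place(buf, {1: b"XXXX"})))  # b'AAAAXXXXCCCC'
```

This saves the third region at the cost of destroying the in-memory copy of the original file, which matters for the error recovery paths described above.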
FIG. 8 is a flowchart illustrating another embodiment of a patch method that is performed by the patch client of FIG. 4. The embodiment of the patch method disclosed in FIG. 8 relates to an embodiment of the patch that is performed using the temporary memory 230.
The control unit 240 may receive data from the patch server 100 at step S810, and measure the size of the data to be patched (i.e., the data that needs to be changed in an original file) at step S820. In an embodiment, the control unit 240 may measure the size of the data based on the patch table of the patch data.
The control unit 240 may allocate temporary memory 230 of a capacity corresponding to the size of the data to be changed at step S830, and load the data to be changed into the allocated temporary memory at step S840. In an embodiment, the control unit 240 may allocate the temporary memory 230 and load the data to be changed into it using the output buffer memory pool method. In the output buffer memory pool method, when patching fails, information about the failed patch is automatically recorded in a file. Accordingly, performing the series of error processing steps S870 to S890 may be simple.
The control unit 240 checks whether the loading was successful at step S850. If an error has occurred (No at step S850), the control unit 240 may perform the step of loading the data again. Although, in the flowchart of FIG. 8, the temporary memory 230 is illustrated as being reallocated at step S830 and the data to be changed is illustrated as being reloaded at step S840, only the data to be changed may be reloaded without allocating the temporary memory 230 in an embodiment.
The control unit 240 may generate a patch file based on the data loaded into the temporary memory 230 at step S860, and run an error check on the generated patch file at step S870. If an error has occurred (Yes at step S880), the control unit 240 may perform an error handling process at step S890.
FIG. 9 is a flowchart illustrating still another embodiment of a patch method that is performed by the patch client 200 of FIG. 4. The embodiment shown in FIG. 9 is an embodiment of a patch method that is performed based on a combination of the embodiments of FIGS. 7 and 8.
Referring to FIG. 9, the control unit 240 may receive patch data from the patch server 100 and check the patch data at step S910. For example, the control unit 240 may receive patch data and then check the overall size of the patch data.
After checking the patch data, the control unit 240 may calculate the currently available space of the memory 220 at step S920. Thereafter, the control unit 240 compares the available space of the memory 220 with the size of the patch data at step S930. If the available space of the memory 220 is sufficient, patching is performed using the memory 220; if it is not, patching is performed using the temporary memory 230.
In an embodiment, the control unit 240 may take into consideration the memory space to be returned when calculating the available space of the memory 220. This may be represented by the following Equation 1:
Memory_total = Memory_enable + (Memory_return * t) (1)
Referring to Equation 1, the available memory space Memory_total that may be used for patching may be the sum of the currently available memory space Memory_enable and the memory space expected to be returned within a specific time, Memory_return * t. Here, the memory space to be returned within the specific time is represented by multiplying the memory space expected to be returned within that time, Memory_return, by a probability value t. For example, assuming that applying the entire patch is expected to take about 1 minute after the patch data has been received, the control unit 240 may take the memory space expected to be returned within 1 minute as Memory_return. Here, the probability value t may be inversely related to the expected return time. For example, the probability value t of a memory space Memory_return A that is expected to be returned within 10 seconds may be higher than that of a memory space Memory_return B that is expected to be returned within 40 seconds.
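As a worked example of Equation 1, with hypothetical sizes in megabytes and hypothetical probability weights t (higher for memory expected back sooner):

```python
def available_memory(memory_enable: int, expected_returns: list[tuple[int, float]]) -> float:
    """Equation 1: Memory_total = Memory_enable + sum(Memory_return * t),
    where each expected return is weighted by its probability value t."""
    return memory_enable + sum(size * t for size, t in expected_returns)

# Illustrative numbers: 100 MB free now, 50 MB expected back in 10 s
# (t = 0.9) and 30 MB expected back in 40 s (t = 0.4).
total = available_memory(100, [(50, 0.9), (30, 0.4)])
print(total)  # 157.0
```

A patch of up to 157 MB would then be routed to the in-memory path (Yes at step S930); anything larger would use the temporary memory 230.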
If the available memory space is equal to or greater than the size of the patch data (Yes at step S930), the control unit 240 may perform patching using the memory 220 at steps S940 to S970. Here, since the steps of performing patching using the memory 220 have been described above with reference to FIG. 7, a detailed description thereof is omitted. If the available memory space is smaller than the size of the patch data (No at step S930), the control unit 240 may perform patching using the temporary memory 230 at steps S941 to S971. Here, since the steps of performing patching using the temporary memory 230 were described above with reference to FIG. 8, a detailed description thereof is omitted.
Although an error handling process has been omitted from the flowchart of FIG. 9, the error handling process may be performed as in the above-described embodiments.
The embodiment shown in FIG. 9 is performed based on a combination of the patch method of FIG. 7 using the memory 220 and the patch method of FIG. 8 using the temporary memory 230, with precedence given to the patch method using the memory 220. That is, if all of the patch data can be allocated to the memory 220, patching is performed rapidly using the memory 220; if the capacity of the memory 220 is smaller than the total size of the patch data, patching is performed using the temporary memory 230. Accordingly, a large amount of patch data may be patched more rapidly.
FIG. 10 is a flowchart illustrating yet another embodiment of a patch method that is performed in the patch client 200 of FIG. 4. The embodiment of FIG. 10 corresponds to another embodiment of a patch method that is performed based on a combination of the embodiments of FIGS. 7 and 8, and relates to an embodiment in which patching is performed on each of a plurality of files to be patched according to an optimized method when patch data includes the plurality of files. That is, in the embodiment of FIG. 9, whether to use the memory 220 or the temporary memory 230 is determined based on the entirety of the patch data. In the embodiment of FIG. 10, patching is performed on patch data, including a plurality of files, using any one of the memory 220 and the temporary memory 230 for each file.
Referring to FIG. 10, the control unit 240 may receive patch data from the patch server 100, and check the patch data at step S1010. In an embodiment, after receiving patch data including a plurality of files to be patched, the control unit 240 may check the size of each of the plurality of files. In another embodiment, the patch data may include information about the size of the plurality of files, and the control unit 240 may check each of the plurality of files for the size thereof based on the information.
The control unit 240 may calculate the currently available space of the memory 220 at step S1020. In this case, the control unit 240 may compare the available space of the memory 220 with the size of each of the files included in the patch data. Based on the comparison, the control unit 240 may patch a file having a size equal to or smaller than the available capacity of the memory 220 using the memory 220, and patch a file having a size greater than the available capacity of the memory 220 using the temporary memory 230. Here, since the available space of the memory 220 may be calculated as in FIG. 9, a detailed description thereof is omitted.
The control unit 240 checks whether there are files each having a size equal to or smaller than the available space of the memory 220 at step S1030. If there are such files (Yes at step S1030), the control unit 240 may patch those files of the patch data using the memory 220 at steps S1040 to S1070. Here, the steps of patching these files using the memory 220 may be performed for each file. If the available space of the memory 220 is greater than the combined size of two or more of the files, the control unit 240 may patch those files at the same time. Since a detailed method of performing patching using the memory 220 has been described above with reference to FIG. 7, a detailed description thereof is omitted.
The control unit 240 may patch the remaining files of the patch data, each having a size greater than the available space of the memory 220, using the temporary memory 230 at steps S1041 to S1071. For example, the control unit 240 may sum up the sizes of the remaining files of the patch data and allocate the temporary memory 230 corresponding to the sum. Since a detailed method of performing patching using the allocated temporary memory 230 has been described above with reference to FIG. 8, a detailed description thereof is omitted.
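The per-file routing of steps S1030 to S1071 can be sketched as a simple partition of the file list (the file names and sizes are hypothetical):

```python
def plan_patch(file_sizes: dict[str, int], memory_available: int) -> tuple[list[str], list[str]]:
    """Split the files to be patched into those that fit in the available
    memory and those that must use disk-backed temporary memory."""
    via_memory = [name for name, size in file_sizes.items() if size <= memory_available]
    via_temp = [name for name, size in file_sizes.items() if size > memory_available]
    return via_memory, via_temp

sizes = {"ui.pak": 40, "audio.pak": 120, "maps.pak": 300}  # hypothetical files (MB)
via_memory, via_temp = plan_patch(sizes, memory_available=150)
print(via_memory)  # ['ui.pak', 'audio.pak']
print(via_temp)    # ['maps.pak']
```

The temporary memory would then be allocated once, sized to the sum of the sizes in `via_temp`, as described above.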
In the embodiment of FIG. 10, patch data including a plurality of files is patched using at least one of the memory 220 and the temporary memory 230. Accordingly, patching may be performed more rapidly. Here, the control unit 240 may perform patching using the memory 220 and the temporary memory 230 in parallel depending on its computation performance. That is, the control unit 240 may patch some files, each having a size smaller than the available space of the memory 220, using the memory 220 and, at the same time, patch the remaining files using the temporary memory 230. If patching is performed in parallel, the time it takes to perform patching can be considerably reduced.
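The parallel execution mentioned above can be sketched with a thread pool; the two worker functions are placeholders standing in for the FIG. 7 (in-memory) and FIG. 8 (temporary-memory) patch paths, and the file names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def patch_with_memory(name: str) -> str:
    """Placeholder for the in-memory patch path of FIG. 7."""
    return f"{name}: patched in memory"

def patch_with_temp(name: str) -> str:
    """Placeholder for the temporary-memory patch path of FIG. 8."""
    return f"{name}: patched via temp memory"

# Run both patch paths concurrently, one worker per path.
with ThreadPoolExecutor(max_workers=2) as pool:
    small = pool.map(patch_with_memory, ["ui.pak"])
    large = pool.map(patch_with_temp, ["maps.pak"])
    results = list(small) + list(large)
print(results)
```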
In the disclosed technology, patching can be performed more rapidly and efficiently by maximizing the utilization of the resources of the patch client.
In the disclosed technology, a patch file having a high capacity can be patched rapidly and reliably.
In the disclosed technology, patching can be performed more rapidly using an optimized patch algorithm depending on the size of data to be patched.
In the disclosed technology, patching can be performed in a resource-efficient and error-tolerant manner by modifying only the erroneous part, rather than the entire file to be patched, when an error occurs in the patch process.
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.

Claims (15)

  1. A patch method, the patch method being performed in a patch client, the patch client being connectable to a patch server and including a storage device and memory, the patch method comprising the steps of:
    (a) accessing the patch server and receiving patch data from the patch server;
    (b) calculating an available space of the memory;
    (c) if a size of the patch data is smaller than or equal to the available space of the memory, performing patching using the available space of the memory; and
    (d) if the size of the patch data is greater than the available space of the memory, allocating temporary memory of a capacity, corresponding to the size of the patch data, to the storage device, and performing patching using the allocated temporary memory.
  2. The patch method of claim 1, wherein the patch data comprises a hash value for at least one patch file.
  3. The patch method of claim 1, wherein step (b) comprises:
    (b1) checking a currently available space of the memory;
    (b2) calculating a memory space to be returned within a specific time; and
    (b3) calculating the available space of the memory by summing up the currently available space and the memory space to be returned.
  4. The patch method of claim 3, wherein step (b2) comprises the step of calculating the memory space to be returned within the specific time by multiplying the memory space to be returned by a probability value inversely proportional to an expected return time of the memory space to be returned.
  5. The patch method of claim 1, wherein step (c) comprises the steps of:
    (c-1) partitioning the available space of the memory into first to third regions for an original file, the patch data, and a patch file, and then allocating the first to third regions to the original file, the patch data, and the patch file, respectively;
    (c-2) loading the original file and the patch data onto the first and the second regions, respectively; and
    (c-3) generating the patch file in the third region using the loaded original file and patch data.
  6. The patch method of claim 5, wherein step (c-2) comprises the step of loading the original file and the patch data using different memory loading methods.
  7. The patch method of claim 5, wherein step (c) further comprises the steps of:
    checking whether the step (c-2) has been successfully performed; and
    if an error has occurred only in specific data, loading only the specific data again.
  8. The patch method of claim 5, wherein step (c-3) comprises:
    analyzing a patch table included in the patch data; and
    distinguishing at least one part of the original file to be moved without change from at least one part of the original file to be patched.
  9. The patch method of claim 1, wherein step (d) comprises the step of performing patching using the allocated temporary memory based on an output buffer memory pool method.
  10. The patch method of claim 1, wherein step (d) comprises the steps of:
    (d1) calculating a size of data to be changed in an original file; and
    (d2) allocating the temporary memory of a capacity corresponding to the calculated size of the data.
  11. A patch server, the patch server being connected to a patch client and providing patch data, the patch server comprising:
    memory;
    a hash generation unit configured to generate at least one hash value for received data; and
    a control unit configured to load an original file and a patch file into the memory, to control the hash generation unit so that the hash generation unit compares the loaded original file with the loaded patch file and generates at least one hash value for a difference, to generate a patch table including the generated hash value, and to generate the patch data including the generated patch table.
  12. The patch server of claim 11, further comprising:
    a patch data storage unit configured to store the patch data and information about the patch data, the information comprising at least one of information about a patch version of the patch data, information about a total size of the patch data, and information about a size of each of a plurality of files included in the patch data.
  13. A patch client, the patch client being able to use memory and a storage device, and accessing a patch server, receiving patch data and performing patching, the patch client comprising:
    a control unit for comparing a size of the received patch data with an available space of the memory, performing the patch using the memory if the available space of the memory is equal to or greater than the size of the received patch data, and allocating temporary memory of a capacity, corresponding to the size of the patch data, to the storage device and then performing patching using the allocated temporary memory if the available space of the memory is smaller than the size of the received patch data.
  14. The patch client of claim 13, wherein if the available space of the memory is equal to or greater than the size of the received patch data, the control unit partitions the available space of the memory into at least two regions, loads an original file and the patch data onto the respective partitioned regions, and performs the patching.
  15. The patch client of claim 13, wherein if the available space of the memory is smaller than the size of the received patch data, the control unit compares a size of an original file with the size of the patch data, previously measures a size of at least one part to be changed in the original file, and allocates temporary memory of a capacity, corresponding to the measured size, to the storage device.
PCT/KR2012/006613 2011-12-30 2012-08-21 Patch method using memory and temporary memory and patch server and client using the same WO2013100302A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2011-0147740 2011-12-30
KR1020110147740A KR101246360B1 (en) 2011-12-30 2011-12-30 Method for data patch using memory and temporary memory and patch server and client thereof

Publications (1)

Publication Number Publication Date
WO2013100302A1 true WO2013100302A1 (en) 2013-07-04

Family

ID=47728119

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2012/006613 WO2013100302A1 (en) 2011-12-30 2012-08-21 Patch method using memory and temporary memory and patch server and client using the same

Country Status (4)

Country Link
KR (1) KR101246360B1 (en)
CN (1) CN102945170A (en)
TW (1) TW201327168A (en)
WO (1) WO2013100302A1 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002166304A (en) * 2000-11-30 2002-06-11 Ngk Spark Plug Co Ltd Throwaway cutting tool
WO2017084051A1 (en) * 2015-11-18 2017-05-26 深圳市大疆创新科技有限公司 External device management method, apparatus and system, memory, and unmanned aerial vehicle
CN107944021B (en) * 2017-12-11 2021-06-18 北京奇虎科技有限公司 File replacement method and device and terminal equipment
CN111179913B (en) * 2019-12-31 2022-10-21 深圳市瑞讯云技术有限公司 Voice processing method and device
CN112788384A (en) * 2021-02-07 2021-05-11 深圳市大鑫浪电子科技有限公司 Wireless digital television screen projection method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003150396A (en) * 2001-11-12 2003-05-23 Casio Comput Co Ltd Information processor and patch processing method
US20060130046A1 (en) * 2000-11-17 2006-06-15 O'neill Patrick J System and method for updating and distributing information
US20060136898A1 (en) * 2004-09-06 2006-06-22 Bosscha Albert J Method of providing patches for software
KR100670797B1 (en) * 2004-12-17 2007-01-17 한국전자통신연구원 Apparatus for real-time patching without auxiliary storage device and method therefor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008198060A (en) 2007-02-15 2008-08-28 Seiko Epson Corp Information processor, patch code mounting system, electronic equipment, and patch code mounting method
US20100131698A1 (en) * 2008-11-24 2010-05-27 Tsai Chien-Liang Memory sharing method for flash driver
CN102215479B (en) * 2011-06-22 2018-03-13 中兴通讯股份有限公司 AKU is downloaded and method, server and the system of installation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060130046A1 (en) * 2000-11-17 2006-06-15 O'neill Patrick J System and method for updating and distributing information
JP2003150396A (en) * 2001-11-12 2003-05-23 Casio Comput Co Ltd Information processor and patch processing method
US20060136898A1 (en) * 2004-09-06 2006-06-22 Bosscha Albert J Method of providing patches for software
KR100670797B1 (en) * 2004-12-17 2007-01-17 한국전자통신연구원 Apparatus for real-time patching without auxiliary storage device and method therefor

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160380739A1 (en) * 2015-06-25 2016-12-29 Intel IP Corporation Patch download with improved acknowledge mechanism
US9780938B2 (en) * 2015-06-25 2017-10-03 Intel IP Corporation Patch download with improved acknowledge mechanism
US10153887B2 (en) 2015-06-25 2018-12-11 Intel IP Corporation Patch download with improved acknowledge mechanism
US11797288B2 (en) 2019-04-17 2023-10-24 Huawei Technologies Co., Ltd. Patching method, related apparatus, and system

Also Published As

Publication number Publication date
CN102945170A (en) 2013-02-27
TW201327168A (en) 2013-07-01
KR101246360B1 (en) 2013-03-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 12861578; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 12861578; Country of ref document: EP; Kind code of ref document: A1)