Enabling Process Control Technology for Automated Dry Media Depaint System
Field of the Invention

This invention relates to automated dry-media blasting coating removal systems, more particularly to the automated starch media dry-stripping (SMDS) and plastic media blasting (PMB) processes, and to real-time computer-vision controllers for guiding and controlling the depainting nozzle(s) for optimal coating removal and stripping performance.
Background of the Invention
In the aircraft industry, it is necessary to remove the old layer of coating before repainting a part of an aircraft or the entire aircraft. It is known that hundreds of pounds of paint may be used to paint an entire transport-size aircraft. If old coatings are not removed before repainting, the weight of the aircraft increases accordingly, which in turn increases fuel consumption and affects the operation and appearance of the aircraft.
Traditionally, chemical paint strippers have been used in the aircraft industry for depainting aircraft before repainting. Increasing environmental awareness and worker health and safety concerns have led to significant efforts to find more acceptable ways to safely remove paint from aircraft. The use of traditional harsh, toxic chemical paint strippers is being restricted by legislation in many parts of the world. The most promising alternate technology uses wheat starch as the stripping media. The wheat starch is used in the same way that sand is used in sand blasting. Wheat starch has proven to be an effective paint stripper that does not harm or degrade aluminum alloys and composite surfaces. On certain aircraft, the plastic media blasting (PMB) process has also been found acceptable.
Such coating removal and depainting systems work adequately only when the depainting nozzle, which blasts dry particles such as the wheat starch media onto the surface to be depainted, is guided and controlled at a specific angle with respect to the surface and at a constant speed. Sophisticated vision controllers are needed to enhance the performance of these blasting systems. In recent years, many vision controller products have appeared on the market, but each product has been designed for a specific application. It is therefore difficult to find an off-the-shelf vision system that satisfies the particular requirements of all stripping systems. Due to the unique requirements of the automated depaint process, the applicant therefore developed its own system, which is the subject matter of the present invention.
Summary of the Invention
It is an object of the present invention to provide an automated depaint system capable of depainting aircraft using dry-stripping processes, including SMDS and PMB. The blasting nozzle used for projecting the high-speed dry medium (e.g. the wheat starch medium) onto the surface to be depainted may have a flat rectangular section rather than a circular one, in order to ensure efficient and uniform coating removal over the entire surface to be treated. Since such a nozzle must be manipulated at a regulated and controlled speed and at a particular distance and incidence angle with respect to the surface to be depainted, automatic controllers for holding, moving and guiding the blasting nozzle are required.
Thus, it is also an object of this invention to provide a computer-vision controller for guiding the nozzle in real time over the surface to be depainted, using image information of that surface collected through at least one video camera installed on the nozzle carrier, hereinafter called the end-effector. The computer-vision system uses an edge-tracking method for determining the linear edge between the painted area and the depainted area in order to avoid depainting the same surface twice. Image color acquired by the cameras is used for assessing the paint stripping level and for optimizing the traverse speed of the end-effector to obtain the desired depaint results. Another object of the computer-vision controller according to the present invention is to provide depainting quality information via the same camera. The control system is a closed-loop system, so that the nozzle direction and speed may be adjusted in real time depending on the information provided by the cameras.
The main components of this automated system are the carrier, the Process Equipment Trailer (PET), the operator control station, and a robotic manipulator. The robot arm comprises a travel beam, a four-degree-of-freedom serial-link manipulator, and an end effector. The four-degree-of-freedom manipulator mounted on the travel beam provides the vertical movement and the compliance of the end effector with respect to the aircraft surface. The travel beam mounted on the carrier provides lateral movement for the robot arm and end effector. The end effector contains the blasting nozzle and the vacuum hose. The blasting nozzle delivers the stripping media and the vacuum hose recovers the used media and paint. Sensors mounted on the end effector are used to guide the robot in real time through its desired stripping path. Among the sensors attached to the end effector is at least one camera, which is used to monitor the effectiveness of the stripping process. The camera defines the acquisition system of the vision control system developed by the inventor for the automated depaint system. This vision system is a crucial part of the automation process, as it provides the capability of edge tracking and quality control during the coating removal process. The edge tracking is used for strip-trace overlap control and for step-down control at the end of a trace. Quality control is mainly used to control the degree of depainting. One of the main requirements addressed by the vision system is the capability to remove coatings selectively, i.e. to remove the top coat while leaving the primer.
Accordingly, it is a broad object of the present invention to provide an automated coating removal system for treating and stripping painted surfaces, said system comprising: a coat removal end-effector for treating and stripping painted surfaces; sensor means attached to said end-effector for acquiring surface information to be used for controlling and driving said end-effector along a particular path; blasting nozzle means for blasting media particles onto said painted surface, said blasting nozzle means being attached to said coating removal end-effector; vision controller means for analyzing said surface information and providing a control signal for controlling and driving said end-effector along said path; and robot means for driving said end-effector along said path according to said control signal provided by said vision controller means.
It is another broad aspect of the invention to provide an apparatus for processing a series of video image signals from at least one video camera and for
determining a position of a substantially linear boundary between a stripped and an unstripped portion of a painted surface, said video image signals comprising an intensity component and a color or hue component, said apparatus comprising: means for comparing a contrast quality of said intensity component and said color component of at least one of said image signals to determine whether said linear boundary is best defined by said intensity component or by said color component and to output an image type control signal; and means for selectively analyzing one of said intensity component and said color component of said video signals in response to said image type control signal to obtain a position value of said linear boundary.
Yet another object of the invention is to provide an apparatus for processing a series of video image signals from a video camera to generate a trajectory control signal for an at least partially automated paint stripping blast end effector robotic system, said apparatus comprising: means for mounting said camera forward of at least one blast nozzle of said end effector; means for analyzing said image signals to obtain a position signal of a substantially linear boundary between a stripped and an unstripped portion of a painted surface; and means for generating said trajectory signal from said position signal, whereby said robotic system tracks said boundary.
According to another broad aspect of the invention, there is provided an apparatus for processing a series of video image signals from a video camera and for determining a quality of a stripped portion of a painted
surface, said video image signals comprising an intensity component and a color or hue component, said apparatus comprising: means for comparing a contrast quality of said intensity component and said color component of at least one of said image signals to determine whether said quality is best determined by said intensity component or by said color component and to output an image type control signal; and means for selectively analyzing one of said intensity component and said color component of said video signals in response to said image type control signal to obtain a quality value of said depainted portion.
Brief Description of the Drawings
FIG. 1 represents a detailed view of the end-effector;
FIG. 2 shows the hardware block diagram of the vision system;
FIG. 3 represents the software block diagram of the vision system;
FIG. 4 shows the strip-trace overlap definition as used in the present specification;
FIG. 5.A shows the intensity contrast between an aluminum panel and twelve samples of the most common colors used in the aircraft industry;
FIG. 5.B shows the color contrast between an aluminum panel and twelve samples of the most common colors used in the aircraft industry;
FIG. 6 shows the first step in the edge detection, which is the three-dimensional display of the acquired image color intensity level;
FIG. 7 shows the gradient and filtering step in the edge detection;
FIG. 8 shows the gradient selection during the edge detection process;
FIG. 9 shows the final edge representation in the XY plane;
FIG. 10 represents the color and intensity contrast levels for twelve overexposed top coats;
FIG. 11 represents the color and intensity contrast levels for twelve underexposed top coats;
FIG. 12 shows the best situations for detecting the edge for twelve surface samples;
FIG. 13 represents the processing time as a function of the processed area;
FIG. 14 shows the block diagram related to the color detection and quality control feature according to the preferred embodiment of the invention;
FIG. 15 shows the block diagram related to the edge tracking according to the preferred embodiment of the invention.
Description of the Preferred Embodiments
In an automated depaint process, machine vision is required to address two main issues: strip-trace overlap control and stripping process quality control. This invention discloses an automated SMDS vision controller, also called hereinafter an automated coating removal system, capable of satisfying both of the above requirements even when used in the highly constrained environment specific to that field of activity.
The automated coating removal system disclosed by the present invention may comprise a carrier for carrying a Process Equipment Trailer (PET) on which a robot arm is mounted. The robot arm comprises an end-effector at its free extremity which is responsible for the effective paint stripping activity.
The present invention relates specifically to the computerized vision controller which is used for driving the end-effector. The vision controller ensures that the end-effector is driven along the right path so that overlap between consecutive traces is avoided. It also performs depaint quality control by analyzing the color of the stripped trace, so that the speed of the end-effector may be adjusted to obtain the desired coat removal and stripping quality. Figure 1 illustrates the end-effector 10 during a typical stripping operation. The end effector 10, which is mounted on the robot arm (not shown), is moved horizontally across the surface to be stripped 22. During the depaint process, the end effector 10 moves from left to right, then steps down at the end of the trace, and comes back from right to left. The dark zone in Figure 1 is the stripped zone and the light zone is the painted one. The blasting nozzle means may comprise two nozzles 16 and 18, located on the right side of the end effector 10, which project the wheat starch. The mixture's residues are collected by the vacuum hose 20 on the left side of the end effector 10.
If, for example, we have an aluminum surface painted with a white topcoat, then the line between the stripped area and the unstripped one is well defined. This line is referred to as the lateral edge. To be able to strip a second trace located under the first one, the vision controller must determine the real position of the lateral edge. This data is used by the robot's controller to ensure minimum overlap between traces. The same sensors and the same algorithms are used for vertical edge detection, lateral edge detection and starting position identification. Hence, this disclosure focuses on lateral edge detection.
The image acquisition part of the system, which is used to detect the color edge, may be composed of sensor means, such as two micro cameras 12 and 14 which are located at each end of the end-effector 10. Automatic switching between cameras 12 and 14 is performed based on the travel direction, although only one camera may be used as well.
The second problem addressed by the vision controller is quality control. For a constant pressure of wheat starch, and for a given surface, the stripping quality is a non-linear function of the robot's speed. During the stripping process, the vision system processes images acquired from the aircraft surface in order to define the speed required to maintain the quality of stripping.
Typically, for most aircraft, there are three different layers of paint: the topcoat; the primer, which is used to increase the adhesion of the paint to the surface; and a chemical protection layer used against corrosion. In a depaint process, complete stripping refers to the removal of the top coat and the primer. Selective stripping refers to the removal of the top coat only.
From an image processing point of view, quality control means quantifying the level of primer and substrate seen by the camera. From these levels, an appropriate analysis is performed to determine whether the surface is perfectly stripped. For example, in the case of complete stripping, quality control determines whether both the top coat and the primer have been removed. The quality information generated is used to determine the stripping speed that leads to the desired performance.
Figure 2 illustrates the vision system's configuration. Shown in this figure are the image acquisition system composed of the two cameras 12 and 14,
the vision processing unit 24 composed of a vision station 26, a server 28 and a robot controller 30, and the link between the vision station 26 and the robot motion controller 30. Positioning and quality data generated by the vision station 26 is sent to the controller 30, where it is used to guide the end-effector 10 for edge tracking, and for stripping quality control.
The acquisition system shown in Figure 2 is composed of the two cameras 12 and 14 used for edge tracking and quality control, and the lighting system for each camera. In order to define the sensor and the light source, a spectral study is performed on different aircraft surfaces, and on different colors representing top coat and primer. The cameras and the light source are housed in a box which is optimally designed to take into account the viewing distance, the position, as well as the size and weight constraints. To avoid the specular reflections generated by the light source when illuminating bright surfaces, polarized filters may be used. Each picture acquired by the camera shows a part of the stripped zone and a part of the unstripped zone. The picture is then sent to the processing unit 24 for processing.
The flowchart of the vision system is illustrated in Figure 3. The processing unit 24 is a PC-based platform. As processing time is a major constraint, some dedicated image processing boards are used to perform pipeline processing. Only a part of the image processing is done on these boards. The analysis and other specific tasks performed with the image are done at the host level. The resulting data from the analysis may be sent to the robot's arm motion controller 30 through an Ethernet link.
The first step of the process is the digitization of the analog signal for both the quality and the lateral edge tracking tasks. These tasks may be performed by two 24-bit RGB grabbers 32, each one receiving one of the two interlaced signals 34 or 36 from one of the two cameras 12 or 14. Numerical conversion is then performed on the digitized data by the A/N converters 38 and 40. Different image processing algorithms are applied to the data, which is then sent from the dedicated hardware module 42 to the host 44. The edge coordinates and the quality information are deduced from this analysis. In addition to the image analysis tasks, the host 44 is also used for operator data display and for validation of the data sent to the motion controller 30.
The validity of the data is important. Even after all necessary precautions are taken, the rule of thumb in numerical vision is to treat all data as potentially wrong. To avoid a dangerous situation due to corrupted data, different validation measures are implemented at the image processing level and at the motion control level.
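The following sketch illustrates the kind of plausibility checks that could be applied to edge data before it reaches the motion controller, in the spirit of the validation measures described above. The function name, the image height and the maximum-jump limit are illustrative assumptions, not values from the original system.

```python
def validate_edge(edge_y, previous_edge_y, image_height=480, max_jump=25):
    """Reject edge positions that fall outside the image or that jump
    implausibly far from the previous accepted measurement."""
    if not 0 <= edge_y < image_height:
        return False  # outside the field of view: likely corrupted data
    if previous_edge_y is not None and abs(edge_y - previous_edge_y) > max_jump:
        return False  # implausible jump between consecutive frames
    return True
```

A rejected measurement would typically be discarded and the previous trajectory maintained until a valid one arrives, rather than commanding the robot with suspect data.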
Edge tracking
Edge tracking is used to generate the necessary data in real-time in order to guide the robot during the coat removal and stripping process, and thus minimize overlaps between consecutive traces. Edge detection is also used between traces when the robot steps down, and at the beginning of the stripping for each zone. Edge detection during the step-down phase ensures that vertical edges between traces are aligned. Edge detection during the starting phase ensures that the stripping starts at the desired position and hence overlaps between zones are controlled.
The same sensors and the same algorithms are used for vertical edge detection, lateral edge detection, and
starting position identification. Hence, in this specification, we focus on lateral edge detection only.
Edge detection is useful to avoid a positive or negative overlap. Figure 4 shows the difference between these two overlaps. A positive overlap is defined by an unstripped zone located between two consecutive traces. A negative overlap is defined by a zone stripped twice at the junction of the two traces.
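The sign convention of Figure 4 can be expressed numerically. In this hypothetical sketch (names and the downward-growing coordinate are assumptions), the signed overlap is derived from the detected lateral edge of the previous trace and the top of the current trace:

```python
def trace_overlap(previous_edge_y, current_trace_top_y):
    """Signed overlap between two consecutive traces (y grows downward).
    > 0: an unstripped band is left between the traces (positive overlap);
    < 0: a band at the junction is stripped twice (negative overlap);
    = 0: the traces are perfectly abutted."""
    return current_trace_top_y - previous_edge_y
```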
The edge to be detected arises from a difference in contrast between two regions of the acquired image. The easiest way to detect this contrast is to use the intensity contrast level reflected by the aircraft surface. To improve the robustness of the detection, especially when the intensity contrast is not good enough, the contrast in the color scale may be used as an alternate parameter for detecting the edge. One of the main features of the present stripping controller is its capability of choosing between the color information and the intensity information for edge detection (the same choice is performed for quality control). The colors of the stripped and the unstripped portions are compared using comparing means of the coat removal system. The same procedure is applied to the intensity of the two portions, and the information showing the best contrast for the particular surface is chosen for further coat removal. The system continues controlling the blasting end-effector using the chosen information, in order to find the position value of the linear boundary between the two portions.
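The channel selection described above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the region representation (lists of per-pixel intensity/hue pairs), the function names, and the threshold of 50 (taken from the detection limit discussed with Figure 5.A) are all assumptions.

```python
def mean_levels(region):
    """Mean (intensity, hue) of a region given as (intensity, hue) pixel pairs."""
    n = len(region)
    return (sum(i for i, _ in region) / n, sum(h for _, h in region) / n)

def select_channel(stripped_region, unstripped_region, threshold=50):
    """Choose the channel (intensity or color) showing the larger contrast
    between the stripped and unstripped portions of the image."""
    si, sh = mean_levels(stripped_region)
    ui, uh = mean_levels(unstripped_region)
    intensity_contrast = abs(si - ui)
    color_contrast = abs(sh - uh)
    if max(intensity_contrast, color_contrast) < threshold:
        return None  # no usable contrast: edge detection not possible here
    return "intensity" if intensity_contrast >= color_contrast else "color"
```

For a surface like sample six of Figure 5, where the topcoat matches stripped aluminum in intensity but differs in hue, the color channel would be selected; where neither channel reaches the threshold, the system would fall into the dead zone of Figure 12.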
To illustrate this approach, Figures 5.A and 5.B show the average difference in intensity and in color between a stripped aluminum panel and panels covered with the most common colors used in the aircraft industry. Figure 5.A shows the intensity contrast between an aluminum panel (light column) and different classical topcoats (dark column). As illustrated in this figure, detection becomes possible when the difference between two columns reaches fifty or more. For example, sample number six illustrates a non-reflective light blue sample compared to an aluminum sample stripped of its paint. As shown in Figure 5.A, there is no contrast in intensity. However, Figure 5.B shows that the color information is better suited to detecting the edge. The detection of a line between two consecutive traces of the end-effector 10 is referred to as edge detection or detection of a substantially linear boundary. According to the invention, the processing apparatus processes a series of video images acquired by the video cameras 12 and 14 in order to determine that boundary between the stripped and the unstripped portions. Figures 6 to 9 show the same picture acquired by one of the cameras 12 or 14 at different steps of the processing phase. These figures show the three-dimensional plot of intensity level versus location in the image (x,y) plane.
Figure 6 shows a separation between a substrate and classical paint. The goal is to isolate the junction between the two regions. Figure 7 represents the gradient taken on the previous image after filtering. Figure 8 shows the highest-gradient selection and, finally, Figure 9 shows the final edge representation in the XY plane. The isolated line can then be analyzed to determine whether it is really an edge. At the end of the edge detection analysis, the coordinates of the extracted edge are validated. All these processing steps are summarized in Figure 15, which represents the high-level software diagram for the edge detection.
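The filter-gradient-select sequence of Figures 7 and 8 can be sketched on a single horizontal scanline. This is a deliberately minimal illustration under assumed names: a real implementation would process the full two-dimensional image on the dedicated hardware described earlier.

```python
def smooth(row, radius=1):
    """Simple moving-average filter (the filtering step of Figure 7)."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def edge_position(row):
    """Index of the largest absolute gradient along the scanline,
    i.e. the highest-gradient selection of Figure 8."""
    filtered = smooth(row)
    gradients = [abs(filtered[i + 1] - filtered[i])
                 for i in range(len(filtered) - 1)]
    return max(range(len(gradients)), key=gradients.__getitem__)
```

On a scanline crossing from dark paint to bright substrate, the returned index falls at the transition, giving one point of the substantially linear boundary; repeating this over many scanlines yields the line of Figure 9.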
The light source level used has a large impact on the robustness of the system. Reflective or non-reflective panels could generate, for a specific light level, an overexposed or underexposed image. Consequently, the operator may have to adjust the light source level according to the topcoat characteristics. To reduce operator interventions, the system must work over a specific range of lighting variations. Figures 10 and 11 show the difference in the behavior of the color and intensity contrasts for twelve top coats overexposed to light. Figure 10 shows that, in the case of an overexposed panel, samples five to twelve easily lose their intensity contrast, but the color contrast is still good to work with.
Figure 11 shows a slight underexposure situation. It may be observed that for this underexposed situation, samples two to eight lose their intensity contrast, but the color contrast is still useful.
Figure 12 presents the same information differently. Each axis shows a sample with the color or intensity contrast level for one optimal light level. The goal, from an algorithmic point of view, is to stay on the perimeter of the total surface drawn by the two curves in order to avoid any situation where no edge detection would be possible. The small circle shown in the center illustrates the dangerous zone that must be avoided.
Quality Control
Quality control is based on color detection. In order to detect a quality variation, the vision system 24 must learn different color mixtures. The learning process is done during the calibration period. In order to perform a quick and safe calibration, the end-effector 10 is commanded to strip twenty inches of surface using a constant acceleration. Hence, the trace shows a continuous variation of quality. For example, for an aluminum panel, there will be a progression from completely stripped aluminum to the topcoat. This sample is then used to teach the vision system a minimum of ten variations along the trace. Using these ten variations, a mathematical model is built. During the real stripping process, the live picture is compared with the model in order to determine the level of primer or aluminum seen on the surface.
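The calibration-based matching and the resulting speed adjustment might be sketched as below. Everything here is an illustrative assumption: the hue values in the calibration table, the single-feature nearest-neighbour match standing in for the mathematical model, and the proportional speed law are placeholders, not the disclosed method.

```python
# Ten (strip level, mean hue) pairs learned along the constant-acceleration
# calibration trace: 0.0 = untouched topcoat ... 1.0 = fully stripped aluminum.
CALIBRATION = [(i / 9, 140 - i * 12) for i in range(10)]

def strip_level(mean_hue):
    """Strip level of the calibration sample whose hue best matches the
    live image (nearest-neighbour match on a single color feature)."""
    return min(CALIBRATION, key=lambda s: abs(s[1] - mean_hue))[0]

def adjust_speed(current_speed, level, target=1.0, gain=0.2):
    """Slow the end-effector when the trace is under-stripped; return to
    or exceed nominal speed once the target strip level is reached."""
    return current_speed * (1 + gain * (level - target))
```

With such a scheme, an image matching an under-stripped sample lowers the commanded traverse speed so the nozzle dwells longer, which is consistent with the closed-loop behavior described in the Summary.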
For the coat removal control too, the system uses the same selection between the color information and the intensity information of the two portions, from the stripped portion and the unstripped portion of the surface to be depainted, respectively. This procedure has been described in greater detail in the previous section. It allows the selection of the best information, either the color or the intensity, provided by the surface to the vision controller, which uses it to compute and control the coat removal quality of the blasting nozzles by adjusting the speed of the end-effector.
The simplified quality control software diagram is shown in Figure 14. First, an image of the stripped surface is acquired by one of the cameras 12 or 14, depending on the direction of stripping. The image is sent through cables to the vision station 26, where it is converted from analog to digital by the A/N converter 40. From there, the intensity contrast or the color contrast is computed and compared to a series of samples previously recorded by the system. The best match is found and is used to adjust the speed of the end-effector, controlling at the same time the quality of the stripping process performed on the aircraft surface.
Figure 13 shows the processing time as a function of the size of the image processed, for both quality and edge detection. Obviously, the larger the picture, the longer the processing time required.
The vision controller disclosed by the present invention is integrated within the overall coating removal system. As previously presented, the first task of this system is to generate the data required to modify the real-time trajectory of the robot arm so that minimum overlap is obtained between consecutive traces. The second task of the vision controller is to provide quality information used to define the stripping speed so that the desired quality of stripping is obtained.