METHOD FOR ANALYZING A STRUCTURED DOCUMENT THAT CAN BE DEFORMED
Patent abstract:
The invention relates to a method for analyzing a structured document that can be deformed, from a first image and a second image of the document, comprising steps of:
• matching first points of interest extracted in the first image with second points of interest of a reference image showing a model of the document,
• estimating, from these correspondences, a first geometric transformation taking into account deformations of the structured document shown in the first image relative to the model,
• determining at least a first region to be analyzed in the first image, by projecting at least one reference region of the reference image by means of the first transformation,
• analyzing the content of the first determined region,
• matching third points of interest extracted in the second image with fourth points of interest extracted in the first image,
• estimating, from the correspondences made in the preceding step, a second geometric transformation taking into account deformations of the document shown in the second image relative to the document shown in the first image,
• estimating a third geometric transformation taking into account deformations of the document shown in one of the two images, called the target image, relative to the model shown in the reference image, the third transformation depending on the second transformation,
• determining at least a second region to be analyzed in the target image by projecting the reference region of the reference image by means of the third geometric transformation,
• analyzing the content of the second determined region.
Publication number: FR3064782A1
Application number: FR1752725
Filing date: 2017-03-30
Publication date: 2018-10-05
Inventors: Jeremy Nicolas Laurent Dura; Laurent Patrice Rostaing; Alain Rouh
Applicant: Safran Identity and Security SAS
IPC main class:
Patent description:
FRENCH REPUBLIC - NATIONAL INSTITUTE OF INDUSTRIAL PROPERTY
Publication number: 3 064 782 (to be used only for reproduction orders)
National registration number: 17 52725
COURBEVOIE
Int. Cl.8: G 06 K 9/00 (2017.01), G 06 T 3/00
A1 PATENT APPLICATION
Date of filing: 30.03.17
Date of public availability of the request: 05.10.18, Bulletin 18/40
Applicant(s): SAFRAN IDENTITY & SECURITY, simplified joint stock company - FR
Inventor(s): DURA JEREMY NICOLAS LAURENT, ROSTAING LAURENT PATRICE and ROUH ALAIN
List of documents cited in the preliminary search report: refer to the end of the present booklet
Holder(s): SAFRAN IDENTITY & SECURITY, joint stock company
Agent(s): REGIMBEAU
Title: METHOD OF ANALYSIS OF A STRUCTURED DOCUMENT LIKELY TO BE DEFORMED
FR 3 064 782 - A1

FIELD OF THE INVENTION

The present invention relates to a method for analyzing the content of a structured document that can be deformed.

STATE OF THE ART

A structured document such as an identity document, a game ticket, a proof of address, an invoice, a form, etc., is generally generated on the basis of a document template. The structured document thus includes predetermined generic information already present in the template, but also personalized information. For example, in a passport, the first and last names of the passport holder constitute personalized information.

It is well known to analyze a structured document by acquiring an image of such a document and analyzing the content of this acquired image. However, such an analysis is made difficult when the document analyzed is deformed, for example crumpled.

Methods, such as that described in document FR 2952218, propose to take into account deformations in the image of a document by means of software and structured-light projection, in order to determine an image of a virtual document identical to the actual document but which would not be deformed. For this, points of interest of a two-dimensional grid projected onto the document are mapped to the corresponding points of the crumpled document, and a deformation of the document is calculated on the basis of these correspondences. However, such methods have the disadvantage of requiring the use of a grid projection system.
Similarly, the method proposed in document WO 2011/058418 proposes to correct a perspective transformation of an acquired image showing a document to be analyzed. To correct such a transformation, points of interest extracted from the acquired image are matched with predetermined points of interest of a reference image showing a document model. A homography of the image of the document compatible with these point correspondences is then calculated. However, it is necessary to predetermine a large number of points of interest in the reference image so that a large number of correspondences can be made and so that the transformation is corrected precisely. In other words, the method described in document WO 2011/058418 requires a high computational load to accurately characterize the deformations undergone by the document.

STATEMENT OF THE INVENTION

An object of the invention is to analyze a deformed structured document by means of a method which does not require a grid projection system, and which is less computationally costly than the prior art methods, with equal precision.
A method is therefore proposed for analyzing a structured document that can be deformed, from a first image and a second image of the document, comprising steps of:
• matching first points of interest extracted in the first image with second points of interest of a reference image showing a model of the document,
• estimating, from these correspondences, a first geometric transformation taking into account deformations of the structured document shown in the first image relative to the model,
• determining at least a first region to be analyzed in the first image, by projecting at least one reference region of the reference image by means of the first transformation,
• analyzing the content of the first determined region,
• matching third points of interest extracted in the second image with fourth points of interest extracted in the first image,
• estimating, from the correspondences made in the previous step, a second geometric transformation taking into account deformations of the document shown in the second image relative to the document shown in the first image,
• estimating a third geometric transformation taking into account deformations of the document shown in one of the two images, called the target image, relative to the first model shown in the reference image, the third transformation depending on the second transformation,
• determining at least one second region to be analyzed in the target image by projecting the reference region of the reference image by means of the third geometric transformation,
• analyzing the content of the second determined region.

The steps of matching points and estimating geometric transformations are applied to the two acquired images showing the same document to be analyzed.
Points of interest belonging to regions with personalized content can thus be matched; this constitutes a source of additional information for better characterizing the deformations undergone by the structured document considered, and consequently for refining the subsequent analysis of the content of the document.

The proposed method can be completed using the following optional characteristics, taken alone or in combination when technically possible.

The first image can be acquired while the document is illuminated by light radiation in a first wavelength band, and the second image can be acquired while the document is illuminated by light radiation in a second wavelength band, the second wavelength band being different from the first wavelength band, or the second image being acquired after the first image.

The first wavelength band may be in the visible range, and the second wavelength band may be in the infrared range, for example the near-infrared range, or in the ultraviolet range.

The method can also be implemented from a third image acquired while the document is illuminated by light radiation in a third wavelength band different from the second wavelength band, in which case the second wavelength band is in the infrared range, the third wavelength band is in the ultraviolet range, and the method includes determining at least a third region to be analyzed in the third image, and analyzing the content of the third determined region.

The acquired images can show the document from different viewing angles. The images can be acquired successively by means of the same objective, or else be acquired by distinct objectives.
The method may further comprise, for a plurality of predetermined regions of interest of the first reference image, an estimation of a local geometric transformation specific to the region of interest, from the second points of interest located in the region of interest and the first points of interest with which these second points of interest have been matched, in which case the first geometric transformation is also estimated from the local geometric transformations specific to the regions of interest of the first reference image.

The method may further include merging the results provided by each analysis step.

The merging may include an authentication of the document, in which the authentication succeeds on condition that an item of information is found during the analysis of at least one of the acquired images and is not found during the analysis of at least one other of the acquired images.

Each analysis may include character recognition, and the merging may then include an arbitration between the characters recognized during the analysis steps.

The target image may be the second acquired image. In this case, the third transformation can be a composition of the first geometric transformation and the second geometric transformation.

Alternatively, the method can include the following steps:
• matching fifth points of interest extracted in the second acquired image with sixth points of interest of a second reference image showing a second model of the document,
• estimating, from the correspondences made in the previous step, a fourth geometric transformation taking into account deformations of the document shown in the second acquired image relative to the second model,
in which:
• the target image is the first acquired image,
• the third transformation is a composition of the inverse of the second transformation, of the fourth transformation, and of a predetermined fifth geometric transformation taking into account deformations of the second model relative to the first model.
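If each of the transformations above is modelled as a homography, the compositions (and the inverse of the second transformation) reduce to products and inverses of 3x3 matrices. A minimal sketch with purely illustrative matrices, not values from the patent; the names follow the "TXiYj" convention introduced later in the description:

```python
import numpy as np

# Illustrative homographies (assumptions, not values from the patent).
TM1A1 = np.array([[1.0, 0.0, 3.0],   # first transformation: model M1 -> image A1
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])
TA1A2 = np.array([[2.0, 0.0, 0.0],   # second transformation: image A1 -> image A2
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])

# Third transformation when the target image is A2: composition of the two.
TM1A2 = TA1A2 @ TM1A1

# Going from A2 back towards A1 uses the inverse of the second transformation.
TA2A1 = np.linalg.inv(TA1A2)
```

Under this model, a point of the model at homogeneous coordinates (0, 0, 1) is carried to (3, 1, 1) in A1 and then to (6, 2, 1) in A2 by the composed transformation.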
The method can also comprise, for a plurality of regions of interest of the second reference image, an estimation of a local geometric transformation specific to the region of interest, from the sixth points of interest located in the region of interest and the fifth points of interest with which these sixth points of interest have been matched, in which case the fourth transformation is also estimated from the local geometric transformations specific to the regions of interest of the second reference image.

According to a second aspect of the invention, there is proposed a computer program product comprising code instructions for the execution of an analysis method according to the first aspect of the invention, when this program is executed by a processor.

According to a third aspect of the invention, there is also proposed a device for analyzing the content of a structured document that can be deformed, the device comprising an interface for receiving a first image showing the document and a second image showing the document, and an image processing module configured to:
• match first points of interest extracted in the first acquired image with second points of interest of a first reference image showing a first model of the document,
• estimate a first geometric transformation taking into account deformations of the structured document shown in the first acquired image relative to the first model, from the correspondences made in the previous step,
• determine at least one first region to be analyzed in the first acquired image, by projecting at least one reference region of the first reference image by means of the first transformation,
• analyze the content of the first determined region,
• match third points of interest extracted in the second acquired image with fourth points of interest extracted in the first acquired image,
• estimate, from the correspondences made in the previous step, a second geometric transformation taking into
account deformations of the document shown in the second acquired image relative to the document shown in the first acquired image,
• estimate a third geometric transformation taking into account deformations of the document shown in one of the two acquired images, called the target image, relative to the first model, the third transformation depending on the second transformation,
• determine at least one second region to be analyzed in the target image by projecting the reference region of the first reference image by means of the third geometric transformation,
• analyze the content of the second determined region.

DESCRIPTION OF THE FIGURES

Other characteristics, objects and advantages of the invention will emerge from the following description, which is purely illustrative and non-limiting, and which should be read with reference to the appended drawings, in which:
• Figure 1 shows a structured document in a deformed state.
• Figure 2 schematically illustrates a device for analyzing a structured document, according to one embodiment, comprising an image acquisition module.
• Figure 3 schematically illustrates an image acquisition module according to another embodiment.
• Figure 4 shows an example of a reference image showing a structured document model.
• Figure 5 is a flow diagram of steps of a method for analyzing a structured document, according to a first embodiment of the invention.
• Figure 6 shows the reference image of Figure 4, a first acquired image showing a structured document, and point correspondences between the two images carried out during a step of the method of Figure 5.
• Figure 7 shows, in addition to the images of Figure 6, a second acquired image showing a structured document, and point correspondences between the two acquired images, carried out during another step of the method of Figure 5.
• Figure 8 schematically shows images used during the implementation of the method of Figure 5, as well as geometric transformations between these images.
• Figure 9 is a concrete example of a reference image capable of being used during the implementation of the method of Figure 5.
• Figure 10 is a concrete example of an image acquired during the implementation of the method of Figure 5.
• Figure 11 is a flow diagram of steps of a structured document analysis method, according to a second embodiment of the invention.
• Figure 12 schematically shows images used during a structured document analysis method according to another embodiment of the invention, as well as geometric transformations between these images.

In all of the figures, similar elements bear identical references.

DETAILED DESCRIPTION OF THE INVENTION

A. Structured document

A structured document is a document in which at least part of the information of interest to be analyzed is written in one or more predetermined zones whose location is known and fixed for all documents of the same type, as opposed to a free-form document. The information carried by the document can be in the form of handwriting as well as block letters, or even graphic elements such as images or logos. Such documents are, for example, identity documents such as passports, identity cards or driving licenses, forms, invoices, receipts, multiple-choice exam sheets or game tickets. For each of these types of documents, at least one document model can be defined.

In a structured document of a particular model, two types of regions of interest are found. Some of these regions of interest have a predetermined content, found on all documents of the same model, while others have a personalized content, that is to say a content that varies between documents of the same model.
The regions of interest are systematically found in the same place with the same content in different structured documents derived from the same model, on the assumption that these documents are in an undeformed state. A region of interest can include one or more predetermined graphic elements (image, logo or character string in a predetermined font).

By way of example, a structured passport-type document is shown in Figure 1. This document notably includes a first region of interest comprising the character string "surname" in block letters. This first region has a predetermined content in the sense that it is found on all other passports of the same model. This document also includes a second, personalized region of interest containing the surname of the passport holder (here "Donatien").

A structured document in perfect condition is generally flat, without folds. However, the passport shown in Figure 1 is in a deformed state, more precisely crumpled.

B. Structured document analysis system

With reference to Figure 2, a device 1 for analyzing a structured document comprises an image acquisition module 2 for the structured document, and a processing module 4 for the images acquired by the acquisition module 2.

The image acquisition module 2 can be configured to acquire images of different types. The image acquisition module 2 is configured to produce at least one image of a first type, carrying information on the document to be analyzed when the document is illuminated by a light source in the visible range S_V. The light source in the visible range S_V is a light source generating light radiation having at least one wavelength in the range from 380 nanometers to 780 nanometers. The image acquisition module 2 can include such a light source S_V in the visible range so as to illuminate the document to be analyzed.
As a variant, the image acquisition module 2 does not include such a light source in the visible range, and the document is simply subjected to ambient lighting. The image acquisition module 2 also includes a sensor D_V sensitive to the visible wavelengths emitted by the source S_V in the visible range.

Furthermore, the image acquisition module 2 is configured to produce at least one image of a second type, carrying information on the document to be analyzed when the document is illuminated by an infrared (IR) source, that is to say a light source generating light radiation having at least one wavelength greater than 780 nanometers, for example in the near-infrared range (from 780 nm to 3 µm). The image acquisition module 2 then comprises such an infrared source S_IR and, moreover, an infrared sensor D_IR sensitive to the infrared wavelengths emitted by the infrared source S_IR.

Furthermore, the image acquisition module 2 is configured to produce at least one image of a third type, carrying information on the document to be analyzed when the document is illuminated by an ultraviolet (UV) source, that is to say a light source generating light radiation in the ultraviolet range (radiation having at least one wavelength less than 380 nanometers). The image acquisition module 2 then comprises such a UV source, denoted S_UV.

By abuse of language, the images of the first type, second type and third type are hereinafter called respectively "visible images", "infrared images" and "UV images".

In the embodiment illustrated in Figure 2, the image acquisition module 2 comprises three image capture units each comprising an objective defined by an optical axis: a first unit 6 comprising a first objective O_V for acquiring visible images, a second unit 8 comprising a second objective O_IR for acquiring infrared images, and a third unit 10 comprising a third objective O_UV for acquiring UV images.
The three types of image can thus be produced in parallel by the three image capture units 6, 8, 10.

In another embodiment illustrated in Figure 3, the acquisition module 2 comprises a single objective O which is used for the acquisition of several types of images, for example all three types. This image acquisition module 2 is configurable in an acquisition mode in the visible range, an acquisition mode in the infrared range and an acquisition mode in the ultraviolet range. The image acquisition module 2 typically includes an infrared filter movable between an active position and an inactive position. The infrared filter is adapted to selectively retain at least one infrared wavelength emitted by the infrared source, and to eliminate any wavelength in the visible range.

When the image acquisition module 2 is configured in the visible acquisition mode, the images acquired by the image acquisition module 2 are classified as visible images. The light source in the visible range possibly integrated into the image acquisition module 2 is activated in the visible acquisition mode.

When the image acquisition module 2 is configured in the infrared acquisition mode, the infrared source is activated, and the infrared filter is positioned in the active position. The infrared radiation emitted by the source is projected towards the document to be analyzed. When the document includes certain infrared-sensitive materials (this is typically the case with certain inks), the infrared radiation is reflected by the surface of the document, and is then received by the objective of the image acquisition module 2. However, the image capture device simultaneously captures visible radiation if the document is not plunged into darkness. The received radiation passes through the infrared filter; only the infrared wavelengths are retained in the radiation obtained at the filter output. An infrared image is then produced on the basis of the filtered radiation.
When the image acquisition module 2 is configured in the ultraviolet acquisition mode, the UV source is activated. The infrared filter is positioned in the inactive position. The UV radiation emitted by the UV source is projected towards the document to be analyzed. The document is likely to present certain information invisible to the naked eye, which becomes visible when it is illuminated by UV radiation. The incident UV radiation undergoes a wavelength shift towards the visible range, so that the radiation reflected by the document comprises at least one wavelength in the visible range carrying this information. This reflected radiation is received by the camera, and an image of the third type is produced on the basis of this received radiation. As the infrared filter is in its inactive position, the received radiation is not filtered by the infrared filter. Such information invisible to the naked eye is used, for example, to make it more difficult to copy official structured documents, such as an identity card or passport.

Ultimately, the image acquisition module 2 is configured to produce at least one type of image from the three types of images mentioned above. It will be seen below that these different images, carrying different but complementary information, can advantageously be combined to improve the accuracy of the analysis of the structured document.

Returning to Figure 2, the image processing module 4 comprises at least one processor 12 configured to implement the image processing algorithms which will be detailed below. The image processing module 4 also comprises at least one memory 14 for storing data, in particular images received from the acquisition module or images resulting from the image processing implemented by the processor. The memory 14 stores in particular at least one reference image showing a structured document model in an undeformed state.
The document model shown in a reference image includes several regions of interest of predetermined content. For each region of interest of predetermined content, points of interest associated with the region are also stored in the memory. Each point of interest is for example defined by a pair of coordinates making it possible to locate it in the reference image. On the other hand, points of interest relating to personalized regions of interest are not identified in the reference image, for the simple reason that the model shown in this reference image does not include such personalized regions of interest.

By way of example, Figure 4 illustrates a reference image M1 having four regions of interest of predetermined content Zr1, Zr2, Zr3, Zr4, of rectangular shape. For example, four points of interest are stored for each of these four regions of interest, corresponding to the four corners of these regions, it being understood that other points of interest located within these regions can be used. The reference image M1 is for example a visible image having been acquired beforehand by the image acquisition module 2, or by an equivalent device.

As will be seen below, other reference images can also be stored in the memory, each reference image being associated with a type of image capable of being acquired by the image acquisition module 2. The memory can thus store at least one visible reference image (including the image M1) and/or at least one infrared reference image and/or at least one UV reference image.

C. Method for analyzing the content of a structured document

Several embodiments of a method for analyzing a document such as that represented in Figure 1 will be described in the following, by means of the device according to one of Figures 2 or 3.

C.1. First embodiment of the method, using a reference image and two acquired images

The steps of an analysis method according to a first embodiment are illustrated in the flow diagram of Figure 5.
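For illustration, the stored reference data described above (a reference image, its regions of interest of predetermined content, and the corner points of each region) could be organized as follows; all names and coordinates are assumptions for the sake of the sketch, not values from the patent:

```python
# Illustrative layout of the reference data held in memory 14 (assumed names).
reference_M1 = {
    "type": "visible",  # each reference image is associated with an image type
    "regions": {
        # Four stored points of interest per region: the four corners.
        "Zr1": {"corners": [(120, 40), (300, 40), (300, 80), (120, 80)]},
        "Zr2": {"corners": [(120, 100), (420, 100), (420, 140), (120, 140)]},
        "Zr3": {"corners": [(120, 160), (260, 160), (260, 200), (120, 200)]},
        "Zr4": {"corners": [(120, 220), (380, 220), (380, 260), (120, 260)]},
    },
}
```

Personalized regions are deliberately absent from this structure, since the model shown in the reference image does not include them.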
The image acquisition module 2 acquires a first visible image A1. The first acquired image A1 is stored in the memory 14 of the image processing module 4.

With reference to Figures 6 and 7, the first acquired image A1 shows the document and, more precisely, regions of interest Za1, Za2, Za3, Za4, Za10, Za11. Some of these regions (the regions Za1, Za2, Za3, Za4) have a predetermined content similar to the regions of interest Zr1, Zr2, Zr3, Zr4 of the reference image M1. As can be seen in Figure 6, the deformations of the document to be analyzed are such that the regions of interest of predetermined content Za1 to Za4 do not appear in exactly the same form, orientation and/or position in the image A1 as the corresponding regions of interest Zr1 to Zr4 in the reference image M1.

When several reference images associated with the "visible" image type are stored in the memory, one of them is selected (step E2). It is assumed in what follows that the image M1 is selected as the visible reference image.

The image processing module 4 determines in the acquired image at least one region of interest (step E4). The image processing module 4 extracts points of interest in the acquired image A1, according to a method known from the state of the art (step E5). The image processing module 4 matches points of interest of the region of interest Zr1 of the image M1, stored in the memory, with points of interest extracted in the image A1 (step E6), according to a method known from the state of the art. For example, the point P1A1 located in the zone Za1 is matched with the point P2M1 located in the zone Zr1.

By convention, Ci denotes the set of correspondences made for the region Zri. The set C1 is representative of the local deformations in the region Zr1 undergone by the document to be analyzed with respect to the model shown in the reference image M1.

Optionally, the image processing module 4 further estimates a local geometric transformation specific to the region of interest Zr1, from the correspondences C1.
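The point matching of step E6 can be sketched as nearest-neighbour descriptor matching with a ratio test, one common method known from the state of the art; the patent does not specify a detector, so the toy descriptors below are assumptions for illustration:

```python
import numpy as np

def match_descriptors(desc_ref, desc_acq, ratio=0.75):
    """Match each reference descriptor to its nearest acquired descriptor,
    keeping only matches that pass the distance-ratio test."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_acq - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors (assumptions): reference point 0 resembles acquired point 1,
# and reference point 1 resembles acquired point 0.
desc_ref = np.array([[0.0, 0.0], [10.0, 10.0]])
desc_acq = np.array([[10.1, 10.0], [0.1, 0.0], [5.0, 5.0]])
correspondences = match_descriptors(desc_ref, desc_acq)
```

Each returned pair (i, j) plays the role of one element of a set Ci of correspondences between the reference image and the acquired image.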
By convention, Ti denotes the geometric transformation estimated for the region Zri. The transformation Ti is calculated for example from:
• an affine model determining an affine map connecting the points of interest of the region Zai of the acquired image A1 and the points of interest of the region Zri of the reference image M1, such as a translation, a rotation, a homothety or a scaling, or a combination of these transformations. Such a model makes it possible to preserve the alignments of points and the ratios of distances between points of the document;
• a homographic model determining a homographic map connecting the points of interest of the acquired image A1 and the points of interest of the reference image M1. Such a model makes it possible to map the plane of a planar surface seen by a camera to the plane of the same surface in another image;
• an interpolation model determined using an inverse distance weighting algorithm, as proposed in "Franke R., 1982, Scattered data interpolation: tests of some methods, Mathematics of Computation, 38(157), 181-200", and/or spline interpolation.

A transformation Ti provides more complete deformation information than a set Ci of point-of-interest correspondences alone.

The above steps E4, E5, E6 are repeated for each region of interest Zri of the model shown in the reference image M1. Sets of correspondences C2, C3, C4 and the associated transformations T2, T3, T4 are thus also obtained, as shown in Figure 6. In the case of complex deformations (such as the crumpling shown in Figure 1), the transformations T1 to T4 are of course different from one another.
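Under the affine model listed above, a local transformation Ti can be estimated from the correspondences Ci by least squares; a minimal sketch with synthetic points, where the "true" transformation used for the check is an assumption for illustration:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ~ A @ src + t from matched point pairs."""
    n = len(src)
    # Design matrix: each correspondence contributes two rows (x and y equations).
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src   # x' = a11*x + a12*y + tx
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src   # y' = a21*x + a22*y + ty
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return A, t

# Synthetic correspondences: four corners rotated by 90 degrees and translated.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A_true = np.array([[0.0, -1.0], [1.0, 0.0]])
t_true = np.array([2.0, 3.0])
dst = src @ A_true.T + t_true
A_est, t_est = fit_affine(src, dst)
```

With exact correspondences the least-squares solution recovers the rotation and translation; with noisy real correspondences it returns the best affine fit in the least-squares sense.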
From the correspondences C1 to C4, and if necessary from the corresponding transformations T1 to T4, the image processing module 4 estimates a global geometric transformation TM1A1 taking into account deformations of the structured document shown in the first acquired image relative to the first model, according to a method known from the state of the art (step E8). The transformation TM1A1 is global in the sense that it covers the complete model shown in the reference image and the complete document shown in the acquired image A1. For example, the transformation TM1A1 can be determined using an inverse distance weighting algorithm or spline interpolation.

The global transformation TM1A1 can be seen as a piecewise continuous function from a definition domain of dimension 2, corresponding at least to the whole of the model shown in the reference image M1, to an image domain also of dimension 2, corresponding at least to the entire structured document shown in the acquired image A1. The definition domain is the set of pixels of the reference image M1. The image of each pixel of the image M1 under this transformation TM1A1 is a pixel of the acquired image A1. The calculation applied by this global transformation TM1A1 differs depending on the location of the pixel of the reference image considered. For example, the global transformation uses the local transformation Ti to transform a pixel located in the region of interest Zri. On the other hand, a pixel located outside any region of interest Zri shown in the reference image is transformed by the transformation TM1A1, for example by means of the local transformation Ti of the region Zri closest to this pixel.
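One way to blend local transformations into such a global transformation is inverse distance weighting of the local results, in the spirit of the interpolation mentioned above; a simplified sketch in which the region centres and local affine transforms are illustrative assumptions:

```python
import numpy as np

def idw_transform(p, centers, transforms, power=2.0, eps=1e-9):
    """Transform pixel p by blending local affine transforms (A, t) with
    inverse-distance weights relative to region centres; a pixel at a
    region centre is dominated by that region's local transform."""
    d = np.linalg.norm(centers - p, axis=1)
    w = 1.0 / (d ** power + eps)
    w /= w.sum()
    out = np.zeros(2)
    for wi, (A, t) in zip(w, transforms):
        out += wi * (A @ p + t)
    return out

# Two illustrative regions: one pure translation (+1, 0), one identity.
centers = np.array([[0.0, 0.0], [100.0, 0.0]])
transforms = [(np.eye(2), np.array([1.0, 0.0])),
              (np.eye(2), np.zeros(2))]
# A pixel at the first region's centre is moved almost exactly by (+1, 0).
result = idw_transform(np.array([0.0, 0.0]), centers, transforms)
```

The weighting makes the global map vary continuously between regions while reducing to the nearest local transformation at each region centre, matching the piecewise behaviour described above.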
The geometric transformation TM1A1 characterizes deformations such as creases, which are much more complex than the simple deformations due to the perspective induced by a bad positioning of the processed document, for example in a plane inclined relative to the image capture plane, as is the case in the method described in document WO2011/058418. We will see below that other global geometric transformations are estimated on the basis of pairs of different images showing the structured document or a model associated with it. By convention, we call "TXiYj" a global geometric transformation taking into account deformations of a document shown in the image "Yj" compared to a document shown in the image "Xi". The image processing module 4 then determines at least one region to be analyzed in the first acquired image A1, by projection of at least one reference region of the reference image M1 by means of the first transformation TM1A1 (step E9). A reference region is for example one of the regions of interest of the reference image M1. The image processing module 4 then analyzes the content of the region determined during step E9 according to a known method (step E10): character recognition (OCR), etc. It can be seen that the personalized regions of interest Za10 and Za11 shown in the acquired image A1 have not been used so far. To take advantage of these personalized regions of interest, a second image A2 showing the same structured document is acquired by the image acquisition module 2 (step E1'). The second image A2 can be an image of the same type as the first image (therefore an image acquired under the same lighting conditions as the image A1), but in this case showing the document from a different angle of view.
This different angle of view can be obtained by moving a lens of the acquisition module between the acquisitions of the images A1 and A2 (this displacement being natural when the acquisition module is embedded in a portable mobile terminal). Alternatively, the second image A2 may be of a type different from that of the first acquired image A1 (infrared or UV). In this case, the second image may show the document from an angle of view identical to or different from that of the image A1. In what follows, we assume that the image A2 is an infrared image. The second acquired image A2 is stored in the memory 14 of the image processing module 4. The image processing module 4 extracts points of interest in the second acquired image A2 (step E11), according to the same method as that used during step E5. It is possible that points of interest contained in the regions of interest Za10 or Za11 of the first acquired image were extracted during step E5. However, these points of interest were not matched with points of interest of the reference image M1, because these regions have personalized content. The image processing module 4 matches points of interest extracted in the second acquired image A2 with those points of interest extracted in the first acquired image A1 that remained unused (step E12). In other words, it is possible that at least one of the points of interest of the image A1 used here is different from each of the points of interest of the same image A1 matched with a point of the reference image M1. In the example illustrated in FIG. 7, the point P3A1 located in the zone Za11 shown in image A1 is matched with the point P4A2 located in the zone Za11 shown in image A2. Furthermore, matching sets C10, C11 corresponding to the regions Za10, Za11, respectively, are obtained between the images A1 and A2. These two sets are based on points which were not extracted during step E5. In addition, the matching set C20 for the zone Za2 is obtained. To constitute this set, points extracted during step E6 may have been reused.
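The matching of points of interest between two acquired images can be performed, for example, by nearest-neighbour comparison of local descriptors. The following sketch is a generic ratio-test matcher, not the patent's specific method; the descriptor values are invented toy data:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.75):
    """Nearest-neighbour descriptor matching with a ratio test (sketch).

    desc1, desc2: (n, d) arrays of descriptors computed around the points
    of interest of images A1 and A2. Returns a list of index pairs (i, j)
    forming a correspondence set such as C10, C11 or C20. A match is kept
    only if its best distance is clearly smaller than the second best.
    """
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dist)
        best, second = order[0], order[1]
        if dist[best] < ratio * dist[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors: each point of A1 has one clear counterpart in A2.
desc_a1 = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
desc_a2 = np.array([[5.1, 5.0], [0.1, 0.0], [1.0, 1.1]])
pairs = match_descriptors(desc_a1, desc_a2)
```

With these values the matcher returns [(0, 1), (1, 2), (2, 0)]; in practice detectors and descriptors such as those of an ORB or SIFT pipeline would feed this step.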
In the same way as during step E6, the image processing module 4 estimates, from the correspondences made in the previous step E12, a geometric transformation TA1A2 taking into account deformations of the document shown in the second acquired image A2 relative to the document shown in the first acquired image A1. This makes it possible to characterize the deformations undergone by the regions of interest of the structured document with personalized content, which could not be done by means of the reference image M1, which only identifies regions of interest of predetermined content. In particular, the following data are taken into account to estimate this geometric transformation TA1A2:
• the sets C10, C11, C20,
• optionally the corresponding local geometric transformations T10, T11, T20, calculated in accordance with the method described above and shown in FIG. 7.
The image processing module 4 then estimates a geometric transformation taking into account deformations of the document shown in one of the two acquired images A1 or A2, called the target image, relative to the first model, this transformation depending on the second transformation TA1A2 (step E13). The image A2 is used as the target image during step E13. The transformation estimated during step E13 is the composition TM1A2 of the geometric transformations TM1A1 and TA1A2:
TM1A2 = TM1A1 ∘ TA1A2
The image processing module 4 then determines at least one region to be analyzed in the target image A2 by projection of the same reference region as that determined during step E9, by means of the geometric transformation TM1A2 estimated during step E13 (step E14). The image processing module 4 then analyzes the content of the region determined in the image A2 (step E15), for example in the same way as during step E10. The analyses of the acquired images A1, A2 (steps E10, E15) can comprise the same processing, for example a recognition of characters (OCR) or of predetermined patterns, known per se.
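When the transformations are represented by 3x3 homogeneous matrices (for example when TA1A2 is a homography), the composition TM1A2 = TM1A1 ∘ TA1A2 reduces to a matrix product. An illustrative sketch with invented matrix values:

```python
import numpy as np

# Illustrative homogeneous 3x3 matrices (values not taken from the
# patent): TM1A1 maps a pixel of M1 to A1, TA1A2 maps a pixel of A1 to A2.
TM1A1 = np.array([[1.0, 0.0, 10.0],
                  [0.0, 1.0, 20.0],
                  [0.0, 0.0, 1.0]])
TA1A2 = np.array([[2.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])

# Applying TM1A1 first and TA1A2 second corresponds, with column
# vectors, to left-multiplying by TA1A2:
TM1A2 = TA1A2 @ TM1A1

p_m1 = np.array([3.0, 4.0, 1.0])   # a pixel of the reference image M1
p_a2 = TM1A2 @ p_m1
p_a2 = p_a2 / p_a2[2]              # back to inhomogeneous coordinates
```

Here the pixel (3, 4) of M1 is translated to (13, 24) in A1 and scaled to (26, 48) in A2; the division by the last homogeneous coordinate matters when TA1A2 is a genuine (non-affine) homography.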
The results of the analyses of the different acquired images A1, A2 are then combined in a merging step (step E18). For example, the fusion E18 includes an authentication of the structured document, the result of which depends on the analyses of the acquired images A1, A2. Typically, when the reference region is a region containing a security pattern selectively revealed by infrared illumination of the structured document, the authentication succeeds provided that this security pattern is detected in the infrared image A2 during the analysis E15, but not detected in the visible image A1 during the analysis E10: the structured document is then considered to be authentic. Otherwise, the authentication fails: the structured document is then considered to be inauthentic. Alternatively or additionally, the fusion includes an arbitration between characters detected during the analyses E10, E15. This can be implemented when the structured document comprises a character string visible in each of the acquired images A1, A2. Suppose for example that the image processing module 4 detects in the image A1 the character string ABRUCADABRA, and in the image A2 the character string ABRACADABRU, while the character string which actually appears in the document shown in these images is ABRACADABRA. A detection confidence index is produced for each character. For example, the final "A" detected in image A1 is associated with a confidence index greater than the confidence index associated with the final "U" detected in image A2; the "A" is therefore retained. Similarly, the "U" in the fourth position of the character string detected in image A1 is associated with a confidence index lower than the confidence index associated with the character "A" in the same position detected in image A2: the "A" in the fourth position of the character string from A2 is retained. Here, the merging step E18 has the effect of consolidating the results of the two analyses synergistically.
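The character-level arbitration described above can be sketched as follows (purely illustrative; the confidence values are invented and the tie-breaking rule, keeping the first image's character on equal confidence, is an assumption):

```python
def arbitrate(chars_a1, chars_a2):
    """Character-level arbitration between two OCR results (sketch).

    Each argument is a list of (character, confidence) pairs of equal
    length. For every position, the character with the higher confidence
    index is retained, as in the ABRACADABRA example; ties keep the
    character from the first image (an assumed convention).
    """
    merged = []
    for (c1, s1), (c2, s2) in zip(chars_a1, chars_a2):
        merged.append(c1 if s1 >= s2 else c2)
    return "".join(merged)

# Shortened toy example: A1 read "ABRUA", A2 read "ABRAU", with
# per-character confidence indices produced by the OCR engine.
a1 = [("A", 0.9), ("B", 0.9), ("R", 0.9), ("U", 0.3), ("A", 0.8)]
a2 = [("A", 0.8), ("B", 0.8), ("R", 0.8), ("A", 0.9), ("U", 0.2)]
result = arbitrate(a1, a2)
```

Position 4 takes the "A" from A2 (0.9 > 0.3) and the final position keeps the "A" from A1 (0.8 > 0.2), yielding "ABRAA": each image corrects the other's weak detections.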
In the embodiment of the method presented above, a single reference image M1 was advantageously used to determine the same reference region to be analyzed in the acquired images A1, A2. In this regard, it should be noted that the estimation of the transformation TA1A2 between the acquired images A1 and A2 is relatively simple to implement, and that there is no direct matching of points of interest between the images M1 and A2, as shown diagrammatically in FIG. 8. In fact, the transformation TA1A2 can be a homography whose approximation can be predetermined as a function of the way in which the image acquisition module 2 is designed. Therefore, the cumulative processing time needed to estimate the transformations TM1A1 and TM1A2 is reduced. By way of example, FIG. 9 shows an example of a reference image M1 showing a structured document of the passport type, and FIG. 10 shows an example of an image A2 acquired in the infrared domain. It should be noted that the non-personalized fields do not appear in the acquired image A2. For this reason, the transformation TM1A2 cannot be satisfactorily estimated directly on the basis of the image A2.
C.2. Second embodiment of the method using a reference image and three acquired images
The steps of a second embodiment of the method are shown in the flow diagram of FIG. 11. In this second embodiment, at least a third image A3 can also be acquired by the image acquisition module 2 (step E1''). The third image A3 is for example an image of UV type, constituting a source of additional information on the deformations of the document to be analyzed. In this second embodiment, the images A1 and A3 are acquired by means of the same objective at close instants; it is assumed that the two images A1 and A3 show the structured document from the same angle of view.
Consequently, the image processing module 4 then determines at least one region to be analyzed in the image A3 by projection of the same reference region as that used during the steps E9 and E14, by means of the estimated geometric transformation TM1A1 (step E16). As a variant, if the images A1 and A3 do not show the structured document from the same angle of view, the processing steps implemented by the image processing module 4 on the basis of the image A2 can be repeated for the third image A3, so as to obtain a geometric transformation TM1A3 resulting from the composition of the transformation TM1A1 already discussed and of a transformation TA1A3 between the acquired images A1 and A3. The image processing module 4 can then determine at least one region to be analyzed in the image A3 by projection of the same reference region as that used during the steps E9 and E14, by means of the estimated geometric transformation TM1A3. The image processing module 4 then analyzes the content of the determined region (step E17), for example in the same way as during step E10 or E15. The merging step E18 is then carried out on the basis not of two images but of the three images A1, A2 and A3, according to the methods described above. As in the first embodiment, the method according to the second embodiment uses a single reference image M1 to determine the same reference region to be analyzed in the acquired images A1, A2. In addition, the UV image constitutes a source of additional information making it possible to improve the fusion processing carried out during step E18.
C.3. Third embodiment of the method using two reference images and two acquired images
In a third embodiment, the general principle of which is schematically illustrated in FIG. 12, another reference image M2 is used in addition to the reference image M1. Furthermore, the image A1 is used as the target image.
The second reference image M2 is of the same type as the second acquired image A2 (of the infrared type in the preceding examples). The second reference image M2 is for example acquired during a preliminary step using the acquisition module, under the same lighting conditions as those used for the acquisition of the image A2. Like the first reference image M1, the second reference image M2 shows a model of the structured document having regions of interest of predetermined content. However, these regions of interest are not necessarily the same as those shown in the reference image M1. The model shown in the image M2 therefore relates to the same structured document, but may be different from the model shown in the image M1 in terms of content. The second reference image M2 comprises at least one region of interest of predetermined content not present in the image M1. Such a region, for example, has content only revealed by infrared, such as a security pattern. For example, information may have been printed or written on the document to be analyzed using visible but optically variable ink, and other information using infrared ink not visible to the naked eye but revealed in an infrared image, such an ink being traditionally optically more stable than visible ink. Conversely, regions of interest may appear in the image A1 in the visible domain, but not in the infrared image A2. The models shown in the reference images M1 and M2 can be shown from different angles of view. In this case, it is assumed that a predetermined geometric transformation TM1M2, taking into account deformations of the model shown in the image M2 relative to the model shown in the image M1, has been previously stored in the memory. For example, when the two models shown in the images M1 and M2 are not deformed, this transformation TM1M2 is simply representative of a change of angle of view between the images M1 and M2.
The image processing module 4 estimates a geometric transformation TM1A1' taking into account deformations of the document shown in the acquired image A1 compared to the first model M1, this transformation depending on the second transformation TA1A2. More precisely, the transformation TM1A1' is obtained by composition of the predetermined transformation TM1M2, of the transformation TM2A2, and of the inverse of the transformation TA1A2 (an inverse which we denote TA2A1):
TM1A1' = TM1M2 ∘ TM2A2 ∘ TA2A1
The transformation TM2A2 is estimated from correspondences of points of interest of the image M2 (such as the point P6M2 represented in FIG. 12) with points of interest extracted in the acquired image A2 (such as the point P5A2 represented in FIG. 12), in the same way as for the transformation TM1A1. The points P5A2 and the points P4A2 (used to obtain the transformation TA1A2) can be the same or different. Like the transformation TM1A1, the transformation TM1A1' thus estimated takes into account deformations of the structured document shown in the first acquired image A1 compared to the model shown in the first reference image M1. However, as illustrated diagrammatically in FIG. 12, this transformation TM1A1' gives deformation information complementary to that given by the transformation TM1A1, owing to the fact that the correspondences of points of interest between the images A1 and A2 are not the same as the correspondences of points of interest between the images M1 and A1. In other words, in this embodiment, the images M2 and A2 have been used to characterize more precisely the deformations of the structured document as shown in the image A1 compared to the model shown in the reference image M1. C.4.
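With homogeneous 3x3 matrices, the composition TM1A1' = TM1M2 ∘ TM2A2 ∘ TA2A1 likewise reduces to matrix products, the inverse TA2A1 being obtained by matrix inversion. An illustrative sketch with invented translation matrices (values are not from the patent):

```python
import numpy as np

# Illustrative homogeneous matrices: TM1M2 maps M1 to M2, TM2A2 maps
# M2 to A2, and TA1A2 maps A1 to A2. All are simple translations here.
TM1M2 = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
TM2A2 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 3.0], [0.0, 0.0, 1.0]])
TA1A2 = np.array([[1.0, 0.0, 7.0], [0.0, 1.0, 7.0], [0.0, 0.0, 1.0]])

# The inverse transformation TA2A1, mapping A2 back to A1.
TA2A1 = np.linalg.inv(TA1A2)

# Apply TM1M2 first, then TM2A2, then TA2A1: with column vectors this
# is a left-multiplication chain.
TM1A1p = TA2A1 @ TM2A2 @ TM1M2

p_m1 = np.array([0.0, 0.0, 1.0])   # a pixel of M1
p_a1 = TM1A1p @ p_m1
```

The origin of M1 is carried to (2, 0) in M2, to (2, 3) in A2, and back to (-5, -4) in A1; a real TA1A2 homography would be inverted the same way, provided it is non-degenerate.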
Other embodiments of the method using several reference images and several acquired images
The characteristics described in parts C.2 and C.3 can be combined within the method; in particular, it can be envisaged that the correspondences made between the images A1 and A2 are used not only to implement the analysis of the image A2 during step E15, by means of the transformation TM1A2 (see part C.2), but also to improve the analysis of the image A1 carried out during step E10, by calculating the transformation TM1A1' (see part C.3). In the two cases, the image considered as the target image is different. Ultimately, the additional deformation information obtained from the image A2 can not only be used during step E14 to locate with precision a region of interest in the image A2, but can also serve to improve the quality of the analysis E10 carried out on the visible image A1. Furthermore, the principle described in part C.3 can be generalized to any acquired image Ai other than the image A1, on the assumption that a reference image Mi other than the reference image M1 is stored in the memory 14. A transformation TM1A1(i) can then be obtained, taking into account deformations of the structured document shown in the acquired image A1 with respect to the model shown in the reference image M1, in the following manner:
TM1A1(i) = TM1Mi ∘ TMiAi ∘ TAiA1
Consequently, each additional transformation TM1A1(i) can be used in addition to the transformation TM1A1 during step E9 of determining one or more regions of the image A1 with a view to analyzing their content during step E10. This principle can also be generalized to any acquired image Aj considered as a target image.
The image processing module 4 can thus determine at least one region to be analyzed in the target image Aj by projection of a reference region of the reference image Mj by means of a geometric transformation TMjAj, but also by means of at least one other geometric transformation obtained in the following way:
TMjAj(i) = TMjMi ∘ TMiAi ∘ TAiAj
where i and j are different.
Claims (15)
1. Method for analyzing the content of a structured document liable to be deformed, implemented from a first acquired image (A1) (E1) and a second acquired image (A2) (E1') each showing the document, and comprising steps of:
• matching (E6) of first points of interest (P1A1) extracted in the first acquired image (A1) with second points of interest (P2M1) of a first reference image (M1) showing a first model of the document,
• estimation (E8) of a first geometric transformation (TM1A1) taking into account deformations of the structured document shown in the first acquired image compared to the first model, from the correspondences made in the previous step,
• determination (E9) of at least a first region to be analyzed in the first acquired image (A1), by projection of at least one reference region of the first reference image by means of the first transformation (TM1A1),
• analysis (E10) of the content of the first determined region,
the method being characterized in that it further comprises steps of:
• matching (E12) of third points of interest (P3A2) extracted in the second acquired image (A2) with fourth points of interest (P4A1) extracted in the first acquired image (A1),
• estimation (E13), from the correspondences made in the previous step (E12), of a second geometric transformation (TA1A2) taking into account deformations of the document shown in the second acquired image (A2) compared to the document shown in the first acquired image (A1),
• estimation of a third geometric transformation taking into account deformations of the document shown in one of the two acquired images (A1, A2), called the target image, relative to the first model shown in the first reference image (M1), the third transformation depending on the second transformation (TA1A2),
• determination of at least one second region to be analyzed in the target image (A1 or A2) by projection of the reference region of the first reference image by means of the third geometric transformation,
• analysis (E10 or E15) of the content of the second determined region.
2. Method according to one of the preceding claims, in which:
• the first image (A1) is acquired (E1) while the document is lit by light radiation in a first wavelength band,
• the second image (A2) is acquired (E1') while the document is lit by light radiation in a second wavelength band,
• the second wavelength band is different from the first wavelength band, or the second image is acquired after the first image.
3. Method according to the preceding claim, in which:
• the first wavelength band is in the visible domain, and
• the second wavelength band is in the infrared domain, for example the near infrared domain, or in the ultraviolet domain.
4. Method according to the preceding claim, also implemented from a third image (A3) acquired (E1'') while the document is lit by light radiation in a third wavelength band different from the second wavelength band, in which:
• the second wavelength band is in the infrared domain,
• the third wavelength band is in the ultraviolet domain,
and comprising steps of:
• determination (E16) of at least a third region to be analyzed in the third image (A3),
• analysis (E17) of the content of the third determined region.
5. Method according to one of the preceding claims, in which the acquired images (A1, A2, A3) show the document from different angles of view.
6. Method according to the preceding claim, in which the acquired images (A1, A2, A3) are acquired (E1, E1', E1'') successively by means of the same objective, or else are acquired by distinct objectives.
7. Method according to one of the preceding claims, further comprising:
• for a plurality of predetermined regions of interest of the first reference image, estimation of a local geometric transformation specific to the region of interest, from the second points of interest (P2M1) located in the region of interest and the first points of interest (P1A1) with which these second points of interest (P2M1) have been matched,
and in which the first geometric transformation is estimated (E8) also from the local geometric transformations specific to the regions of interest of the first reference image.
8. Method according to one of the preceding claims, further comprising a fusion (E18) of the results provided by each analysis step.
9. Method according to the preceding claim, in which the fusion (E18) comprises an authentication of the document, in which the authentication succeeds provided that information is found during the analysis of at least one of the acquired images (A1, A2, A3) and not found during the analysis of at least one other of the acquired images (A1, A2, A3).
10. Method according to the preceding claim, in which each analysis (E10, E15, E17) comprises a recognition of characters, and in which the fusion (E18) comprises an arbitration between the characters recognized during the analysis steps.
11. Method according to one of the preceding claims, in which the target image is the second acquired image (A2) and in which the third transformation is a composition of the first geometric transformation (TM1A1) and of the second geometric transformation (TA1A2).
12. Method according to one of claims 1 to 10, comprising steps of:
• matching of fifth points of interest (P5A2) extracted in the second acquired image (A2) with sixth points of interest (P6M2) of a second reference image (M2) showing a second model of the document,
• estimation, from the correspondences made in the previous step, of a fourth geometric transformation (TM2A2) taking into account deformations of the document shown in the second acquired image compared to the second model,
and in which:
• the target image is the first acquired image (A1),
• the third transformation is a composition of the inverse (TA2A1) of the second transformation (TA1A2), of the fourth transformation (TM2A2), and of a predetermined fifth geometric transformation (TM1M2) taking into account deformations of the second model compared to the first model.
13. Method according to the preceding claim, further comprising a step of:
• for a plurality of regions of interest of the second reference image, estimation of a local geometric transformation specific to the region of interest, from the sixth points of interest (P6M2) located in the region of interest and the fifth points of interest (P5A2) with which these sixth points of interest (P6M2) have been matched,
in which the fourth transformation (TM2A2) is estimated also from the local geometric transformations specific to the regions of interest of the second reference image.
14. Computer program product comprising code instructions for the execution of an analysis method according to one of the preceding claims, when this program is executed by a processor.
15. Device for analyzing the content of a structured document liable to be deformed, the device comprising an interface for receiving a first image (A1) showing the document and a second image (A2) showing the document, and an image processing module (4) configured to:
• match first points of interest (P1A1) extracted in the first acquired image (A1) with second points of interest (P2M1) of a first reference image (M1) showing a first model of the document,
• estimate a first geometric transformation (TM1A1) taking into account deformations of the structured document shown in the first acquired image compared to the first model, from the correspondences made in the previous step,
• determine at least a first region to be analyzed in the first acquired image (A1), by projection of at least one reference region of the first reference image by means of the first transformation (TM1A1),
• analyze the content of the first determined region,
the device being characterized in that the image processing module (4) is further configured to:
• match third points of interest (P3A2) extracted in the second acquired image (A2) with fourth points of interest (P4A1) extracted in the first acquired image (A1),
• estimate, from the correspondences made in the previous step, a second geometric transformation (TA1A2) taking into account deformations of the document shown in the second acquired image (A2) compared to the document shown in the first acquired image (A1),
• estimate a third geometric transformation taking into account deformations of the document shown in one of the two acquired images (A1, A2), called the target image, with respect to the first model, the third transformation depending on the second transformation (TA1A2),
• determine at least a second region to be analyzed in the target image (A1 or A2) by projection of the reference region of the first reference image by means of the third geometric transformation,
• analyze the content of the second determined region.
Similar documents:
Publication number | Publication date | Patent title
EP3382605A1 | 2018-10-03 | Method for analysing a deformable structured document
EP2689400B1 | 2021-04-28 | Method and system to authenticate security documents
FR2932588A1 | 2009-12-18 | Method and device for reading a physical characteristic on an object
CA2908210A1 | 2016-04-10 | Identification process for a sign on a deformed image
EP3388976A1 | 2018-10-17 | Method for detecting fraud
EP3401837A1 | 2018-11-14 | Device for capturing fingerprints
FR3053501A1 | 2018-01-05 | Method of authenticating documents using a mobile telecommunication terminal
FR3086884A1 | 2020-04-10 | Method for detection of documentary fraud
EP3210166B1 | 2020-04-29 | Method of comparing digital images
FR3068807A1 | 2019-01-11 | Method for processing an image showing a structured document comprising a visual inspection zone from an automatic reading zone or barcode type
FR3055446A1 | 2018-03-02 | Method and system for generating the signature of a surface
EP3567521A1 | 2019-11-13 | Iris biometric recognition method
FR3053500A1 | 2018-01-05 | Method for detecting fraud of an iris recognition system
CA2982878A1 | 2016-10-20 | Method for verifying a security device comprising a signature
EP2073175A1 | 2009-06-24 | Secure identification medium and method of securing such a medium
FR2986890A1 | 2013-08-16 | Method for inserting a digital mark in an image, and corresponding method for detecting a digital mark in an image to be analyzed
EP3153991A1 | 2017-04-12 | Method for analysing a content of at least one image of a deformed structured document
RU2739059C1 | 2020-12-21 | Authentication method of marking
EP3579141A1 | 2019-12-11 | Method for processing a fingerprint impression image
FR3047832B1 | 2019-09-27 | Method for determining a color value of an object in an image
JP2019204293A | 2019-11-28 | Forgery determining method, program, recording medium and forgery determining device
WO2018142054A1 | 2018-08-09 | Method for verifying the authenticity of a sensitive product
EP3459755A1 | 2019-03-27 | Method for inserting guilloche patterns, method for extracting guilloche patterns, authentication method for said guilloche patterns and implementation devices
WO2020144225A1 | 2020-07-16 | Method for processing digital images
FR3106428A1 | 2021-07-23 | Process for processing a candidate image
Family patents:
Publication number | Publication date
US10482324B2 | 2019-11-19
FR3064782B1 | 2019-04-05
US20180285639A1 | 2018-10-04
AU2018202324A1 | 2018-10-18
CA3000153A1 | 2018-09-30
EP3382605A1 | 2018-10-03
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title
US20050078851A1 | 2003-09-30 | 2005-04-14 | Jones Robert L. | Multi-channel digital watermarking
US20090059316A1 | 2007-08-29 | 2009-03-05 | Scientific Games International, Inc. | Enhanced scanner design
DE102013101587A1 | 2013-02-18 | 2014-08-21 | Bundesdruckerei GmbH | Method for checking the authenticity of an identification document
US20160350592A1 | 2013-09-27 | 2016-12-01 | Kofax, Inc. | Content-based detection and three dimensional geometric reconstruction of objects in image and video data
US6606421B1 | 2000-05-25 | 2003-08-12 | Hewlett-Packard Development Company, L.P. | Geometric deformation correction method and system for dot pattern images
US7190834B2 | 2003-07-22 | 2007-03-13 | Cognex Technology and Investment Corporation | Methods for finding and characterizing a deformed pattern in an image
US8144921B2 | 2007-07-11 | 2012-03-27 | Ricoh Co., Ltd. | Information retrieval using invisible junctions and geometric constraints
FR2952218B1 | 2009-10-30 | 2012-03-30 | Sagem Securite | Method and device for obtaining an image of a de-crumpled document from an image of this document when it is crumpled
EP2320390A1 | 2009-11-10 | 2011-05-11 | Icar Vision Systems, SL | Method and system for reading and validation of identity documents
US8472726B2 | 2011-01-07 | 2013-06-25 | Yuval Gronau | Document comparison and analysis
WO2012137214A1 | 2011-04-05 | 2012-10-11 | Hewlett-Packard Development Company, L.P. | Document registration
EP3428882B1 | 2012-10-26 | 2021-01-06 | Brainlab AG | Matching patient images and images of an anatomical atlas
RU2643465C2 | 2013-06-18 | 2018-02-01 | ABBYY Development LLC | Devices and methods using a hierarchically ordered data structure containing unparametric symbols for converting document images to electronic documents
US9922247B2 | 2013-12-18 | 2018-03-20 | ABBYY Development LLC | Comparing documents using a trusted source
RU2571378C2 | 2013-12-18 | 2015-12-20 | ABBYY Development LLC | Apparatus and method of searching for differences in documents
FR3027136B1 | 2014-10-10 | 2017-11-10 | Morpho | Method of identifying a sign on a deformed document
US10311556B1 | 2018-07-02 | 2019-06-04 | Capital One Services, Llc | Systems and methods for image data processing to remove deformations contained in documents
FR3086884B1 | 2018-10-09 | 2020-11-27 | Idemia Identity & Security France | Documentary fraud detection method
CN109829437A | 2019-02-01 | 2019-05-31 | Beijing Megvii Technology Co., Ltd. | Image processing method, text recognition method, device and electronic system
FR3104771B1 | 2019-12-11 | 2021-12-24 | Idemia Identity & Security France | Method, device and computer program product for decoding a game slip
FR3104772A1 | 2019-12-13 | 2021-06-18 | Idemia Identity & Security France | Document analysis terminal and document analysis process
Legal status:
2018-02-19 | PLFP | Fee payment | Year of fee payment: 2
2018-10-05 | PLSC | Publication of the preliminary search report | Effective date: 20181005
2020-02-20 | PLFP | Fee payment | Year of fee payment: 4
2020-05-01 | CA | Change of address | Effective date: 20200325
2020-05-01 | CD | Change of name or company name | Owner name: IDEMIA IDENTITY AND SECURITY, FR | Effective date: 20200325
2021-02-19 | PLFP | Fee payment | Year of fee payment: 5
2022-02-21 | PLFP | Fee payment | Year of fee payment: 6
Priority:
Application number | Filing date | Patent title
FR1752725 | 2017-03-30 |
FR1752725A / FR3064782B1 | 2017-03-30 | Method for analyzing a structural document that can be deformed