Patent abstract:
A computer-implemented method, apparatus, and non-transitory computer-readable medium allow a user to paint an uploaded image. A user-facing front-end screen of an illustrative digital painting application includes a paint bucket tool that allows a user to apply a selected color to a selected area of an uploaded digital image, such as a wall in a room; a paint brush tool to fill in missed areas; an eraser tool for removing misapplied color; mask tools for masking off areas outside those selected; and a tolerance slider tool to help properly fill in painted areas. Improved image preprocessing methods allow for better definition of the areas of the image to be painted the same color.
Publication number: BR102016005244A2
Application number: R102016005244-0
Filing date: 2016-03-09
Publication date: 2020-09-24
Inventors: Damien Reynolds; Douglas Milsom; Vincent Giosa
Applicant: Behr Process Corporation
IPC main class:
Patent description:

[0001] This application claims the priority benefit of US Provisional Patent Application No. 62/134,250, filed March 17, 2015, and entitled "Paint Your Place Application for Optimizing Digital Painting of an Image", the content of which is incorporated herein by this reference in its entirety.
FIELD
[0002] The subject disclosure relates to automated color selection and coordination systems and, more particularly, to such a system that allows a user to visualize how a selected color would appear in an uploaded photographic image of a room or other paintable area. The subject disclosure also relates to methods and devices for digitally painting such an uploaded image, as well as methods and devices for optimizing such procedures.
RELATED ART
[0003] Automated color selection and coordination systems are disclosed, for example, in US Patent Publication No. 2012/0062583, published in 2012, entitled "Data Driven Color Coordinator", assigned to Behr Process Corporation, Santa Ana, California.
SUMMARY
[0004] According to an illustrative embodiment, a user-facing front-end screen of an illustrative digital image painting application includes a paint bucket tool, a paint brush tool, an eraser tool, and a mask tool. Illustratively, the paint bucket tool allows a user to apply a selected color to a selected area of an uploaded image, such as the wall of a room, thus simulating the painting of the room with the selected color and allowing the user to view, for example, how the user's own room would appear if painted with the selected color. In an illustrative embodiment, the paint brush tool is used to fill in an area that was missed during application of the selected color to the first area, the eraser tool is used to remove color that spilled into an unwanted area during application of the selected color, and the mask tool is used to mask off an area that should not be painted. In an illustrative embodiment, two mask tools can be employed to apply a linear mask or a polygonal mask.
[0005] In an illustrative embodiment, a tolerance slider tool is also provided to assist in properly filling in the painted areas. The tolerance slider tool allows the user to increase or decrease the painted area. In one embodiment, the tolerance slider tool display comprises a darkened area within a right triangle that can be pulled to the right or left to increase or decrease the painted area.
[0006] Another aspect of the present disclosure concerns a method of preprocessing an uploaded digital image before "painting" the image, in which a bilateral smoothing algorithm is performed in order to remove noise from flat surfaces in the image while maintaining the integrity of edges and color differences. Sobel and Canny edge detection algorithms are then run against the image, and the image data resulting from running the Canny algorithm is stored separately from the image data resulting from running the Sobel algorithm. A flood fill algorithm is then performed on the Sobel image data to segment the image into areas or segments that have the same color, in which the flood fill algorithm is modified to take into account the natural gradient of the Sobel algorithm, thus allowing the definition of one or more tolerances for defining the edges of the image. If an edge of a segment determined by applying the flood fill algorithm and the Sobel algorithm is close to a Canny edge, the paint color assigned to that segment is pulled to the Canny edge to give crisp edges.
[0007] According to an illustrative embodiment, the pixel color of an area identified by segmentation is averaged over the area as a whole. The area's average pixel color is then iterated over previously found segments to determine whether the area has the same or a similar average pixel color as a previously found segment, and if it does, the area is associated with the previously found color. All associated segments can be averaged together to set a base luminance for the same color across multiple segments, and the segment under analysis, with its overall average color, can then be stored for future calculation.
DESCRIPTION OF THE DRAWINGS
[0008] Figure 1 is a block diagram of an illustrative system for implementing a Paint Your Place application according to an illustrative embodiment;
[0009] Figure 2 is a block diagram of an alternative illustrative system for implementing a Paint Your Place application according to an illustrative embodiment;
[0010] Figure 3 is an illustrative screen of a customer-facing paint selection and coordination system according to an illustrative embodiment;
[0011] Figure 4 is a second illustrative screen of a customer-facing paint selection and coordination system according to an illustrative embodiment;
[0012] Figure 5 is a third illustrative screen of a customer-facing paint selection and coordination system according to an illustrative embodiment;
[0013] Figure 6 is a fourth illustrative screen of a customer-facing paint selection and coordination system according to an illustrative embodiment;
[0014] Figure 7 is a fifth illustrative screen of a customer-facing paint selection and coordination system according to an illustrative embodiment;
[0015] Figure 8 is a sixth illustrative screen of a customer-facing paint selection and coordination system according to an illustrative embodiment;
[0016] Figure 9 is a seventh illustrative screen of a customer-facing paint selection and coordination system according to an illustrative embodiment;
[0017] Figure 10 is a screen that allows the user to select an image for painting;
[0018] Figure 11 is a screen that allows a user to adjust the placement of a selected image;
[0019] Figure 12 is a screen that allows a user to see what percentage of a selected image has been loaded;
[0020] Figure 13 is a screen that allows the user to select a paint bucket feature to paint the selected image;
[0021] Figure 14 is a screen that illustrates the use of the paint bucket tool and an associated slider tool;
[0022] Figure 15 is another screen that illustrates the use of the paint bucket tool and an associated slider tool;
[0023] Figure 16 is a screen that illustrates the use of a brush tool;
[0024] Figure 17 is another screen that illustrates the use of a brush tool;
[0025] Figure 18 is a screen that illustrates the use of an eraser tool;
[0026] Figure 19 is another screen that illustrates the use of an eraser tool;
[0027] Figure 20 is a screen that illustrates the use of a first mask tool;
[0028] Figure 21 is another screen that illustrates the use of a first mask tool;
[0029] Figure 22 is a screen that illustrates the use of a second "polygonal" mask tool;
[0030] Figure 23 is another screen that illustrates the use of a second "polygonal" mask tool;
[0031] Figure 24 is another screen illustrating the use of a second "polygonal" mask tool;
[0032] Figure 25 is another screen illustrating the use of a second "polygonal" mask tool;
[0033] Figure 26 is a screen that illustrates the return to the original unpainted image;
[0034] Figure 27 is a screen that further illustrates the paint bucket tool;
[0035] Figure 28 is a screen that further illustrates the brush tool;
[0036] Figure 29 is an illustrative flowchart of preprocessing an image according to an illustrative embodiment;
[0037] Figure 30 shows an illustrative uploaded image;
[0038] Figure 31 shows the image of Figure 30 after applying a Sobel algorithm followed by dilation and erosion, according to an illustrative embodiment;
[0039] Figure 32 illustrates the results of applying a Canny algorithm to the image of Figure 30 according to an illustrative embodiment;
[0040] Figure 33 illustrates the segmentation of the image of Figure 30 according to an illustrative embodiment;
[0041] Figure 34 is a first portion of a flowchart that illustrates painting an image according to an illustrative embodiment;
[0042] Figure 35 is a second portion of the flowchart of Figure 34; and
[0043] Figure 36 is a third portion of the flowchart of Figure 34.
DETAILED DESCRIPTION
[0044] Figure 1 illustrates a block diagram of a system in which several remote computers 300 can access a paint color selection and coordination website 301, which in one embodiment can provide or download a Paint Your Place application to end users as described below. Site 301 can be coupled to the Internet 303 in order to provide access to a large number of remote terminals/computers 300, for example, at end users' home locations. Each remote computer 300 controls a display device 305, which may comprise, for example, one or more CRT or flat panel computer monitors or displays.
[0045] Site 301 may comprise a server engine 309 comprising one or more computers or servers, associated memory 317, and server software such as a server operating system and server application programs. In one embodiment, site 301 is arranged to store and transmit a plurality of related documents or web pages 311 in digital format, for example, such as HTML documents, and may also include a color database 315 in which color data is stored as described, for example, in US Patent No. 7,999,825, entitled "Color Selection and Coordination System", hereby incorporated by reference in its entirety. It will be appreciated that, in one embodiment, the computer-controlled display device transforms web pages in digital format into static and/or animated interactive visual images for an end user. Associated memory 317 may comprise a computer-readable digital storage medium or media, such as, for example, hard disk storage.
[0046] A user can interact with the website 301 through the Internet 303 or other communication medium or media through selection operations performed on screens displaying web pages presented to the user through the display device 305 of a remote computer 300. Such selection operations can be performed by, for example, a keyboard, mouse, track ball, touch screen, or other data entry means. In such a way, various links presented on the display device 305 can be selected by various point-and-click, point-and-touch, or other selection operations.
[0047] In various embodiments, remote computers 300 may comprise or be part of a computer terminal, a personal digital assistant (PDA), a cordless phone, a "smartphone", a laptop, desktop, or notebook computer, and/or the like. In various embodiments, the communication medium or media may comprise a local area network (LAN), a wide area network (WAN), a wireless network, an intranet, the Internet, and/or the like.
[0048] In one embodiment, the functionality of the website can be implemented in software stored on a computer-readable storage medium or media and executed by a suitable device, for example, such as one or more digital processors or computers, which may comprise part of a web server or other suitable device. In other embodiments, such software may be located on a personal computer or similar device that has a flat panel display, or another display device, at a user's location without the involvement of a server or the Internet. In that case, display screens are generated that can have the same content as web pages, so the terms "web page", "screen", "display", and similar terms are used here interchangeably. Illustrative display screens and functionality of an illustrative embodiment can be implemented in one or more application programs, which can be written in, for example, HTTP, PHP, MySQL, JavaScript, XMPP Server, Solr Server, LAMP technology stack, Java, Laszlo Presentation Server, or C++, and which can be run, for example, on Windows XP or another operating system. Various screens and features of the illustrative embodiments are described below.
[0049] Another illustrative embodiment of a site 401 for providing content to an end user as described below is shown in Figure 2. Site 401 employs first and second load balancers 403, 405, which communicate with a pair of web servers 407, 409, for example, Apache web servers. The web servers 407, 409 further communicate with five application servers (JBoss) 411, 413, 415, 417, 419, which are arranged to access a database comprising digital storage media and a database server 421. In addition, the application servers, for example, 411, can communicate via a load balancer 423 with first and second Autonomy servers 425, 427.
[0050] The operation of the system of Figure 2 can be illustrated as follows. The end user opens a browser on his or her computer, for example, 301, and enters a request to visit http://www.behr.com. This request reaches the load balancers in front of the two Apache web servers 407, 409. One of the load balancers, for example, 403, passes the request to one of the two Apache web servers 407, 409. The Apache web server, for example, 409, analyzes the request and determines whether it can be handled locally, that is, it checks to see whether the requested object exists in the server's document root. Any part of the request that can be fulfilled by the Apache server 409 is typically static content, that is, .png, .jpg, .swf, .css, .js, .html, .txt files residing in digital storage on the server 409. The portion of the request that cannot be served by the Apache server 409 is passed back to the JBoss server, for example 411: for example, configured context roots, dynamic content, and processing requests, such as a log-on event. The application server 411 then processes the portion of the request transmitted to it. If additional data is required from the database 421, for example, a username, password, or workbook, the application server 411 retrieves the data from the database 421. The application server 411 then sends the processed data back through the web server 409 to the client residing on the end user's computer 406, in this case, the web browser 408. The web browser 408 reassembles the data and renders the page in the browser, which causes its display on the display device 410 of the user's computer 406. The servers then wait for the next request.
[0051] In one embodiment, in response to the user's initial access, a website application (client) is transmitted to the user's computer, for example, 406, and is executed in the user's browser. In one embodiment, the website application is a SWF application that controls Flash player animation on the user's display, for example, the various features of animating in and out or fading in and out. The actual content of a given page is pulled dynamically from the server system in response to user selection ("clicking") operations. The web server serves the source data making up the XML code, which defines the active content to be displayed by the user's Flash player, together with the static content, for example, a "home page" image design, for example, in HTML format.
[0052] Thus, for example, when the user selects a home page, the website application accesses the server system, which provides, for example, a design image, hot spot locations, colors to display in connection with any hot spot functionality, and drop-down elements (menus), and instructs the SWF application what to build.
[0053] An illustrative embodiment 11 of a home page for a customer-facing paint color selection and coordination site is shown in Figure 3. As shown in Figure 3, a user has selected "Colors" at section 13 in the drop-down navigation menu 15 on the web page 11. Selecting the "Colors" link 13 reveals "Paint Colors", "Stain Colors", and "Mobile Apps" links 16, 17, 18.
[0054] Upon clicking the Paint Colors link 16, the user is taken to the ColorSmart visualization page of Figure 4, where the user can select a color with which to paint. More details on the structure and operation of this page and related pages are disclosed in US Patent Publication 2014/0075361 A1, entitled "Automated Color Selection Method and Apparatus with Compact Functionality", which is incorporated herein in its entirety by this reference.
[0055] After the user selects a color, for example, "Timeless Ruby" in the display of Figure 4, the user clicks the "Preview Paint Colors" link 19 in the display of Figure 5. The user is then taken to the screen of Figure 6, which provides an overlay 20 offering the possibility to choose from a list of pre-generated rooms, for example, 21, 22, 23, to which the user can decide to add the paint color.
[0056] The user then clicks the "My Custom Images" tab 27 on the upper right side of the overlay 20. By clicking the My Custom Images tab 27, the user sees the overlay change to an introduction 28 to the Paint Your Place app shown in Figure 7. The user clicks the "Continue" link 29 to enter a Paint Your Place procedure, allowing the user to upload an image, such as an image from a photograph.
[0057] If the user has uploaded images before, the uploaded images will be presented - see images 31, 32 in Figure 8. If no image has been uploaded, the user will only see the "upload photo" button 35, which is selected to start the photo upload procedure. When the user clicks the orange button 35 to upload a photo, an overlay 26 shown in Figure 9 opens, showing image files 39 on the user's computer and allowing the user to choose which image files to upload, following the procedure of Figure 10.
[0058] After the user has selected an image, the image is then loaded as shown in Figure 10. The screen of Figure 11 allows the user to make adjustments, such as moving the image 39, for example, by clicking and holding to grab the image and then dragging it, or rotating it. Once the user has settled on the desired positioning of the uploaded image 39, the user clicks the "Paint Your Place" button 40 located at the bottom right of the overlay to process and send the image to the Paint Your Place application described in detail below.
[0059] After clicking the Paint Your Place button 40 in Figure 11, the user sees a real-time loading bar 41, Figure 12. This bar 41 informs the user what percentage of the image 39 has been processed and sent to the Paint Your Place app. When the image 39 has finished processing, the user is taken to the screen of Figure 13 showing the user's image 39, the color(s) 45 that the user has selected, a number of color palettes, for example, 46, 47, each palette including the selected color 45 and three different colors that coordinate with it, and an overlay 48 prompting the user to click a color and click a wall. The screen of Figure 14 also includes buttons or icons for selecting a brush tool 52, an eraser tool 59, and a mask tool 61. The paint bucket tool is automatically selected upon entering the application, allowing the user to click on a selected wall, for example 51, with a directed mouse cursor 49, and add a selected color, for example 45, to the wall 51 in the image of the room 39, as shown in Figure 14, where the selected color 45 is indicated by cross-hatching on the wall 51. The user can, of course, use the bucket tool to add color to other areas or objects in the room depicted in the image 39.
[0060] Figure 14 also illustrates a tolerance slider tool 54. Use of the slider 54 reduces or increases the amount of paint added to the clicked area and increases the radius of the paint applied to the surface 51, allowing the user to fill in unpainted areas, for example, 53, as shown in Figures 14 and 15. In the embodiment of Figures 14 and 15, the mouse cursor pulls the dark area 55 (the "slider") to the right within the triangle 56 to increase the painted area and fill unpainted areas, for example, area 53.
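The slider's behavior can be sketched as a direct mapping from slider position to an edge tolerance used by the fill. This is a minimal illustrative sketch only; the function names and the 0-30 range are assumptions drawn from the tolerance example given elsewhere in the disclosure, not the application's actual code.

```javascript
// Illustrative sketch (assumed names): map a slider position in
// [0, 1] to an integer edge tolerance in [0, 30].
function sliderToTolerance(position) {
  var clamped = Math.max(0, Math.min(1, position));
  return Math.round(clamped * 30);
}

// An edge pixel is ignored (painted over) when its Sobel white
// value falls at or below the current tolerance.
function edgeIsIgnored(sobelValue, tolerance) {
  return sobelValue <= tolerance;
}
```

Pulling the slider to the right raises the tolerance, so weaker Sobel edges stop blocking the fill and the painted area grows.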
[0061] For any areas that were not covered using the paint bucket tool, the user can select the brush tool 52, as illustrated in Figure 16. The user can change the size of the area that the brush 52 will cover by selecting one of the highlighted circles or "brush sizes" 53. After selecting the desired brush size, the user can paint the areas, for example, 60, 62 in Figure 16, that were missed by the paint bucket tool to give a more complete appearance, as shown in Figure 17. If paint has splashed into an unwanted area, for example, the pillow 57 in Figures 17 and 18, the user can click the eraser icon 59, select an eraser size, and, for example, click on the desired area to remove the paint from the unwanted area, achieving the result shown in Figure 19.
[0062] If the user is having a problem with paint bleeding into unwanted areas and would like to cordon off a section of an area, the user can click the mask tool 161, Figure 20. There are two types of mask tools: one is a line tool that allows a user to block off an area with a straight line 58 (Figure 20) and then paint, for example, with the brush 52, without worrying that the paint will bleed into the area 158 that the user has masked; see Figure 21.
[0063] The other mask tool is a polygon tool 62, as shown in Figures 22 and 23, which allows users to mask areas that the straight line tool 61 is not able to mask accurately. The polygon tool 62 draws a new line, for example, 63 (Figure 22), 64, 65, 66 (Figure 23), each time the user clicks and moves the mouse. The polygon tool 62 stops making new lines when the user connects the last desired line 66 to the point where the initial click was made. In one embodiment, this connection point may appear as a small circle on the original line 63.
[0064] The user can then add color to the area 70 outside or inside the polygon, as illustrated in Figure 24. The user can click a "Hide Mask" icon 72, Figure 25, to hide the polygonal mask and see what the image 70 looks like. Even while the polygonal mask is hidden, it still blocks the paint from bleeding into unwanted areas, for example, 71. Alternatively, the user can click a "Remove All" icon 78 to completely remove a mask. In one embodiment, removing a mask removes the barrier that was created by the mask, and any additional paint added can then flow into the previously masked area. To start over from the beginning with the same uploaded image, the user can click an "Unpaint the Image" icon or link 73, which returns the image to the unpainted state the user started with, as shown in Figure 26.
[0065] As discussed above, the front-end screen presented to the user includes a "bucket" tool, a "brush" tool 52, and mask tools 61, 62. A brush size selection tool 53 is provided to allow selection of different sizes of "brushes" for the brush tool 52. A tolerance selector 54 is also provided to select among various tolerances, allowing a larger or smaller area to be painted. In one embodiment, the bucket tool can be represented by a bucket mouse icon 49 on a room screen, as shown in Figure 27, and the brush tool can be represented by a movable circle 50 whose radius corresponds to the size of the selected brush, as shown in Figure 28.
[0066] As can be appreciated, various illustrative embodiments of an automated method, apparatus, or non-transitory computer-readable medium or media for allowing a user to paint an uploaded image displayed on a computer-controlled display device can comprise any one or more of a paint brush tool, an eraser tool, first and/or second mask tools, or a tolerance slider tool as described above.
[0067] One aspect of the illustrative embodiment involves preprocessing the uploaded image, for example, 39, to determine in advance the areas that should be painted the same color, and caching or storing the results for later use during the user's painting operations with the tools described above. According to illustrative embodiments, very different image processing techniques are combined to better define such areas, including a new combination of the Canny and Sobel detection algorithms. The use of the Sobel algorithm makes it possible to create a tolerance slider to assist in defining, for example, the edges of a wall so that color does not run into undesirable areas. According to the illustrative embodiment, the Canny algorithm is employed to straighten edges that the Sobel algorithm is unable to straighten, and to fill in the gaps left by Canny edge detection. The overall result is a more accurate color rendering of a room as painted by the user.
[0068] According to an illustrative embodiment, the processing steps are implemented to preserve the relative intensity (shading) of the original image and apply it to the newly painted image, providing a much more realistic rendering. Another innovative aspect of the illustrative system is that the image processing application runs on the client side, without the intervention of a server, which results in much faster and more responsive processing. All of the relatively intensive calculations described below can be performed by an application written using, for example, JavaScript, a relatively simple browser language.
Preprocessing in the Paint Your Place Application
[0069] Preprocessing to determine in advance which areas of an uploaded image should be painted the same color is illustrated by the flow diagram of Figure 29. When the image is first loaded, a bilateral smoothing algorithm is performed, step 101, in order to eliminate noise from flat surfaces while maintaining edge integrity and color differences. Then, a snapshot of each pixel's brightness is taken and placed in a cache or other memory, step 103. Next, the Sobel and Canny edge detection algorithms are run against the image, steps 105, 107. The results of running the Canny algorithm are stored separately from the results of running the Sobel algorithm, on two separate canvases. As discussed below, the Canny data is used only for edge correction and stretching.
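As a concrete illustration of the Sobel step, a minimal gradient-magnitude computation over a 2-D luminance array might look as follows. This is an illustrative sketch in plain JavaScript, not the application's implementation, which operates on canvas pixel data.

```javascript
// Illustrative Sobel gradient-magnitude sketch over a grayscale
// image stored as rows of luminance values (0-255). Border pixels
// are left at 0 for simplicity. Strong edges approach 255.
function sobelMagnitude(gray) {
  var h = gray.length, w = gray[0].length;
  var out = [];
  for (var y = 0; y < h; y++) out.push(new Array(w).fill(0));
  for (var y = 1; y < h - 1; y++) {
    for (var x = 1; x < w - 1; x++) {
      // Horizontal and vertical 3x3 Sobel kernels.
      var gx = -gray[y-1][x-1] + gray[y-1][x+1]
               - 2 * gray[y][x-1] + 2 * gray[y][x+1]
               - gray[y+1][x-1] + gray[y+1][x+1];
      var gy = -gray[y-1][x-1] - 2 * gray[y-1][x] - gray[y-1][x+1]
               + gray[y+1][x-1] + 2 * gray[y+1][x] + gray[y+1][x+1];
      out[y][x] = Math.min(255, Math.round(Math.sqrt(gx * gx + gy * gy)));
    }
  }
  return out;
}
```

Flat regions produce 0 (black in Figure 31), while sharp luminance changes produce high white values, which is the gradient the tolerance later operates on.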
[0070] Once the edge detection algorithms have been executed, the application iterates over the image, executing a flood fill algorithm on the Sobel data in step 109 to segment the image into areas or "segments" that have the same color. The flood fill algorithm is modified to take into account the natural gradient of the Sobel algorithm, allowing the definition of a tolerance for defining the contours of the image. In particular, a standard flood fill algorithm only checks to see whether pixels are identical. According to the illustrative embodiment, instead of running the flood fill algorithm on the image, it is executed on the Sobel data so as to ignore color differences. As illustrated in Figure 31, the Sobel data is a canvas filled with black, in which walls or edges in the image are defined by white values that vary in intensity from 0 to 255. Weak edges are defined by a lower white value, and the "tolerance" is the white value at or below which the process ignores an edge. For example, a zero tolerance does not ignore any edges, while a tolerance of "30" ignores edges whose value is "30" or below. In one embodiment, the slider 54 (Figure 14) maps one-to-one to the tolerance variable.
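The modified fill can be sketched as a flood fill that walks the Sobel canvas instead of the image, treating any pixel whose white value is at or below the tolerance as interior. This is an illustrative sketch; the names and the 4-connected traversal are assumptions, not the application's code.

```javascript
// Illustrative sketch: flood fill run against Sobel edge data
// (a 2-D array of edge strengths) rather than the image itself.
// Pixels at or below the tolerance are treated as interior.
function floodFillSobel(sobel, startX, startY, tolerance) {
  var h = sobel.length, w = sobel[0].length;
  var visited = [];
  for (var y = 0; y < h; y++) visited.push(new Array(w).fill(false));
  var segment = [];
  var stack = [[startX, startY]];
  while (stack.length > 0) {
    var p = stack.pop();
    var x = p[0], py = p[1];
    if (x < 0 || py < 0 || x >= w || py >= h) continue;
    if (visited[py][x]) continue;
    visited[py][x] = true;
    // Stop at edges stronger than the slider tolerance.
    if (sobel[py][x] > tolerance) continue;
    segment.push([x, py]);
    stack.push([x + 1, py], [x - 1, py], [x, py + 1], [x, py - 1]);
  }
  return segment; // list of [x, y] pixels forming one segment
}
```

Raising the tolerance lets the fill cross weak Sobel edges, which is exactly the effect the slider 54 exposes to the user.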
[0071] As the zones of segments to be painted the same color are defined, Canny edge detection is used to define straight lines, and, if an edge of a segment determined by application of the flood fill algorithm and the Sobel algorithm is close to a Canny edge, the assignment of paint colors is such that the segment is pulled to the Canny edge in order to give sharp rather than flat edges, as illustrated in Figure 32. The specific function applied to do this is isNearbyPixel(colorData, r, g, b, tolerance, x, y, cannyData), which implements Canny edge detection and also accounts for Canny edges to straighten the edges. It is presented as follows:

    function isNearbyPixel(colorData, r, g, b, tolerance, x, y, cannyData) {
        var maxX = x + tolerance;
        if (maxX > W) maxX = W - 1;
        var minX = x - tolerance;
        if (minX < 0) minX = 0;
        var maxY = y + tolerance;
        if (maxY > H) maxY = H - 1;
        var minY = y - tolerance;
        if (minY < 0) minY = 0;
        var isNearby = false;
        var curi = ((y * (W * 4)) + (x * 4));
        for (var curX = minX; curX <= maxX; curX++) {
            for (var curY = minY; curY <= maxY; curY++) {
                var i = ((curY * (W * 4)) + (curX * 4));
                if ((colorData[i] == r && colorData[i + 1] == g && colorData[i + 2] == b) || (cannyData[i] > 125)) {
                    if (curX < maxX && curY < maxY && cannyData[curi] < 125) {
                        if (cannyData[i] < 125) return 2;
                        else return 1;
                    }
                    return true;
                }
            }
        }
        return false;
    }

The "tolerance" referred to in isNearbyPixel is set to 6 pixels from the current pixel being scanned. The function checks to see whether there is a Canny line within 6 pixels of the current pixel and, if so, determines that that pixel is "ok" to be painted. If the line is a weak Canny line, the process does not continue; if it is a strong Canny line, the process continues to check for more pixels. In an illustrative embodiment, a strong Canny line is defined as above 50% white.
[0072] Once an area is identified by segmentation, the pixel color of the area is averaged over the area as a whole in step 110 of Figure 29. Then, that average pixel color of the area is compared, by iterating through the segments previously found in step 111, to determine whether it has the same or a similar average pixel color as a previously found segment. If it does, the area is associated with the previously found color.
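The averaging and matching steps can be sketched as follows. This illustrative sketch uses a plain RGB Euclidean distance with an assumed threshold in place of the modified DE1994 comparison described below; all names are assumptions.

```javascript
// Illustrative sketch: average a segment's RGB color over all of
// its pixels ([r, g, b] triples).
function averageColor(pixels) {
  var r = 0, g = 0, b = 0;
  for (var i = 0; i < pixels.length; i++) {
    r += pixels[i][0]; g += pixels[i][1]; b += pixels[i][2];
  }
  var n = pixels.length;
  return [r / n, g / n, b / n];
}

// Associate the average with a previously found segment of
// similar color, or return -1 if none is close enough.
function matchSegment(avg, previousSegments, threshold) {
  for (var i = 0; i < previousSegments.length; i++) {
    var prev = previousSegments[i].color;
    var d = Math.sqrt(
      Math.pow(avg[0] - prev[0], 2) +
      Math.pow(avg[1] - prev[1], 2) +
      Math.pow(avg[2] - prev[2], 2));
    if (d <= threshold) return i; // associate with this segment
  }
  return -1; // no similar segment found
}
```

A matched segment inherits the earlier segment's color association, so separated regions of the same surface end up painted consistently.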
[0073] In one embodiment, if a user has already painted the image, the process checks against the previous area that was painted, giving that color weight as a means of correcting errors. A typical user is likely to click on a similar color several times, so the process accounts for this behavior so that the areas are not left blotchy. For example, a user may paint a wall, but a brightly lit area may not be caught, so when the user clicks on that area, the process uses the association with the wall that the user just painted to ensure that both areas work with the same constants, so that the brightly lit area is not treated differently.
[0074] The color matching feature for the part of the code that assigns pixel colors to segments in step 111 employs a modified DE1994 algorithm that prioritizes brightness as a check and that also weights the found areas (segments) by size, since, for example, a larger wall is more likely to be the color of the main image as a whole. The modified DE1994 algorithm limits the effect that luminosity has on deltaE and is defined in the following source code:

    ColorUtilities.colorCompareDE1994 = function (l1, a1, b1, l2, a2, b2) {
        var c1 = Math.sqrt(a1 * a1 + b1 * b1);
        var c2 = Math.sqrt(a2 * a2 + b2 * b2);
        var dc = c1 - c2;
        var dl = l1 - l2;
        var da = a1 - a2;
        var db = b1 - b2;
        var dh = Math.sqrt((da * da) + (db * db) - (dc * dc));
        var first = dl / 2;
        var second = dc / (1 + 0.045 * c1);
        var third = dh / (1 + 0.015 * c1);
        return Math.sqrt(first * first + second * second + third * third);
    };
[0075] To give an overview of the preprocessing functions, each "segment" is an area completely surrounded by Sobel lines. For example, using the Sobel data from processing the image of Figure 30, shown in Figure 31, each drawer 90, 91, 92 is its own segment with its own color. The process checks against the average color for each drawer 90, 91, 92 and, since they are similar in color, assigns the same color to all of the drawers 90, 91, 92. These segments 90, 91, 92 are averaged together to decide the common color for each drawer. Since the drawers are a smaller segment as a whole, the process assigns them a lower weight. The wall 93, however, is a large segment and is of a single color (for example, blue), as indicated by the cross-hatching, so the left and right sides have a greater weight in the application of the deltaE algorithm, such that the range for accepting another blue area as a similar color is greater.
[0076] [0076] Once this color matching procedure is complete, the segment under analysis is run through a dilation and then an erosion algorithm, steps 113, 115, to close gaps and harden edges. All associated segments are then averaged together in step 117 to define a base luminosity for the same color across the several segments.
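The dilation-then-erosion step is a morphological closing, and can be sketched as follows for a binary selection mask. This is a generic 3x3 closing, assumed for illustration rather than taken from the application's source; out-of-bounds neighbors are ignored during erosion so that selections touching the image border are not eaten away.

```javascript
// mask is a row-major Uint8Array of 0/1 flags for a w x h image.
function dilate(mask, w, h) {
  var out = new Uint8Array(mask.length);
  for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
      var on = 0;
      // a pixel turns on if any pixel in its 3x3 neighborhood is on
      for (var dy = -1; dy <= 1 && !on; dy++) {
        for (var dx = -1; dx <= 1 && !on; dx++) {
          var nx = x + dx, ny = y + dy;
          if (nx >= 0 && nx < w && ny >= 0 && ny < h && mask[ny * w + nx]) on = 1;
        }
      }
      out[y * w + x] = on;
    }
  }
  return out;
}

function erode(mask, w, h) {
  var out = new Uint8Array(mask.length);
  for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
      var on = 1;
      // a pixel stays on only if its whole in-bounds 3x3 neighborhood is on
      for (var dy = -1; dy <= 1 && on; dy++) {
        for (var dx = -1; dx <= 1 && on; dx++) {
          var nx = x + dx, ny = y + dy;
          if (nx >= 0 && nx < w && ny >= 0 && ny < h && !mask[ny * w + nx]) on = 0;
        }
      }
      out[y * w + x] = on;
    }
  }
  return out;
}

// Closing = dilation followed by erosion: fills small gaps, hardens edges.
function close_(mask, w, h) {
  return erode(dilate(mask, w, h), w, h);
}
```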
[0077] [0077] In step 119, the segment is then cached with its overall average color for future calculations.
User Painting of the Pre-processed Image
[0078] [0078] The "painting" portion of the application employing a "paint bucket" is illustrated in Figures 34-36. When the paint bucket is invoked, step 123, the same flood fill algorithm used in the pre-processing operation described above is used to find the common color areas surrounding the point where the user clicked, step 125. In one embodiment, thirty different flood fills, each with a different tolerance, are performed against the same pixel area, step 127. The process starts with the lowest tolerance, skipping pixels already found in order to optimize performance, but each of the thirty flood fills is stored in its own array to be referenced later. When the process reaches the edges at the end of a flood fill, the previously detailed "isNearbyPixel" function is called, which fills in the gaps between Canny edges and other areas the user has already painted with the selected color and instructs the flood fill algorithm to move beyond the Sobel tolerance, step 129.
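A minimal sketch of a tolerance-limited flood fill over a Sobel edge-strength map, with one cached result per tolerance level as described above, follows. The flat Float32Array layout and the even tolerance spacing are assumptions for illustration; the application's actual fill also consults Canny edges and mask lines, which are omitted here.

```javascript
// Flood fill from a seed pixel; the fill spreads until it reaches pixels
// whose Sobel edge strength meets or exceeds the tolerance.
function floodFill(edgeStrength, w, h, seedX, seedY, tolerance) {
  var selected = new Uint8Array(w * h);
  var stack = [seedY * w + seedX];
  while (stack.length) {
    var i = stack.pop();
    if (selected[i] || edgeStrength[i] >= tolerance) continue;
    selected[i] = 1;
    var x = i % w, y = (i - x) / w;
    if (x > 0) stack.push(i - 1);
    if (x < w - 1) stack.push(i + 1);
    if (y > 0) stack.push(i - w);
    if (y < h - 1) stack.push(i + w);
  }
  return selected;
}

// One fill per tolerance level, lowest first, each cached for later merging.
function cacheFills(edgeStrength, w, h, seedX, seedY, levels) {
  var cache = [];
  for (var t = 0; t < levels; t++) {
    cache.push(floodFill(edgeStrength, w, h, seedX, seedY, (t + 1) * (255 / levels)));
  }
  return cache;
}
```

With thirty levels, a strong edge blocks the low-tolerance fills but is crossed by the high-tolerance ones, which is what lets the tolerance slider grow or shrink the painted area.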
[0079] [0079] In step 129, in addition to snapping pixels to Canny lines during the procedure, pixels are also snapped to nearby masking lines. In one embodiment, the user's mask lines are treated in the same way as regular mask lines: when a user masks, the user's mask lines are added to the mask layer following the same rules. When executing the flood fill algorithm to define the image area, the pixels selected by the flood fill algorithm are processed to determine the average luminosity of the image, step 131 (Figure 35). With the selection defined, dilation and erosion algorithms are performed, step 133, to close gaps within the selection itself.
[0080] [0080] Once all the different tolerances are defined, step 135 (Figure 35), the application proceeds to paint the surface chosen by the user's "bucket call", based on the tolerance established by the tolerance slider 54 (Figure 14), and all calculations from this point on are performed on a pixel-by-pixel basis. All tolerance selections under the selected tolerance are merged to define the painted area, step 137. In one embodiment, a flood fill algorithm is performed for each tolerance when the tolerance is selected. The location and spread of each fill is cached. When a paint action is performed, the process merges the fills, from the smallest to the largest tolerance, into a single paint surface, and the changes are applied per pixel. Thus, the adjustments for each pixel are unique: each pixel has its LCH adjusted based on its own color, the applied color and other weighting factors, to maintain a natural look.
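The merge of the cached tolerance selections into a single paint surface can be sketched as a simple union of flag arrays. The 0/1 Uint8Array cache format is an assumption carried over from the flood-fill sketch; it is not taken from the application's source.

```javascript
// Union of the cached flood fills from the smallest tolerance up to the
// user-selected tolerance index. Each cached fill is a Uint8Array of 0/1
// flags over the same pixel grid.
function mergeFills(cachedFills, selectedIndex) {
  var surface = new Uint8Array(cachedFills[0].length);
  for (var t = 0; t <= selectedIndex; t++) {
    var fill = cachedFills[t];
    for (var i = 0; i < fill.length; i++) {
      if (fill[i]) surface[i] = 1;
    }
  }
  return surface;
}
```

Because each fill is cached once, dragging the tolerance slider only re-runs this cheap union, not the thirty flood fills.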
[0081] [0081] Once the painted area is defined, the application proceeds to determine whether the pixels being painted belong to a segment identified during pre-processing, step 141. If so, the area to be painted is associated with that segment, step 149. If the area selected by the user is not part of a segment determined during pre-processing, the application tries to associate it with a segment, steps 143, 147, 150, by checking the deltaE color difference between the base color selected by the user and all segments found during pre-processing. Weights are applied to favor association with a previously verified segment, if there was one, and to account for the size of the segments. In one embodiment, pixels that have already been painted are stored and, if there is a sufficient delta difference, the stored luminosity from a previous painting operation is used as a means of correcting errors. If a match is not found in test 147, the color is associated with the overall image in step 145. Once an association has been made for the pixel, the cached luminosity value for that segment from previous paint operations, or for the selection as a whole, is used to determine the base luminosity in step 151.
[0082] [0082] Now, using the base luminosity (bl) of the pixel, the actual luminosity of the pixel (al) and the luminosity of the color selected by the user (cl), an algorithm is executed to determine the luminosity that will be applied to each pixel, step 153.
[0083] [0083] The new luminosity (nl) is defined by nl = (al / bl) * cl. The new luminosity (nl) is then modified, step 154, weighting it to be closer to the base value (bl) and pulling it toward the average color of the room. Performing these operations also allows the code to correct any radical color changes within the same segment, along with creating a color representation that has a more natural feel. The greater the difference between the base luminosity (bl) and the color luminosity (cl), the greater the modifier. As a result, light colors applied in dark rooms do not appear too bright, and vice versa. According to one embodiment, the modification algorithm is as follows: (nl * (modifier)) - ((bl - cl) / 1.5)
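Applying the nl = (al / bl) * cl formula and the modifier above mechanically, together with the clamp at 100 that appears in the source listing of paragraph [0086], gives the following sketch. The helper name adjustedLuminosity and the sample values are hypothetical; the per-pixel noise averaging of paragraph [0084] is omitted.

```javascript
// bl: base luminosity of the pixel's segment; al: actual pixel luminosity;
// cl: luminosity of the user-selected color. Returns the adjusted luminosity.
function adjustedLuminosity(bl, al, cl) {
  var modifier = bl - cl;
  if (modifier < 0) {
    modifier = (100 + modifier) / 100;     // dark room, light color: darken slightly
  } else if (modifier > 0) {
    modifier = 100 / (100 - modifier);     // light room, dark color: adjust upward
  } else {
    modifier = 1;                          // equal luminosities: no modification
  }
  var nl = (al / bl) * cl;                 // nl = (al / bl) * cl
  nl = (nl * modifier) - ((bl - cl) / 1.5);
  if (nl > 100) nl = 100;                  // luminosity has a ceiling of 100
  return nl;
}
```

For example, with the hypothetical values bl = 60, al = 55 and cl = 30, the raw nl is 27.5 and the modifier is 100/70, so the adjusted luminosity works out to about 19.29.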
[0084] [0084] Once the new luminosity of each pixel is established, it is averaged with previously found luminosities, step 155, to reduce noise and create a smoother transition. With this averaged luminosity, the newly defined luminosity and the A and B values of the color selected by the user are used to define the replacement color, step 159 (Figure 36).
[0085] [0085] Once all the pixels have been painted with the replacement color, the "rendered" selected color is placed on a canvas separate from the image, step 161. This canvas is updated during tolerance changes, step 163. Once a user has accepted a tolerance in test 167, the color is merged into the painted room, step 172.
[0086] [0086] In an effort to preserve the relative intensity (shading) of the original image in the newly painted image, the following procedure can be applied, according to the following source code implementation of the general color substitutions. (Comments in the code follow the double slashes "//").
// uLumins is the luminosity of the base color that we defined with the previous functions.
// modifier is a value that we create so that light rooms painted in dark colors are painted a little lighter
// and dark rooms painted in light colors are slightly darker, to preserve a natural look and feel.
// toColorLCH[0] is the luminosity value of the color that we intend to paint with.
var modifier = uLumins - toColorLCH[0];
if (modifier < 0) {
    modifier = (100 + modifier) / 100;
} else if (modifier > 0) {
    modifier = (100 / (100 - modifier));
} else {
    modifier = 1;
}
// luminMap.data holds the current luminosity values of the base image
var newLumin = ((luminMap.data[i] / uLumins) * toColorLCH[0]);
newLumin = (newLumin * (modifier)) - ((uLumins - toColorLCH[0]) / 1.5);
// luminosity has a ceiling of 100; if it is above 100 we clamp it to 100
if (newLumin > 100) newLumin = 100;
// the following code averages the pixel against previous pixels as a means of removing noise and correcting errors
if (ix != 0) {
    var leftLuminI = ((iy * (W * 4)) + ((ix - 1) * 4));
    var leftLumin = ColorUtilities.convertRGBToLAB(newImageData.data[leftLuminI],
        newImageData.data[leftLuminI + 1], newImageData.data[leftLuminI + 2])[0];
}
if (prevLumin > 0 && iy != selectionDataBoundingRectMinY && ix != selectionDataBoundingRectMinX &&
    (cachedLumins < 10 || Math.abs(prevLumin - newLumin) > 5 || Math.abs(leftLumin - newLumin) > 5)) {
    if (leftLumin > 0 && ix != 0) {
        newLumin = ((prevLumin * 2) + (leftLumin * 2) + newLumin) / 5;
    } else {
        newLumin = ((prevLumin * 4) + newLumin) / 5;
    }
}
// track the previous lumin for use on the next pixel
prevLumin = newLumin;
// the new color takes the target color values A and B and inserts our determined luminosity
var newColor = ColorUtilities.convertLABToRGB(newLumin, toColorLCH[1], toColorLCH[2]);
[0087] [0087] If a user paints a surface that has already been painted during the session, there is no tolerance check; a flood fill is performed over all areas where the current color appears, painting all contiguous areas with the same color using the methodology described above.
[0088] [0088] The "brush" works from an algorithm similar to the paint bucket process, with the added factor that there is no global painted area from which to define a base luminosity. The process searches to determine whether an image element belongs to a defined segment and, if so, uses that segment to determine the base luminosity; if not, the actual luminosity of the pixel is used. In one embodiment, after painting with the brush, the paint bucket logic is performed to correct the processed color with the additional information and logic that the bucket operation generates.
[0089] [0089] Those skilled in the art will appreciate that various adaptations and modifications of the illustrative embodiments described above can be configured without departing from the scope and spirit of the invention. Therefore, it is to be understood that, within the scope of the appended claims, the invention can be practiced in a manner other than that specifically described herein.
Claims (18)
[0001]
Automated method for allowing a user to paint an image loaded on a computer display device, the automated method characterized by the fact that it comprises: employing one or more computers to perform a plurality of operations in conjunction with a computer-readable medium and a computer-controlled display device, the operations comprising generating a first display on said computer-controlled display device, the first display comprising: a display of an image of an environment uploaded by a user; a display of at least one color selected by the user; a first icon comprising a link that allows the selection of a brush tool; a second icon comprising a link that allows the selection of an eraser tool; and a third icon comprising a link that allows the selection of a masking tool; allowing the user to perform a paint bucket operation to apply a selected color to a first area of the loaded image; allowing the user to use the brush tool to fill an area that was missed while applying the selected color to the first area; allowing the user to use the eraser tool to remove color that bled into an unwanted area during the application of the selected color; and allowing the user to use the masking tool to mask a selected area of the environment image such that the color will not be applied to that selected area; wherein the selection of the brush tool causes a brush size selection display to appear on the first display, the brush size selection display being configured to allow the user to select from a plurality of different brush sizes, each of the different brush sizes allowing the coverage of an area of a different size, and wherein the brush tool is represented in the first display by a movable circle whose radius corresponds to the brush size selected in the brush size selection display.
[0002]
Method, according to claim 1, characterized by the fact that it further comprises generating a display of a tolerance slider tool on said first display and allowing the user to use the slider tool to increase or decrease the painted area.
[0003]
Method, according to claim 2, characterized by the fact that the display of the tolerance slider tool comprises a darkened area within a right triangle that can be dragged to the left or right to increase or decrease the painted area.
[0004]
Method, according to claim 1, characterized by the fact that said masking tool allows a user to block off, with a straight line, an area that should not be painted.
[0005]
Method, according to claim 1, characterized by the fact that said masking tool allows a user to generate a polygon, wherein said first display masks an area within the polygon.
[0006]
Method, according to claim 1, characterized by the fact that said paint bucket operation comprises clicking on a selected color and then clicking on an area where the selected color is to be applied.
[0007]
Method, according to claim 1, characterized by the fact that said paint bucket operation employs a movable bucket icon.
[0008]
Method, according to claim 1, characterized by the fact that the first painted area is a wall, a ceiling or a door.
[0009]
Method, according to claim 1, characterized by the fact that it further comprises: determining an average pixel color within each segment of the plurality of segments; comparing the average pixel color for each of the plurality of segments to determine whether a first segment has the same or a similar average pixel color as a second segment; and associating the first and second segments to be displayed with the same color.
[0010]
Method, according to claim 1, characterized by the fact that it also comprises taking a snapshot of the brightness of each pixel and storing the snapshot in memory.
[0011]
Method, according to claim 1, characterized by the fact that the tolerance value is defined as six pixels.
[0012]
Method, according to claim 1, characterized by the fact that the one or more programmed computers are a client computer that performs the processing without the intervention of a server.
[0013]
Method, according to claim 5, characterized by the fact that the method is performed within a browser of the one or more programmed computers.
[0014]
Apparatus, characterized by the fact that it comprises: a display device; at least one computing device; and an associated data storage memory, the display device, the at least one computing device and the associated data storage memory being configured to generate a display on the display device of an environment image uploaded by a user and to allow the user to perform a paint bucket operation to apply a selected color to a first area of the loaded image, the display further comprising: a first icon comprising a link that allows the selection of a brush tool to fill an area that was missed during the application of the selected color to the first area; a second icon comprising a link that allows the selection of an eraser tool to remove color that bled into an unwanted area during the application of the selected color; and a third icon comprising a link that allows the selection of a masking tool to mask an area that should not be painted; wherein the selection of the brush tool causes a brush size selection display to appear on the display, the brush size selection display being configured to allow the user to select from a plurality of different brush sizes, each of the different brush sizes allowing the coverage of an area of a different size, and wherein the brush tool is represented in the display by a movable circle whose radius corresponds to the brush size selected in the brush size selection display.
[0015]
Apparatus, according to claim 14, characterized by the fact that it is further configured to generate a display of a tolerance slider tool on said display and to allow the user to use the slider tool to increase or decrease the painted area.
[0016]
Apparatus, according to claim 15, characterized by the fact that the display of the tolerance slider tool comprises a darkened area within a right triangle that can be dragged to the left or right to increase or decrease the painted area.
[0017]
Method of processing a loaded digital image of an environment performed by one or more programmed computers, the method characterized by the fact that it comprises: executing, with the one or more programmed computers, a bilateral smoothing algorithm on the loaded digital image in order to remove noise from flat surfaces shown in the loaded digital image of the environment; executing, with the one or more programmed computers, a Sobel edge detection algorithm on the loaded digital image to generate Sobel image data including a plurality of Sobel edges, each Sobel edge having a corresponding edge strength value generated by the Sobel edge detection algorithm; executing, with the one or more programmed computers, a Canny edge detection algorithm on the loaded digital image to generate Canny image data including a plurality of Canny edges, the Canny image data being stored separately from the Sobel image data; executing, with the one or more programmed computers, a flood fill algorithm on the Sobel image data to segment the loaded digital image into a plurality of segments having the same color by comparing a tolerance value with the edge strength values of each of the plurality of Sobel edges and generating a plurality of segment edges corresponding to the Sobel edges having edge strength values greater than the tolerance value, the plurality of segments being formed by connected groupings of the plurality of segment edges; comparing, with the one or more programmed computers, the segment edges for each segment with the plurality of Canny edges to determine whether a particular segment edge is within a predetermined number of pixels from a particular Canny edge, and modifying the particular segment edge to correspond to the particular Canny edge in response to determining that the particular segment edge is within the predetermined number of pixels from the particular Canny edge; receiving from a user, with the one or more programmed computers, a segment selected from the plurality of segments and an ink color selected from a plurality of ink colors; and displaying, on a display controlled by the one or more programmed computers, the loaded digital image of the environment with the selected segment painted with the selected ink color.
[0018]
Non-transitory computer-readable medium storing computer-readable instructions that are executable by one or more computers to perform a method of processing a loaded digital image of an environment, the method characterized by the fact that it comprises: executing, with the one or more programmed computers, a bilateral smoothing algorithm on the loaded digital image in order to remove noise from flat surfaces shown in the loaded digital image of the environment; executing, with the one or more programmed computers, a Sobel edge detection algorithm on the loaded digital image to generate Sobel image data including a plurality of Sobel edges, each Sobel edge having a corresponding edge strength value generated by the Sobel edge detection algorithm; executing, with the one or more programmed computers, a Canny edge detection algorithm on the loaded digital image to generate Canny image data including a plurality of Canny edges, the Canny image data being stored separately from the Sobel image data; executing, with the one or more programmed computers, a flood fill algorithm on the Sobel image data to segment the loaded digital image into a plurality of segments having the same color by comparing a tolerance value with the edge strength values of each of the plurality of Sobel edges and generating a plurality of segment edges corresponding to the Sobel edges having edge strength values greater than the tolerance value, the plurality of segments being formed by connected groupings of the plurality of segment edges; comparing, with the one or more programmed computers, the segment edges for each segment with the plurality of Canny edges to determine whether a particular segment edge is within a predetermined number of pixels from a particular Canny edge, and modifying the particular segment edge to correspond to the particular Canny edge in response to determining that the particular segment edge is within the predetermined number of pixels from the particular Canny edge; receiving from a user, with the one or more programmed computers, a segment selected from the plurality of segments and an ink color selected from a plurality of ink colors; and displaying, on the display controlled by the one or more programmed computers, the loaded digital image of the environment with the selected segment painted with the selected ink color.