Patent abstract:
The present invention stores a bump image for the three-dimensional model of an object and corrects it at the rendering stage in the same way as a texture image. In addition, a reverse-mapping address into the texture image or the bump image is stored in the rendering memory, so that the drawing-target address in the texture image or bump image is obtained directly from the rendering-memory address.
Publication number: KR19990063173A
Application number: KR1019980055933
Filing date: 1998-12-18
Publication date: 1999-07-26
Inventor: Hiroshi Nagashima
Applicant: Shima Masahiro; Kabushiki Kaisha Shima Seiki Seisakusho
IPC main class:
Patent description:

Image processing device
The present invention relates to an image processing apparatus, and more particularly to the rendering of three-dimensional (3D) images.
In the processing of a three-dimensional image, the three-dimensional model of an object is divided into a plurality of polygons, and a normal vector is obtained for each polygon. A two-dimensional texture image is then mapped onto the polygonal surfaces of the three-dimensional object, so that the texture appears to be attached to the surfaces in a pseudo manner. The texture image can be modified in the same way as any two-dimensional image, but changing the three-dimensional model itself is very difficult. In particular, it is difficult to model delicate convexities and concavities of the surface in the three-dimensional model. Bump mapping is a known technique that modulates the apparent convexities of the object surface by mapping a bump image, which indicates the modulation direction of the normal vectors, onto the surface of the three-dimensional object.
To correct a bump image, it has conventionally been necessary to map the bump image onto the object surface, modulate the normal directions, and then correct the bump image while imagining the display image that rendering will produce. After the bump image has been corrected to some extent, the modified bump image is remapped onto the surface of the three-dimensional object and re-rendered so that the display image can be viewed and the result of the correction confirmed. If the desired correction has not been achieved, the bump image must be corrected again. For this reason, the bump image has to be corrected over and over.
An object of the present invention is to map a bump image onto an object, to modify the bump image interactively using the image obtained by rendering, and to map and re-render the result of the bump-image correction onto the object in real time, making it easier to express the convexities of objects.
Another object of the present invention is to facilitate reverse mapping from a display image obtained by rendering to a texture image or a bump image so that the correction of the bump image can be made more interactive.
Another object of the present invention is to enable a texture image and a bump image to be corrected in the same processing system.
Another object of the present invention is to make it easier to calculate the normal vector of the bump image.
Another object of the present invention is to allow reverse mapping into the texture image and the bump image to share a common reverse-mapping address.
Fig. 1 is a main block diagram of an image processing apparatus according to an embodiment;
Fig. 2 is a block diagram of a normal vector calculation unit in the embodiment;
Fig. 3 is a diagram showing the data configuration of a rendering memory in the embodiment;
Fig. 4 is a diagram showing the relationship between the image in the rendering memory and a texture image in the embodiment;
Fig. 5 is a flowchart showing the rendering algorithm in the embodiment;
Fig. 6 is a view showing the action of a bump image;
Fig. 7 is an explanatory diagram showing the action of a bump image in the embodiment;
Fig. 8 is a view showing the display image after the bump image of Fig. 7 is changed;
Fig. 9 is a view showing the display image when the light-source setting of Fig. 8 is changed;
Fig. 10 is a diagram showing pen shapes used for correcting a bump image or the like.
Explanation of symbols on the main parts of the drawings
2: geometry 4: polygon sequencer
6: vertex normal data calculation unit
8: normal vector calculation unit
10: image source
12: texture image memory
14: bump memory
17, 18: interpolation processing units
20: normal vector calculation unit 22: normal modulation processing unit
24: light source color setting unit
26: Phong shading processing unit
28: hidden-surface processing unit
30: rendering memory
34: monitor 36: drawing input means
38: address generator
40: drawing processing unit 42: area memory
44: selector 46: linear interpolation processing unit
50, 51: normal vector tables 52: normalization table
60: color data layer 61: Z data layer
62: reverse-mapping address layer
64: display area 65: rendering area
66: display image 68: texture image
70, 71: pens
The present invention provides an image processing apparatus for mapping a two-dimensional texture image onto a three-dimensional model of an object and displaying the image by rendering, the apparatus comprising:
A bump memory for storing bump images of the three-dimensional model;
Modulating means for modulating the normal direction of the three-dimensional model with a bump image,
Mapping means for mapping the two-dimensional texture image to a three-dimensional model whose normal direction is modulated;
Rendering means for obtaining and displaying a display image by rendering a three-dimensional model mapped with a two-dimensional texture image;
And a drawing means for specifying a position in the display image and correcting a bump image corresponding to an address of a bump memory corresponding to the designated position.
Here, the texture image is a two-dimensional image representing the color, shadow, pattern, etc. of the object surface, and the bump image is, for example, a two-dimensional image for modulating the normal directions of the three-dimensional model of the object. In this specification, mapping means mapping from a texture image or bump image to the three-dimensional object surface; reverse mapping means mapping from the rendered image back to the texture image or bump image; and rendering means converting the three-dimensional model into a two-dimensional image according to how the object looks from the viewpoint. In the embodiment, the rendering means includes a Phong shading processing unit, a hidden-surface processing unit, a rendering memory, a monitor, and the like.
Preferably, a rendering memory for storing the display image obtained by rendering is provided,
and a reverse-mapping address for reverse mapping from the display image to the two-dimensional texture image and the bump image is stored in the rendering memory.
Preferably, the drawing means is configured to selectively supply and correct one of the two-dimensional texture image and the bump image.
Preferably, the apparatus includes a means for obtaining a normal vector in two planes for each position of the bump image, and a reference table for obtaining a normal direction of the bump image from the obtained two normal vectors.
Preferably, the texture image and the bump image are scaled (resized) and stored in a common coordinate system.
(Embodiment)
Figs. 1 to 9 show an embodiment. Fig. 1 shows an outline of the image processing apparatus of the embodiment. Reference numeral 2 denotes the geometry, in which the surface of an object is divided into a plurality of polygons, for example a plurality of triangles. Reference numeral 4 denotes a polygon sequencer based on scene determination, which outputs the triangles in order. Reference numeral 6 denotes a vertex normal data calculation unit that obtains normal data, for example a normal vector, for each vertex. Reference numeral 8 denotes a normal vector calculation unit that obtains, for example, the normal vector of each pixel by interpolating the normal vectors of the three vertices.
Reference numeral 10 denotes an image source, for example a scanner for reading the original of a two-dimensional texture image or bump image, a file storing a two-dimensional image, or an input/output device for such files. Reference numeral 12 denotes a texture memory in which a plurality of planes are prepared to store a plurality of texture images. Similarly, 14 denotes a bump memory in which a plurality of planes are prepared to store two-dimensional bump images. A bump image is, for example, a monochrome image whose values represent the convexity of the object surface; for example, the surface is convex where the image is bright and concave where it is dark. What matters in a bump image is the relative convexity with respect to the surroundings. Even when there are a plurality of texture images and bump images, they are scaled to fit, their offsets are removed, and they are stored in a common coordinate system.
Reference numerals 17 and 18 denote interpolation processing units which interpolate the image data of the memories 12 and 14 at sub-pixel addresses. Reference numeral 20 denotes the normal vector calculation unit for the bump image, described later with reference to Fig. 2. Reference numeral 22 denotes a normal modulation processing unit that modulates the normal vector of each polygon with the normal vectors obtained from the bump image.
Reference numeral 24 denotes a light source color setting unit that sets the three-dimensional position of a light source and its brightness or color tone. Reference numeral 26 denotes a Phong shading processing unit, which calculates the luminance of each polygon using the modulated normal vector of each polygon and the position of the light source, taking the reflection between polygons into account. Reference numeral 28 denotes a hidden-surface processing unit that leaves only the polygons appearing on the visible surface, excluding polygons on hidden surfaces. Reference numeral 30 denotes a rendering memory that stores the display image (rendered image) obtained by rendering and displays it on the monitor 34. The structure of the rendering memory 30 is described later with reference to Fig. 3.
Reference numeral 36 denotes drawing input means, for example a combination of a digitizer and a stylus, which designates a drawing position on the monitor 34 using a cursor or the like. The kind of drawing input by the drawing input means 36 is specified by a menu or the like managed by a front-end processor not shown in the figure. Reference numeral 38 denotes an address generator that converts a drawing position from the drawing input means 36 into an address in the rendering memory 30 and supplies it to the rendering memory 30. Reference numeral 40 denotes a drawing processing unit, and 42 denotes an area memory for temporarily storing an image during the drawing operation. Reference numeral 44 denotes a selector that selects one of the texture image and the bump image; the selected image and the drawing image stored in the area memory 42 are combined by linear interpolation and written back into the same memory.
In Fig. 1, I denotes a designated coordinate (drawing position) from a drawing input device such as a digitizer, M denotes a mapping address for mapping from a texture image or a bump image to the surface of an object, and M' denotes a reverse-mapping address for reverse mapping from the rendered image to a texture image or a bump image. P and Q represent outputs of the linear interpolation processing unit 46, P being the output for the texture image and Q the output for the bump image. R represents the input from the texture image to the selector 44, and S the input from the bump image to the selector 44.
Fig. 2 shows the configuration of the normal vector calculation unit 20. Writing the coordinates in the two-dimensional bump image as (i, j), the four points adjacent to the pixel u(i, j) whose normal vector is to be obtained are used to read normal vectors in two perpendicular planes from the normal vector tables 50 and 51. The tangent vector in the vertical plane along the i direction is parallel to u(i+1, j) - u(i-1, j), and the normal vector is perpendicular to it, so u(i+1, j) and u(i-1, j) are used to read the first normal vector nx from the normal vector table 50. Similarly, u(i, j+1) - u(i, j-1) is used to read the normal vector ny in the vertical plane along the j direction. Normalizing the combination of these two normal vectors gives a normal vector in three-dimensional space: using the normalization table 52, the vector corresponding to

N = (nx, ny) / (nx² + ny²)^(1/2)

is read out and set as the normal vector. In this way the normal vector is obtained by two stages of table look-ups from the tables 50, 51, and 52, without computing a cross product. The method of calculating the normal vector itself is, of course, arbitrary.
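As a sketch of the computation just described (an illustrative reconstruction, not the patent's table-based circuit: the look-ups from tables 50, 51, and 52 are replaced here by direct central differences and direct normalization):

```python
import math

def bump_normal(u, i, j):
    # Slopes from the four neighbours of u(i, j), as in Fig. 2:
    # u(i+1, j) - u(i-1, j) and u(i, j+1) - u(i, j-1).
    gx = u[i + 1][j] - u[i - 1][j]
    gy = u[i][j + 1] - u[i][j - 1]
    # A surface normal perpendicular to both tangents; the z component 2.0
    # reflects the two-pixel baseline of the central differences.
    nx, ny, nz = -gx, -gy, 2.0
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)  # stands in for table 52
    return (nx / norm, ny / norm, nz / norm)

# A flat bump image yields the unmodulated normal (0, 0, 1).
flat = [[0.0] * 3 for _ in range(3)]
print(bump_normal(flat, 1, 1))  # (0.0, 0.0, 1.0)
```

A height ramp in the i direction tilts the normal away from the z axis, which is exactly the modulation the normal modulation processing unit 22 applies to the polygon normals.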
Fig. 3 shows the structure of the rendering memory 30: 60 denotes a color data layer, 61 a Z data layer, and 62 a reverse-mapping address layer, each storing image data after rendering in, for example, 32-bit words. The color data layer 60 stores, for example, the color signal of each pixel as 8 bits each of R, G, and B (24 bits in total) and the opacity α of each pixel in the remaining 8 bits. The Z data layer 61 is, for example, 32 bits long, and the Z value specifies the position of the surface relative to the viewpoint: the smaller the Z value, the closer to the viewpoint, and the larger the Z value, the farther from it. The reverse-mapping address layer 62 stores, for example in 32-bit words, the reverse-mapping address M' for reverse mapping from each pixel to the texture image or the bump image. Reference numeral 64 denotes a display area; the portion of the rendered image inside the display area 64 is displayed on the monitor 34.
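The per-pixel layout described above can be sketched as a byte-level packing (the field order and little-endian packing are illustrative assumptions; the patent fixes only the widths: 24-bit RGB plus 8-bit opacity in layer 60, a 32-bit Z value in layer 61, and a 32-bit reverse-mapping address M' in layer 62):

```python
import struct

# One rendering-memory pixel: layer 60 (R, G, B, alpha), layer 61 (Z),
# layer 62 (reverse-mapping address M').
PIXEL = struct.Struct('<BBBBII')  # 4 + 4 + 4 = 12 bytes per pixel

def pack_pixel(r, g, b, alpha, z, m_prime):
    return PIXEL.pack(r, g, b, alpha, z, m_prime)

def unpack_pixel(data):
    return PIXEL.unpack(data)
```

A round trip through `pack_pixel`/`unpack_pixel` preserves all six fields, mirroring how the three layers are read and written independently.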
When an object is added, the Z value of each polygon of the added object's surface is compared with the Z value in the Z data layer 61; the side with the smaller Z value becomes the front surface, and the side with the larger Z value the occluded surface. Suppose, for example, that the image in the color data layer 60 (color value C1, opacity α1) is the front surface and the polygon of the added object's surface (color value C2, opacity α2) is the occluded surface. The color value of the rendered image after the object is added is then α1·C1 + (1 − α1)·α2·C2, and the opacity is (α1 + α2) − α1·α2. When the opacity is 1, the surface is completely opaque; when the opacity is 0, the surface is completely transparent. The reverse-mapping address layer 62 is used when correcting the texture image or the bump image: the address for reverse mapping the drawing position in the rendered image to the texture image or bump image is read from the reverse-mapping address layer 62.
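The compositing rule quoted above can be written out directly (a minimal sketch: c1 and c2 are per-channel color values in [0, 1], α1 and α2 opacities, with α = 1 fully opaque):

```python
def composite(c1, a1, c2, a2):
    # Front surface (c1, a1) over the newly added, occluded surface (c2, a2),
    # using the formulas from the description above.
    color = a1 * c1 + (1.0 - a1) * a2 * c2
    alpha = (a1 + a2) - a1 * a2
    return color, alpha

# A fully opaque front surface hides the added polygon entirely:
print(composite(0.3, 1.0, 0.9, 1.0))  # (0.3, 1.0)
```

Conversely, a fully transparent front surface (α1 = 0) passes the added polygon through unchanged, as the formula requires.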
Fig. 4 shows the relationship between the rendered image and the texture image, where 65 is the rendering region to be rendered and 66 is the display image. The display image 66 is obtained by pseudo-attaching the texture image 68 to the polygons of the three-dimensional object surface, the texture image 68 being deformed according to the texture mapping.
Fig. 5 shows the rendering algorithm of the embodiment. In the step before rendering, polygon data obtained by dividing the object into a plurality of polygons is supplied from the sequencer 4; the vertex normal data calculation unit 6 obtains the three-dimensional coordinates of the vertices, and the normal vector calculation unit 8 obtains the normal vector of each pixel. The original texture image or bump image is supplied from the image source 10 and stored in the texture memory 12 or the bump memory 14. The mapping address generator 16 generates the mapping address M for mapping the texture image or bump image to the position of each polygon, together with the reverse-mapping address M' from the rendered image to the texture image or bump image; the interpolation processing units 17 and 18 interpolate at sub-pixel coordinates; the normal modulation processing unit 22 modulates the normal directions; and the Phong shading processing unit 26 performs the shading process. The hidden-surface processing unit 28 performs hidden-surface processing, and the rendered image is stored in the rendering memory 30. At this time, the reverse-mapping address M' is also stored in the rendering memory 30.
In addition, when a texture image or a bump image is mapped onto the three-dimensional object, it is desirable to adjust their sizes and coordinate origins by scaling and to store them again, with a common coordinate system, in the texture memory 12, the bump memory 14, or another memory. When there are a plurality of texture images and bump images, the coordinate system is made common to all of them. All texture images and bump images can then easily be reverse-mapped with a common reverse-mapping address M'.
Drawing starts after the rendering process, and input is made through the drawing input means 36. The selector 44 selects, via the menu, whether the bump image or the texture image is to be processed, and the selected image is corrected.
When a drawing input is made, the drawing processing unit 40 and the area memory 42 are used to obtain, for example, the drawn object of one stroke; this is blended with the texture image or bump image by the linear interpolation processing unit 46 and written back to the original address. Drawing into the bump image and drawing into the texture image can thus be processed with a common architecture; only the processing target, switched by the selector 44, differs in the drawing process on the two-dimensional image. When writing into the bump image, as when writing into the texture image, a large number of virtual drawing tools (pens) may be prepared, one of which is selected on a menu screen (not shown) or the like and used for input from the drawing input means 36. Fig. 10 shows distributions of pen output density, where the Y axis represents density and the X axis the distance from the pen center: 70 represents a simple pen, and 71 an edge-enhancing pen with a valley around it.
In the embodiment, since the reverse-mapping address M' is stored in the rendering memory 30, a coordinate designated by the drawing input means 36 is converted by the address generator 38 into an address in the rendering memory 30; the reverse-mapping address is then read from the reverse-mapping address layer 62, and the data of the texture image or bump image selected by the selector 44 is supplied to the linear interpolation processing unit 46. That is, when the drawing input means 36 specifies the pen position, the position is converted into a rendering-memory address and then reverse-mapped into the texture image or bump image. The reverse mapping can therefore be performed at high speed, without computing the reverse-mapping address each time in the mapping address generator 16, and the bump image and the texture image can be corrected more interactively.
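The path from a pen position to an image correction can be sketched as follows (the dict-based reverse-mapping layer, the `(u, v)` address format, and the `pen` callable are illustrative assumptions, not the patent's hardware):

```python
def apply_stroke(reverse_layer, image, stroke, pen):
    """For each screen position of a stroke, read the reverse-mapping
    address M' from layer 62 and redraw the selected two-dimensional
    image (texture or bump, as chosen by selector 44) at that address."""
    for x, y in stroke:
        m_prime = reverse_layer.get((x, y))  # layer 62; None = background
        if m_prime is None:
            continue  # nothing is mapped to this screen pixel
        u, v = m_prime
        image[v][u] = pen(image[v][u])       # redraw at the reverse-mapped address
    return image

# A darkening pen applied to one mapped pixel and one background pixel:
layer62 = {(10, 10): (1, 0)}
bump = [[0.5, 0.5]]
apply_stroke(layer62, bump, [(10, 10), (99, 99)], lambda h: h - 0.25)
print(bump)  # [[0.5, 0.25]]
```

Only the mapped pixel changes; the background position is skipped, which is why the stored M' makes the correction loop cheap.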
The new image obtained by the linear interpolation processing unit 46 is then written back into the original memory 12 or 14. The corresponding polygons are thereby rendered again, the color data and opacity in the rendering memory 30 are corrected, and the correction result is displayed on the monitor 34.
Fig. 6 shows the principle of bump mapping. The top of Fig. 6 shows the orientation of a bump-free object surface, with the normal vector of each part of the surface drawn as an arrow. Below it, the data of a bump image are shown (in the contrast values of the bump image, for example, bright is convex and dark is concave). The bump image is mapped onto the bump-free surface; that is, when the direction of the normal vector of the bump-free surface is modulated by the normal vector obtained from the bump image, the normal direction changes as shown at the bottom of Fig. 6. For example, if there are two concave portions in the bump image, the normal direction of the object surface changes at these portions as shown at the bottom of Fig. 6, which is equivalent to the normal direction of the object surface being corrected by the bump image. Since the bump image is corrected in real time at the rendering stage using the drawing input means 36, the convexities of the object surface can be corrected on the image of the rendering result. Because the result of correcting the convexities through the correction of the bump image is displayed in real time on the monitor 34, the bump image can be corrected while the result is checked.
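The effect of this modulation on shading can be illustrated with a Lambertian brightness term (a simplified sketch: additive perturbation of the normal and a pure diffuse term stand in for the patent's normal modulation and Phong shading):

```python
import math

def modulate(base, perturb):
    # Perturb the surface normal by the bump-image deviation, then renormalize.
    v = [b + p for b, p in zip(base, perturb)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def brightness(normal, light):
    # Lambertian (diffuse) term for unit vectors.
    return max(0.0, sum(n * l for n, l in zip(normal, light)))

light = [0.0, 0.0, 1.0]
flat = brightness(modulate([0.0, 0.0, 1.0], [0.0, 0.0, 0.0]), light)
bumped = brightness(modulate([0.0, 0.0, 1.0], [0.5, 0.0, 0.0]), light)
print(flat, bumped < flat)  # 1.0 True
```

Tilting the normal away from the light darkens the pixel, which is why editing the bump image changes the displayed contrast, as in Figs. 8 and 9.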
Figs. 7 to 9 show the operation of the bump image. Fig. 7 is composed of three objects, the sea surface, an island, and the sky as the background, so the texture memory 12 stores three images, the sea-surface image, the island image, and the background sky image, in three planes. Similarly, the bump memory 14 stores the bump images of the sea surface and the island in two planes. A bump image is, for example, a monochrome image: the brighter a position, the more convex it is with respect to its surroundings, and the darker, the more concave. The differences in local contrast in the bump image express the local convexities and concavities of the object. When the bump image is to be corrected, the bump memory 14 is selected by the selector 44, and the image can be corrected with the drawing input means 36 in the same manner as a texture image.
In the state of Fig. 7, two waves are present on the sea surface, and the bump image is formed so that the crests of the waves are convex and their troughs concave. Here a new wave is added with the drawing input means 36, and the image then changes as shown in Fig. 8. In Figs. 7 and 8, the bump image is shown in cross section along the direction A. When the light-source position is changed from that of Fig. 8 to that of Fig. 9, the normal directions of the object surfaces relative to the light source change, and the contrast of the objects changes as shown in Fig. 9.
The present invention stores the bump image of the 3D model in the bump memory so that the bump image can be modified interactively during the rendering process. When the bump image is modified, the normal directions of the surface of the 3D model are modulated, the corrected bump image is mapped onto the object in real time, and the object, with the texture image mapped onto it, is rendered and displayed again; the bump image can thus be corrected while the resulting convexities are displayed in real time. The bump image may be an image for each model or an image covering a plurality of models.
Preferably, a reverse-mapping address for reverse mapping from the rendered image to the texture image or bump image is stored in the rendering memory; when the reverse-mapping address is read from the rendering memory, the drawing address in the texture image or bump image is obtained directly, so the texture image and the bump image can be modified in real time.
Preferably, one of the texture image and the bump image is supplied to the common drawing means for correction. That is, the drawing in a texture image and the drawing in a bump image can also be processed by common drawing means.
Preferably, by obtaining normal vectors of the modulation direction of the object surface from the bump image in two planes, such as the XZ and YZ planes, the normal direction of the bump image can be obtained from a reference table, so that the normal direction is obtained without a cross-product operation.
Preferably, the texture image and the bump image are scaled and stored in a common coordinate system. The same reverse-mapping address can then be used for reverse mapping into both the texture image and the bump image.
Claims:
Claims (5)
[Claim 1: currently amended] An image processing apparatus for mapping a two-dimensional texture image onto a three-dimensional model of an object and displaying the image by rendering, the apparatus comprising:
A bump memory for storing a bump image of the three-dimensional model;
Modulating means for modulating the normal direction of the three-dimensional model with a bump image,
Mapping means for mapping the two-dimensional texture image to a three-dimensional model whose normal direction is modulated;
Rendering means for obtaining and displaying a display image by rendering a three-dimensional model mapped with a two-dimensional texture image;
And drawing means for designating a position in the display image and correcting a bump image corresponding to an address of a bump memory corresponding to the designated position.
[Claim 2: currently amended] The apparatus of claim 1, further comprising:
A rendering memory for storing the display image,
And an inverse mapping address for inversely mapping the two-dimensional texture image and the bump image into a display image in the rendering memory.
[Claim 3: currently amended] The apparatus of claim 1, further comprising:
And selection means for selectively supplying one of the two-dimensional texture image and the bump image to the drawing means.
[Claim 4: currently amended] The apparatus of claim 1, further comprising:
Means for obtaining a normal vector in two planes for each position of the bump image,
And a reference table for obtaining a normal direction of a bump image from the obtained two normal vectors.
[Claim 5: currently amended] The apparatus of claim 1,
And the texture image and the bump image are scaled and stored in a common coordinate system.
Family patents:
Publication number | Publication date
JPH11185054A|1999-07-09|
JP3035571B2|2000-04-24|
DE69824378T2|2004-10-07|
KR100559127B1|2006-11-10|
EP0924642A2|1999-06-23|
DE69824378D1|2004-07-15|
EP0924642A3|2001-04-04|
US6340974B1|2002-01-22|
EP0924642B1|2004-06-09|
Legal status:
1997-12-22|Priority to JP9-365797
1997-12-22|Priority to JP9365797A
1998-12-18|Application filed by 시마 마사히로, 가부시키가이샤 시마세이키 세이사쿠쇼
1999-07-26|Publication of KR19990063173A
2006-11-10|Application granted
2006-11-10|Publication of KR100559127B1
Priority:
Application number | Filing date | Patent title
JP9-365797|1997-12-22|
JP9365797A|JP3035571B2|1997-12-22|1997-12-22|Image processing device|