Patent abstract:
A method for processing a video signal according to the present invention may comprise: a step of determining a set of transformations for a current block comprising a plurality of transformation type candidates; a step of determining a transformation type of the current block from the plurality of transformation type candidates; and a step of performing an inverse transformation on the current block on the basis of the transformation type of the current block.
Publication number: ES2710807A2
Application number: ES201890057
Filing date: 2017-03-28
Publication date: 2019-04-26
Inventor: Bae Keun Lee
Applicant: KT Corp
IPC main class:
Patent description:

[0001]
[0002] Method and apparatus for processing video signals
[0003]
[0004] Technical field
[0005] The present invention relates to a method and device for processing a video signal.
[0006] Background of the technique
[0007] Recently, demand for high-resolution, high-quality images, such as high-definition (HD) images and ultra-high-definition (UHD) images, has increased in various fields of application. However, image data of higher resolution and quality involves increasingly large amounts of data compared to conventional image data. Therefore, when image data is transmitted using a medium such as conventional wired and wireless networks, or when image data is stored using a conventional storage medium, transmission and storage costs increase. To solve these problems, which occur as the resolution and quality of image data increase, high-efficiency image encoding/decoding techniques may be used.
[0008] Image compression technology includes various techniques, including: an inter-prediction technique of predicting a pixel value included in a current picture from a previous or subsequent picture of the current picture; an intra-prediction technique of predicting a pixel value included in a current picture using pixel information in the current picture; an entropy coding technique of assigning a short code to a value with a high frequency of occurrence and a long code to a value with a low frequency of occurrence; etc. Image data can be compressed effectively using such image compression technology, and can be transmitted or stored.
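The entropy coding idea above (short codes for frequent values, long codes for rare ones) can be illustrated with an order-0 exponential Golomb code, a variable-length code commonly used in video coding standards. This is an illustrative sketch only, not the specific entropy coding method of the invention:

```python
def exp_golomb_encode(value):
    """Order-0 exponential Golomb code for a non-negative integer.

    Small (i.e., frequent) values receive short codewords: the codeword is
    (bits - 1) leading zeros followed by the binary form of value + 1.
    """
    code_num = value + 1
    bits = code_num.bit_length()
    return "0" * (bits - 1) + format(code_num, "b")
```

For example, the value 0 encodes as "1" (one bit), while larger, rarer values get progressively longer codewords ("010", "011", "00100", ...).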
[0009] Meanwhile, along with demand for high-resolution images, demand for stereoscopic image content, which is a new image service, has also increased. Video compression techniques for effectively providing stereoscopic image content with high resolution and ultra-high resolution are being discussed.
[0010] Disclosure
[0011] Technical problem
[0012] An object of the present invention is to provide a method and apparatus for hierarchically partitioning a coding block in encoding/decoding a video signal.
[0013] An object of the present invention is to provide a method and apparatus for selectively performing an inverse transform on an encoding/decoding target block in encoding/decoding a video signal.
[0014] An object of the present invention is to provide a method and apparatus for adaptively determining a transform type according to an encoding/decoding target block when inversely transforming the encoding/decoding target block.
[0015] An object of the present invention is to provide a method and apparatus for deriving a quantization parameter residual value according to a characteristic of an encoding/decoding target block in encoding/decoding a video signal.
[0016] The technical objects to be achieved by the present invention are not limited to the above-mentioned technical problems. Other technical problems that are not mentioned will be clearly understood by those skilled in the art from the following description.
[0017] Technical solution
[0018] A method and apparatus for decoding a video signal according to the present invention may determine a transform set comprising a plurality of transform type candidates for a current block, determine a transform type of the current block from the plurality of transform type candidates, and perform an inverse transform on the current block based on the transform type of the current block.
[0019] In the method and apparatus for decoding a video signal according to the present invention, the transform type of the current block may be adaptively selected from the plurality of transform type candidates according to whether the current block satisfies a predetermined condition.
[0020] In the method and apparatus for decoding a video signal according to the present invention, the predetermined condition may be determined adaptively according to a size of the current block.
[0021] In the method and apparatus for decoding a video signal according to the present invention, the inverse transform may comprise a horizontal-direction transform and a vertical-direction transform for the current block, and the transform set may comprise a first transform set for the horizontal-direction transform and a second transform set for the vertical-direction transform.
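As a rough illustration of a separable transform with distinct horizontal and vertical passes, the sketch below builds an orthonormal DCT-II basis and applies it row-wise and column-wise. The choice of DCT-II and the square block size are assumptions for illustration, not the transform types of the invention:

```python
import math

def dct2_basis(n):
    """Orthonormal DCT-II basis matrix; rows are basis vectors."""
    return [[math.sqrt((1.0 if k == 0 else 2.0) / n) *
             math.cos(math.pi * (2 * i + 1) * k / (2 * n))
             for i in range(n)]
            for k in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

def forward_2d(block, t_hor, t_ver):
    # coefficients C = T_ver * X * T_hor^T (vertical pass, then horizontal)
    return matmul(matmul(t_ver, block), transpose(t_hor))

def inverse_2d(coeffs, t_hor, t_ver):
    # reconstruction X = T_ver^T * C * T_hor (inverse of the passes above)
    return matmul(matmul(transpose(t_ver), coeffs), t_hor)
```

Because the basis is orthonormal, the inverse pass exactly recovers the block; in a real codec the horizontal and vertical bases may differ, which is what the two transform sets above allow.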
[0022] In the method and apparatus for decoding a video signal according to the present invention, the transform set may be specified by index information obtained from a bitstream. In the method and apparatus for decoding a video signal according to the present invention, the transform set may be determined based on a transform set of a block, among blocks decoded prior to the current block, that has the same or a similar prediction mode as the current block.
[0023] In the method and apparatus for decoding a video signal according to the present invention, the method and the apparatus may decode information indicating whether the inverse transform is skipped for the current block, and determine whether to perform the inverse transform on the current block based on the information.
[0024] In the method and apparatus for decoding a video signal according to the present invention, the information may comprise information indicating whether a horizontal-direction inverse transform of the current block is skipped and information indicating whether a vertical-direction inverse transform of the current block is skipped.
[0025] In the method and apparatus for decoding a video signal according to the present invention, when the information indicates that the inverse transform for the current block is skipped, at least one of the horizontal-direction transform or the vertical-direction transform may be skipped according to a shape of the current block.
[0026] A method and apparatus for encoding a video signal according to the present invention may determine a transform set comprising a plurality of transform type candidates for a current block, determine a transform type of the current block from the plurality of transform type candidates, and perform a transform on the current block based on the transform type of the current block.
[0027] In the method and apparatus for encoding a video signal according to the present invention, the method and the apparatus may determine a quantization parameter residual value for the current block, determine a quantization parameter for the current block based on the quantization parameter residual value, and perform quantization for the current block based on the quantization parameter.
[0028] In the method and apparatus for encoding a video signal according to the present invention, the quantization parameter residual value may be determined based on an average value related to the current block, and the average value may be determined based on a prediction signal of the current block and a DC coefficient generated as a result of the transform.
[0029] In the method and apparatus for encoding a video signal according to the present invention, the quantization parameter residual value may be determined by referring to a lookup table defining a mapping relationship between the average value and the quantization parameter residual value, and the lookup table may be determined based on at least one of a size, an intra prediction mode, a transform type, or a pixel value of the current block.
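The lookup-table mapping above can be sketched as follows. The table entries, the way the average value is formed from the prediction signal and the DC coefficient, and the base QP are all invented for illustration; the patent does not specify these values:

```python
# Hypothetical lookup table mapping an average-value range to a QP residual.
# Each entry is (min_avg inclusive, max_avg exclusive, delta_qp); the
# thresholds and residuals are illustrative only.
QP_RESIDUAL_LUT = [
    (0,   64,  -3),
    (64,  128, -1),
    (128, 192,  1),
    (192, 256,  3),
]

def derive_qp_residual(prediction_avg, dc_coefficient, base_qp=32):
    """Form an average tied to the prediction signal and the DC coefficient,
    then map it through the lookup table to adjust the quantization parameter."""
    avg = max(0, min(255, (prediction_avg + dc_coefficient) // 2))
    for lo, hi, dqp in QP_RESIDUAL_LUT:
        if lo <= avg < hi:
            return base_qp + dqp
    return base_qp
```

A real encoder would select among several such tables according to block size, intra prediction mode, or transform type, as the paragraph above describes.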
[0030] The features briefly summarized above with respect to the present invention are merely illustrative aspects of the detailed description of the invention that follows, and do not limit the scope of the invention.
[0031] Advantageous effects
[0032] According to the present invention, it is possible to improve encoding/decoding efficiency through hierarchical/adaptive partitioning of a coding block.
[0033] According to the present invention, it is possible to improve encoding/decoding efficiency by selectively performing an inverse transform on an encoding/decoding target block.
[0034] According to the present invention, it is possible to improve encoding/decoding efficiency by adaptively determining a transform type for an encoding/decoding target block.
[0035] According to the present invention, it is possible to improve encoding/decoding efficiency by deriving a quantization parameter residual value according to a characteristic of an encoding/decoding target block.
[0036] The effects achievable by the present invention are not limited to the aforementioned effects, and other effects not mentioned can be understood clearly by those skilled in the art from the description below.
[0037] Description of the drawings
[0038] Figure 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention.
[0039] Figure 2 is a block diagram illustrating a device for decoding a video according to an embodiment of the present invention.
[0040] Figure 3 is a view illustrating an example of hierarchical partitioning of a coding block based on a tree structure according to an embodiment of the present invention.
[0041] Figure 4 is a view illustrating types of predefined intra prediction modes for a device for encoding / decoding a video according to an embodiment of the present invention.
[0042] Figure 5 is a flow chart briefly illustrating an intra prediction method according to an embodiment of the present invention.
[0043] Figure 6 is a view illustrating a method of correcting a prediction sample of a current block based on differential information from neighboring samples according to an embodiment of the present invention.
[0044] Figures 7 and 8 are views illustrating a method of correcting a prediction sample based on a predetermined correction filter according to an embodiment of the present invention.
[0045] Figure 9 is a view illustrating a method of correcting a prediction sample based on offset in accordance with an embodiment of the present invention.
[0046] Figures 10 to 14 are views illustrating examples of an intra-prediction pattern of a current block according to an embodiment of the present invention.
[0047] Figure 15 is a view illustrating a method of performing prediction using an intra-block copy technique according to an embodiment of the present invention.
[0048] Figure 16 shows a range of reference samples for intra prediction according to an embodiment to which the present invention is applied.
[0049] Figures 17 to 19 illustrate an example of filtering of reference samples.
[0050] Figure 20 is a diagram showing an example in which a coding unit is divided into blocks that have a square shape.
[0051] Figure 21 is a diagram showing an example in which a coding unit is divided into blocks having a non-square shape.
[0052] Figure 22 is a flow diagram illustrating a process of performing an inverse transform on a current block.
[0053] Figure 23 shows an example in which a transform set of a transform unit is determined based on an intra prediction mode of a prediction unit.
[0054] Figure 24 is a flow chart illustrating a method of deriving a quantization parameter difference value according to an embodiment of the present invention.
[0055] Mode for invention
[0056] A variety of modifications may be made to the present invention, and there are various embodiments of the present invention, examples of which will now be provided with reference to the drawings and described in detail. However, the present invention is not limited thereto, and the exemplary embodiments can be construed as including all modifications, equivalents, or substitutes within the technical concept and technical scope of the present invention. Like reference numerals refer to like elements throughout the described drawings.
[0057] The terms 'first', 'second', etc., used in the specification can be used to describe various components, but the components are not to be construed as being limited to these terms. The terms are used only to differentiate one component from other components. For example, the 'first' component may be named the 'second' component without departing from the scope of the present invention, and the 'second' component may also be similarly named the 'first' component. The term 'and/or' includes a combination of a plurality of items or any one of a plurality of items.
[0058] It will be understood that when an element is simply referred to as being 'connected to' or 'coupled to' another element in the present description, it may be 'directly connected to' or 'directly coupled to' the other element, or it may be connected to or coupled to the other element with intervening elements between them. In contrast, it should be understood that when an element is referred to as being 'directly coupled' or 'directly connected' to another element, there are no intervening elements present.
[0059] The terms used in the present specification are merely used to describe particular embodiments, and are not intended to limit the present invention. An expression used in the singular encompasses the expression of the plural, unless it has a clearly different meaning in the context. In the present specification, it is to be understood that terms such as "including", "having", etc., are intended to indicate the existence of the features, numbers, steps, actions, elements, parts, or combinations thereof disclosed in the specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, elements, parts, or combinations thereof may exist or may be added.
[0060] In the following, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following, the same constituent elements in the drawings are indicated by the same reference numbers, and a repeated description of the same elements will be omitted.
[0061] Figure 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention.
[0062] Referring to Figure 1, the device 100 for encoding a video may include: a picture partitioning module 110, prediction modules 120 and 125, a transform module 130, a quantization module 135, a reorganization module 160, an entropy coding module 165, an inverse quantization module 140, an inverse transform module 145, a filter module 150, and a memory 155. Additionally, the reference numerals in Figure 1 denote the following: 101 - current picture; 102 - prediction target block; 103 - residual block; 104 - reconstructed residual block; 105 - NAL.
[0063] The constitutional parts shown in Figure 1 are shown independently so as to represent characteristic functions different from one another in the device for encoding a video. This does not mean that each constitutional part is constituted as a separate hardware or software unit. In other words, each constitutional part is listed as a constitutional part for convenience. At least two constitutional parts may be combined into a single constitutional part, or one constitutional part may be divided into a plurality of constitutional parts to perform each function. An embodiment where constitutional parts are combined and an embodiment where a constitutional part is divided are also included in the scope of the present invention, provided they do not depart from the essence of the present invention.
[0064] Also, some of the constituents may not be indispensable constituents performing essential functions of the present invention, but may be selective constituents that merely improve performance. The present invention may be implemented by including only the constitutional parts indispensable for implementing the essence of the present invention, excluding the constituents used to improve performance. A structure including only the indispensable constituents, excluding the selective constituents used only to improve performance, is also included in the scope of the present invention.
[0065] The picture partitioning module 110 may partition an input picture into one or more processing units. Here, a processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU). The picture partitioning module 110 may partition a picture into combinations of multiple coding units, prediction units, and transform units, and may encode the picture by selecting one combination of coding units, prediction units, and transform units based on a predetermined criterion (for example, a cost function).
[0066] For example, one picture may be partitioned into multiple coding units. A recursive tree structure, such as a quad tree structure, may be used to partition a picture into coding units. A coding unit that is partitioned into other coding units, with a picture or a largest coding unit as a root, may be partitioned with child nodes corresponding to the number of partitioned coding units. A coding unit that is no longer partitioned due to a predetermined limitation serves as a leaf node. That is, when it is assumed that only square partitioning is possible for a coding unit, one coding unit may be partitioned into at most four other coding units.
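The recursive quad-tree partitioning described above can be sketched as a simple recursion. The splitting decision callback and the minimum size are placeholders for whatever criterion (e.g., a rate-distortion cost) an encoder actually applies:

```python
def quadtree_partition(x, y, size, min_size, should_split):
    """Recursively split a square coding block into four children until
    should_split(x, y, size) returns False or the minimum size is reached.

    Returns the leaf blocks as (x, y, size) tuples.
    """
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]  # leaf node: no further partitioning
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_partition(x + dx, y + dy, half,
                                         min_size, should_split)
    return leaves
```

For instance, splitting a 64x64 root exactly once yields four 32x32 leaves, matching the "at most four children" property of square quad-tree partitioning.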
[0067] Hereinafter, in the embodiment of the present invention, the coding unit can mean a unit that performs coding or a unit that performs decoding.
[0068] A prediction unit may be one of partitions having a square or rectangular shape of the same size within a single coding unit, or may be one of partitions having a different shape and/or size within a single coding unit.
[0069] When a prediction unit subjected to intra prediction is generated based on a coding unit and the coding unit is not the smallest coding unit, intra prediction may be performed without partitioning the coding unit into multiple N×N prediction units.
[0070] The prediction modules 120 and 125 may include an inter prediction module 120 performing inter prediction and an intra prediction module 125 performing intra prediction. Whether to perform inter prediction or intra prediction for a prediction unit may be determined, and detailed information (e.g., an intra prediction mode, a motion vector, a reference picture, etc.) may be determined according to each prediction method. Here, the processing unit subjected to prediction may be different from the processing unit for which the prediction method and its details are determined. For example, the prediction method, the prediction mode, etc., may be determined by the prediction unit, and prediction may be performed by the transform unit. A residual value (residual block) between the generated prediction block and an original block may be input to the transform module 130. Also, prediction mode information, motion vector information, etc., used for prediction may be encoded together with the residual value by the entropy coding module 165 and may be transmitted to a device for decoding a video. When a particular encoding mode is used, it is possible to transmit to the device for decoding a video by encoding the current block as it is, without generating a prediction block through the prediction modules 120 and 125.
[0071] The inter prediction module 120 may predict a prediction unit based on information of at least one of a previous picture or a subsequent picture of the current picture, or, in some cases, may predict a prediction unit based on information of some encoded regions in the current picture. The inter prediction module 120 may include a reference picture interpolation module, a motion prediction module, and a motion compensation module.
[0072] The reference picture interpolation module may receive reference picture information from the memory 155 and may generate pixel information of an integer pixel or less than an integer pixel from the reference picture. In the case of luma pixels, an 8-tap DCT-based interpolation filter having different filter coefficients may be used to generate pixel information of an integer pixel or less than an integer pixel in units of a 1/4 pixel. In the case of chroma signals, a 4-tap DCT-based interpolation filter having different filter coefficients may be used to generate pixel information of an integer pixel or less than an integer pixel in units of a 1/8 pixel.
[0073] The motion prediction module may perform motion prediction based on the reference picture interpolated by the reference picture interpolation module. As methods for calculating a motion vector, various methods may be used, such as a full search-based block matching algorithm (FBMA), a three-step search (TSS), a new three-step search algorithm (NTS), etc. The motion vector may have a motion vector value in units of 1/2 pixel or 1/4 pixel based on interpolated pixels. The motion prediction module may predict a current prediction unit by varying the motion prediction method. As motion prediction methods, various methods may be used, such as a skip method, a merge method, an AMVP (Advanced Motion Vector Prediction) method, an intra block copy method, etc.
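Among the search methods mentioned, the classic three-step search (TSS) is easy to sketch: evaluate the centre and its eight neighbours at a coarse step, move to the best match, halve the step, and repeat. The SAD cost, block size, and frame layout below are illustrative assumptions:

```python
def sad(block, ref, rx, ry):
    """Sum of absolute differences between block and a ref window at (rx, ry)."""
    n = len(block)
    return sum(abs(block[i][j] - ref[ry + i][rx + j])
               for i in range(n) for j in range(n))

def three_step_search(block, ref, cx, cy, step=4):
    """Classic TSS: test the centre and its 8 neighbours at the current step,
    move to the lowest-SAD point, halve the step, and repeat until step < 1."""
    n = len(block)
    best = (cx, cy)
    while step >= 1:
        candidates = [(best[0] + dx * step, best[1] + dy * step)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        # keep only candidates whose window stays inside the reference picture
        candidates = [(x, y) for x, y in candidates
                      if 0 <= x <= len(ref[0]) - n and 0 <= y <= len(ref) - n]
        best = min(candidates, key=lambda p: sad(block, ref, p[0], p[1]))
        step //= 2
    return best  # best-matching window position in the reference picture
```

The returned position, relative to the block's own position, gives the motion vector; real encoders refine this further to the 1/2- and 1/4-pixel positions produced by the interpolation filters described above.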
[0074] The intra prediction module 125 may generate a prediction unit based on reference pixel information neighboring a current block, which is pixel information in the current picture. When a neighboring block of the current prediction unit is a block subjected to inter prediction, and therefore a reference pixel is a pixel subjected to inter prediction, the reference pixel included in the block subjected to inter prediction may be replaced with reference pixel information of a neighboring block subjected to intra prediction. That is, when a reference pixel is not available, at least one reference pixel among the available reference pixels may be used in place of the unavailable reference pixel information.
[0075] Prediction modes in intra prediction may include a directional prediction mode, which uses reference pixel information depending on a prediction direction, and a non-directional prediction mode, which does not use directional information in performing prediction. A mode for predicting luma information may be different from a mode for predicting chroma information, and in order to predict chroma information, intra prediction mode information used to predict the luma information or predicted luma signal information may be utilized.
[0076] In performing intra prediction, when the size of the prediction unit is the same as the size of the transform unit, intra prediction may be performed on the prediction unit based on pixels located at the left, the top left, and the top of the prediction unit. However, in performing intra prediction, when the size of the prediction unit is different from the size of the transform unit, intra prediction may be performed using a reference pixel based on the transform unit. Also, intra prediction using N×N partitioning may be used only for the smallest coding unit.
[0077] In the intra prediction method, a prediction block may be generated after applying an AIS (Adaptive Intra Smoothing) filter to a reference pixel depending on the prediction mode. The type of AIS filter applied to the reference pixel may vary. In order to perform the intra prediction method, an intra prediction mode of the current prediction unit may be predicted from the intra prediction mode of a prediction unit neighboring the current prediction unit. In predicting the prediction mode of the current prediction unit using mode information predicted from the neighboring prediction unit, when the intra prediction mode of the current prediction unit is the same as the intra prediction mode of the neighboring prediction unit, information indicating that the prediction modes of the current prediction unit and the neighboring prediction unit are equal to each other may be transmitted using predetermined flag information. When the prediction mode of the current prediction unit is different from the prediction mode of the neighboring prediction unit, entropy coding may be performed to encode the prediction mode information of the current block.
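The flag-based signalling of the intra prediction mode described above can be sketched as follows. The field names and the single-neighbour simplification are assumptions for illustration (real codecs typically derive a list of most probable modes from several neighbours):

```python
def signal_intra_mode(current_mode, neighbor_mode):
    """Encoder side: when the current mode equals the neighbour's predicted
    mode, send only a one-bit flag; otherwise send the flag plus the mode
    itself (entropy coding of the mode value is elided in this sketch)."""
    if current_mode == neighbor_mode:
        return {"same_mode_flag": 1}
    return {"same_mode_flag": 0, "mode": current_mode}

def parse_intra_mode(fields, neighbor_mode):
    """Decoder side: recover the intra mode from the signalled fields."""
    if fields["same_mode_flag"] == 1:
        return neighbor_mode
    return fields["mode"]
```

Because neighbouring blocks tend to share the same intra mode, the one-bit flag path is taken often, which is exactly the bit saving the flag mechanism is meant to capture.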
[0078] Also, a residual block may be generated that includes information on a residual value, which is a difference between the prediction unit subjected to prediction and the original block of the prediction unit, based on the prediction units generated by the prediction modules 120 and 125. The generated residual block may be input to the transform module 130.
[0079] The transform module 130 may transform the residual block, which includes information on the residual value between the original block and the prediction units generated by the prediction modules 120 and 125, using a transform method such as the discrete cosine transform (DCT), the discrete sine transform (DST), or the KLT. Whether to apply DCT, DST, or KLT to transform the residual block may be determined based on intra prediction mode information of the prediction unit used to generate the residual block.
[0080] The quantization module 135 may quantize values transformed into the frequency domain by the transform module 130. Quantization coefficients may vary depending on the block or the importance of a picture. Values calculated by the quantization module 135 may be provided to the inverse quantization module 140 and the reorganization module 160.
[0081] The reorganization module 160 may rearrange the coefficients of the quantized residual values. The reorganization module 160 may change coefficients in the form of a two-dimensional block into coefficients in the form of a one-dimensional vector through a coefficient scanning method. For example, the reorganization module 160 may scan from a DC coefficient to a coefficient in the high-frequency domain using a zigzag scanning method so as to change the coefficients into the form of a one-dimensional vector. Depending on the size of the transform unit and the intra prediction mode, vertical-direction scanning, where coefficients in the form of a two-dimensional block are scanned in the column direction, or horizontal-direction scanning, where coefficients in the form of a two-dimensional block are scanned in the row direction, may be used instead of zigzag scanning. That is, which scanning method among zigzag scanning, vertical-direction scanning, and horizontal-direction scanning is used may be determined depending on the size of the transform unit and the intra prediction mode. The entropy coding module 165 may perform entropy coding based on the values calculated by the reorganization module 160. Entropy coding may use various coding methods, for example, exponential Golomb coding, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC). The entropy coding module 165 may encode a variety of information, such as residual value coefficient information and block type information of the coding unit, prediction mode information, partition unit information, prediction unit information, transform unit information, motion vector information, reference frame information, block interpolation information, filtering information, etc., from the reorganization module 160 and the prediction modules 120 and 125.
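The zigzag scan described above, from the DC coefficient toward high-frequency coefficients, can be sketched for a square block as follows:

```python
def zigzag_scan(block):
    """Scan a square 2-D coefficient block from the DC coefficient (top left)
    toward high frequencies along anti-diagonals, alternating direction."""
    n = len(block)
    out = []
    for d in range(2 * n - 1):
        # cells on anti-diagonal d satisfy i + j == d
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        if d % 2 == 0:
            cells.reverse()  # even diagonals run bottom-left to top-right
        out.extend(block[i][j] for i, j in cells)
    return out
```

After quantization, most nonzero coefficients cluster near the DC position, so this ordering front-loads the nonzero values and produces long zero runs at the tail, which the entropy coder exploits. Vertical- or horizontal-direction scans would simply read the block column by column or row by row instead.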
[0082] The entropy coding module 165 may entropy encode the coefficients of the coding unit input from the reorganization module 160.
[0083] The inverse quantization module 140 may inversely quantize the values quantized by the quantization module 135, and the inverse transform module 145 may inversely transform the values transformed by the transform module 130. The residual value generated by the inverse quantization module 140 and the inverse transform module 145 may be combined with the prediction unit predicted by a motion estimation module, a motion compensation module, and the intra prediction module of the prediction modules 120 and 125 so that a reconstructed block can be generated.
[0084] The filter module 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).
[0085] The deblocking filter may remove block distortion that occurs due to boundaries between blocks in the reconstructed picture. In order to determine whether to perform deblocking, the pixels included in several rows or columns of the block may be a basis for determining whether to apply the deblocking filter to the current block. When the deblocking filter is applied to the block, a strong filter or a weak filter may be applied depending on the required deblocking filtering strength. Also, in applying the deblocking filter, horizontal-direction filtering and vertical-direction filtering may be processed in parallel.
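A deliberately simplified sketch of the deblocking decision and a weak filter applied to one row of pixels across a vertical block boundary; the thresholds and the filter rule are illustrative assumptions, not those of any particular standard:

```python
def deblock_edge(left, right, beta=20, tc=4):
    """Filter one row of pixels across a vertical block boundary.

    'left' and 'right' hold the pixels on each side of the boundary, nearest
    pixels at left[-1] and right[0].  The step across the boundary is filtered
    only when it is small enough to look like a blocking artifact (below beta)
    rather than a real image edge; the correction is clipped to +/- tc.
    """
    p0, q0 = left[-1], right[0]
    if abs(p0 - q0) >= beta:
        return left, right  # likely a true edge: leave it untouched
    delta = max(-tc, min(tc, (q0 - p0) // 2))
    new_left = left[:-1] + [p0 + delta]
    new_right = [q0 - delta] + right[1:]
    return new_left, new_right
```

This captures the decision structure in the paragraph above: a per-boundary on/off test based on nearby pixel values, followed by a filter whose strength is bounded, with strong edges deliberately left alone.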
[0086] The offset correction module may correct an offset from the original picture in units of a pixel for the picture subjected to deblocking. In order to perform offset correction on a particular picture, it is possible to use a method of applying an offset in consideration of edge information of each pixel, or a method of partitioning the pixels of a picture into a predetermined number of regions, determining a region to be subjected to offset correction, and applying the offset to the determined region.
[0087] Adaptive loop filtering (ALF) may be performed based on a value obtained by comparing the filtered reconstructed picture and the original picture. The pixels included in the picture may be divided into predetermined groups, a filter to be applied to each of the groups may be determined, and filtering may be performed individually for each group. Information on whether to apply ALF and a luma signal may be transmitted for each coding unit (CU). The shape and filter coefficients of a filter for ALF may vary depending on each block. Also, a filter for ALF of the same form (fixed form) may be applied regardless of the characteristics of the application target block.
[0088] The memory 155 may store the reconstructed block or picture calculated through the filter module 150. The stored reconstructed block or picture may be provided to the prediction modules 120 and 125 when performing inter prediction.
[0089] Figure 2 is a block diagram illustrating a device for decoding a video according to an embodiment of the present invention.
[0090] Referring to Figure 2, the device 200 for decoding a video may include: an entropy decoding module 210, a reorganization module 215, an inverse quantization module 220, an inverse transform module 225, prediction modules 230 and 235, a filter module 240, and a memory 245. Additionally, reference numeral 201 in Figure 2 denotes the NAL unit.
[0091] When a video bitstream is input from the device for encoding a video, the input bitstream may be decoded according to a process inverse to that of the device for encoding a video. The entropy decoding module 210 may perform entropy decoding according to a process inverse to the entropy coding performed by the entropy coding module of the device for encoding a video. For example, in correspondence with the methods performed by the device for encoding a video, various methods may be applied, such as exponential Golomb coding, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
[0092] The entropy decoding module 210 can decode information on the intra prediction and inter prediction performed by the device to encode a video.
[0093] The reorganization module 215 can perform reorganization on the bitstream entropy-decoded by the entropy decoding module 210, based on the reorganization method used in the device to encode a video. The reorganization module can reconstruct and reorganize the coefficients in the form of one-dimensional vectors into coefficients in the form of two-dimensional blocks. The reorganization module 215 may receive information related to the coefficient scanning performed in the device to encode a video and may perform reorganization via a method of inversely scanning the coefficients based on the scanning order performed in the device to encode a video.
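The paragraph above describes placing a 1-D coefficient vector back into a 2-D block by inverting the encoder's scan. The patent gives no code; the following is a minimal illustrative sketch assuming an up-right diagonal scan (one common choice; in practice the scan actually used is determined by the encoder).

```python
def diagonal_scan_order(n):
    """Return the (row, col) positions of an n x n block in up-right
    diagonal scan order (diagonals of constant row + col)."""
    order = []
    for d in range(2 * n - 1):
        for row in range(n - 1, -1, -1):
            col = d - row
            if 0 <= col < n:
                order.append((row, col))
    return order

def inverse_scan(coeffs_1d, n):
    """Place a 1-D coefficient list back into an n x n 2-D block,
    inverting the scan performed at the encoder side."""
    block = [[0] * n for _ in range(n)]
    for idx, (row, col) in enumerate(diagonal_scan_order(n)):
        block[row][col] = coeffs_1d[idx]
    return block
```

For a 2x2 block, the diagonal order visits (0,0), (1,0), (0,1), (1,1), so `inverse_scan([1, 2, 3, 4], 2)` yields the block `[[1, 3], [2, 4]]`.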
[0094] The inverse quantization module 220 can perform inverse quantization based on a quantization parameter received from the device to encode a video and the reorganized coefficients of the block.
[0095] The inverse transform module 225 can perform the inverse transform, i.e., inverse DCT, inverse DST and inverse KLT, which is the inverse process of the transform, i.e., DCT, DST and KLT, performed by the transform module on the quantization result in the device to encode a video. The inverse transform can be performed based on the transform unit determined by the device to encode a video. The inverse transform module 225 of the device for decoding a video can selectively perform transform schemes (e.g., DCT, DST and KLT) depending on multiple pieces of information, such as the prediction method, the size of the current block, the prediction direction, etc.
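As a sketch of the selective inverse transform described above (not taken from the patent), the helper below builds an orthonormal DCT-II matrix and applies the 2-D inverse transform; since the matrix is orthonormal, its inverse is its transpose. Only DCT is implemented here; DST or KLT would slot into the same dispatch.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)          # DC row normalization
    return m * np.sqrt(2 / n)

def inverse_transform(coeffs, scheme="DCT"):
    """Apply the 2-D inverse transform selected for this block."""
    n = coeffs.shape[0]
    if scheme == "DCT":
        t = dct2_matrix(n)
        return t.T @ coeffs @ t        # inverse of an orthonormal T is T.T
    raise NotImplementedError(scheme)  # DST / KLT would be handled similarly
```

Applying `inverse_transform` to forward-transformed data (`T @ X @ T.T`) recovers the original block, which is the round-trip property the decoder relies on.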
[0096] The prediction modules 230 and 235 can generate a prediction block based on information on prediction block generation received from the entropy decoding module 210 and previously decoded block or picture information received from the memory 245. As described above, like the operation of the device to encode a video, when performing intra prediction, when the size of the prediction unit is the same as the size of the transform unit, intra prediction can be performed on the prediction unit based on the pixels located on the left, the top left and the top of the prediction unit. When performing intra prediction, when the size of the prediction unit is different from the size of the transform unit, intra prediction can be performed using a reference pixel based on the transform unit. Also, intra prediction using N x N partitioning can be used only for the smallest coding unit.
[0097] The prediction modules 230 and 235 may include a prediction unit determination module, an inter prediction module and an intra prediction module. The prediction unit determination module may receive a variety of information, such as prediction unit information, prediction mode information of an intra prediction method, information on motion prediction of an inter prediction method, etc., from the entropy decoding module 210, can divide a current coding unit into prediction units, and can determine whether inter prediction or intra prediction is performed on the prediction unit. Using information required for inter prediction of the current prediction unit received from the device to encode a video, the inter prediction module 230 can perform inter prediction on the current prediction unit based on information of at least one of a previous picture or a subsequent picture of the current picture that includes the current prediction unit. As an alternative, inter prediction can be performed based on information of some pre-reconstructed regions in the current picture that includes the current prediction unit.
[0098] To perform inter prediction, it can be determined for the coding unit which of a skip mode, a merge mode, an AMVP mode and an intra block copy mode is used as the motion prediction method of the prediction unit included in the coding unit.
[0099] The intra prediction module 235 can generate a prediction block based on pixel information in the current picture. When the prediction unit is a prediction unit subjected to intra prediction, intra prediction can be performed based on intra prediction mode information of the prediction unit received from the device to encode a video. The intra prediction module 235 may include an adaptive intra smoothing (AIS) filter, a reference pixel interpolation module, and a DC filter. The AIS filter performs filtering on the reference pixel of the current block, and whether to apply the filter can be determined depending on the prediction mode of the current prediction unit. AIS filtering can be performed on the reference pixel of the current block using the prediction mode of the prediction unit and the AIS filter information received from the device to encode a video. When the prediction mode of the current block is a mode where AIS filtering is not performed, the AIS filter may not be applied.
[0100] When the prediction mode of the prediction unit is a prediction mode in which intra prediction is performed based on pixel values obtained by interpolating the reference pixel, the reference pixel interpolation module can interpolate the reference pixel to generate a reference pixel in units of a whole pixel or less than a whole pixel. When the prediction mode of the current prediction unit is a prediction mode in which a prediction block is generated without interpolating the reference pixel, the reference pixel may not be interpolated. The DC filter can generate a prediction block through filtering when the prediction mode of the current block is a DC mode.
[0101] The reconstructed block or picture can be provided to the filter module 240. The filter module 240 may include the deblocking filter, the offset correction module and the ALF.
[0102] Information on whether or not the deblocking filter is applied to the corresponding block or picture, and information on which of a strong filter and a weak filter is applied when the deblocking filter is applied, can be received from the device to encode a video. The deblocking filter of the device for decoding a video can receive information about the deblocking filter from the device to encode a video, and can perform deblocking filtering on the corresponding block. The offset correction module can perform offset correction on the reconstructed picture based on the type of offset correction and offset value information applied to the picture when performing encoding.
[0103] The ALF can be applied to the coding unit based on information on whether to apply the ALF, ALF coefficient information, etc., received from the device to encode a video. The ALF information can be provided as being included in a particular parameter set. The memory 245 can store the reconstructed picture or block for use as a reference picture or reference block, and can provide the reconstructed picture to an output module.
[0104] As described above, in the embodiment of the present invention, for convenience of explanation, the coding unit is used as a term representing a unit for coding, but the coding unit can serve as a unit for decoding as well as for coding.
[0105] Figure 3 is a view illustrating an example of hierarchical partitioning of a coding block based on a tree structure according to an embodiment of the present invention. Numerical references in Figure 3 denote the following: 301 - Blocks with depth k; 302 - Blocks with depth k + 1; 303 - Blocks with depth k + 2.
[0106] An input video signal is decoded in predetermined block units. A default unit of this type for decoding the input video signal is a coding block. The coding block can be a unit that performs intra/inter prediction, transform and quantization. The coding block can be a square or non-square block having an arbitrary size in a range of 8x8 to 64x64, or it can be a square or non-square block having a size of 128x128, 256x256 or greater.
[0107] Specifically, the coding block can be partitioned hierarchically based on at least one of a quadruple tree and a binary tree. At this point, quadruple tree-based partitioning can mean that a 2Nx2N coding block is partitioned into four NxN coding blocks, and binary tree-based partitioning can mean that one coding block is partitioned into two coding blocks. Binary tree-based partitioning can be performed symmetrically or asymmetrically. The coding block partitioned based on the binary tree can be a square block or a non-square block, such as a rectangular shape. Binary tree-based partitioning can be performed on a coding block where quadruple tree-based partitioning is no longer performed. Quadruple tree-based partitioning may no longer be performed on the coding block partitioned based on the binary tree.
[0108] To implement adaptive partitioning based on the quadruple tree or binary tree, information indicating quadruple tree-based partitioning, information about the size/depth of the coding block for which quadruple tree-based partitioning is allowed, information indicating binary tree-based partitioning, information about the size/depth of the coding block for which binary tree-based partitioning is allowed, information about the size/depth of the coding block for which binary tree-based partitioning is not allowed, information about whether binary tree-based partitioning is performed in a vertical direction or a horizontal direction, etc., can be used.
[0109] As shown in Figure 3, the first coding block 300 with the partition depth of k can be partitioned into multiple second coding blocks based on the quadruple tree. For example, the second coding blocks 310 to 340 may be square blocks half the width and half the height of the first coding block, and the partition depth of the second coding blocks may be increased to k+1. The second coding block 310 with the partition depth of k+1 can be partitioned into multiple third coding blocks with the partition depth of k+2. Partitioning of the second coding block 310 can be performed by selectively using one of the quadruple tree and the binary tree depending on a partitioning method. At this point, the partitioning method can be determined based on at least one of the information indicating quadruple tree-based partitioning and the information indicating binary tree-based partitioning.
[0110] When the second coding block 310 is partitioned based on the quadruple tree, the second coding block 310 can be partitioned into four third coding blocks 310a having half the width and half the height of the second coding block, and the partition depth of the third coding blocks 310a can be increased to k+2. In contrast, when the second coding block 310 is partitioned based on the binary tree, the second coding block 310 can be partitioned into two third coding blocks. At this point, each of the two third coding blocks can be a non-square block having half the width or half the height of the second coding block, and the partition depth can be increased to k+2. The third coding block can be determined as a non-square block of a horizontal direction or a vertical direction depending on a partitioning direction, and the partitioning direction can be determined based on the information on whether binary tree-based partitioning is performed in a vertical direction or a horizontal direction.
[0111] Meanwhile, the second coding block 310 can be determined as a leaf coding block that is no longer partitioned based on the quadruple tree or the binary tree. In this case, the leaf coding block can be used as a prediction block or a transform block.
[0112] Like the partitioning of the second coding block 310, the third coding block 310a can be determined as a leaf coding block, or it can be further partitioned based on the quadruple tree or the binary tree.
[0113] Meanwhile, the third coding block 310b partitioned based on the binary tree can be further partitioned into coding blocks 310b-2 of a vertical direction or coding blocks 310b-3 of a horizontal direction based on the binary tree, and the partition depth of the relevant coding blocks can be increased to k+3. Alternatively, the third coding block 310b can be determined as a leaf coding block 310b-1 that is no longer partitioned based on the binary tree. In this case, the coding block 310b-1 can be used as a prediction block or a transform block. However, the above partitioning process can be performed in a limited manner based on at least one of the information about the size/depth of the coding block for which quadruple tree-based partitioning is allowed, the information about the size/depth of the coding block for which binary tree-based partitioning is allowed, and the information about the size/depth of the coding block for which binary tree-based partitioning is not allowed.
[0114] Figure 4 is a view illustrating types of predefined intra prediction modes for a device for encoding / decoding a video according to an embodiment of the present invention.
[0115] The device for encoding/decoding a video can perform intra prediction using one of the predefined intra prediction modes. The predefined intra prediction modes for intra prediction can include non-directional prediction modes (e.g., a planar mode (mode 0), a DC mode (mode 1)) and 33 directional prediction modes.
[0116] Alternatively, to improve intra prediction accuracy, a larger number of directional prediction modes than the 33 prediction modes can be used. That is, M extended directional prediction modes can be defined by subdividing the angles of the directional prediction modes (M > 33), and a directional prediction mode having a predetermined angle can be derived using at least one of the 33 predefined directional prediction modes.
[0117] Figure 4 shows an example of extended intra prediction modes, and the extended intra prediction modes can include two non-directional prediction modes and 65 extended directional prediction modes. The same number of intra prediction modes can be used for a luminance component and a chrominance component, or a different number of intra prediction modes can be used for each component. For example, 67 extended intra prediction modes can be used for the luminance component, and 35 intra prediction modes can be used for the chrominance component.
[0118] Alternatively, depending on the chrominance format, a different number of intra prediction modes can be used when performing intra prediction. For example, in the case of the 4: 2: 0 format, 67 intra-prediction modes can be used for the luminance component to perform intra-prediction and 35 intra-prediction modes can be used for the chrominance component. In the case of the 4: 4: 4 format, 67 intra-prediction modes can be used so that both the luminance component and the chrominance component perform intra prediction.
[0119] Alternatively, depending on the size and/or shape of the block, a different number of intra prediction modes can be used to perform intra prediction. That is, depending on the size and/or shape of the PU or CU, 35 intra prediction modes or 67 intra prediction modes can be used to perform intra prediction. For example, when the CU or PU has a size smaller than 64x64 or is asymmetrically partitioned, 35 intra prediction modes can be used to perform intra prediction. When the size of the CU or PU is greater than or equal to 64x64, 67 intra prediction modes can be used to perform intra prediction. 65 directional intra prediction modes can be allowed for Intra_2Nx2N, and only 35 directional intra prediction modes can be allowed for Intra_NxN.
[0120] Figure 5 is a flow chart briefly illustrating an intra prediction method according to an embodiment of the present invention.
[0121] Referring to Figure 5, an intra prediction mode of the current block can be determined in step S500.
[0122] Specifically, the intra prediction mode of the current block can be derived based on a candidate list and an index. At this point, the candidate list contains multiple candidates, and the multiple candidates can be determined based on an intra prediction mode of the neighboring block adjacent to the current block. The neighboring block can include at least one of the blocks located at the top, the bottom, the left, the right and the corner of the current block. The index can specify one of the multiple candidates from the list of candidates. The candidate specified by the index can be set to the intra prediction mode of the current block.
[0123] An intra prediction mode used for intra prediction of the neighboring block can be established as a candidate. Also, an intra prediction mode having similar directionality to that of the intra prediction mode of the neighboring block can be established as a candidate. At this point, the intra prediction mode having similar directionality can be determined by adding or subtracting a predetermined constant value to or from the intra prediction mode of the neighboring block. The predetermined constant value can be an integer, such as one, two or greater.
[0124] The candidate list can additionally include a default mode. The default mode can include at least one of a planar mode, a DC mode, a vertical mode and a horizontal mode. The default mode can be added adaptively considering the maximum number of candidates that can be included in the candidate list of the current block.
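The candidate-list construction described above can be sketched as follows. This is a hedged illustration, not the patent's normative procedure: the maximum of three candidates, the +/-1 offset for "similar directionality", and the mode numbers 0/1/26/10 (planar, DC, vertical, horizontal in a 35-mode scheme) are all illustrative assumptions.

```python
PLANAR, DC, VERTICAL, HORIZONTAL = 0, 1, 26, 10  # assumed 35-mode indices

def build_candidate_list(left_mode, above_mode, max_candidates=3):
    """Build an intra-mode candidate list from neighbor modes, then
    similar-directionality modes (+/-1), then default modes."""
    candidates = []

    def add(mode):
        if mode is not None and mode not in candidates \
                and len(candidates) < max_candidates:
            candidates.append(mode)

    for m in (left_mode, above_mode):          # neighbor modes first
        add(m)
    for m in (left_mode, above_mode):          # similar directionality
        if m is not None and m > DC:           # only for directional modes
            add(m - 1)
            add(m + 1)
    for m in (PLANAR, DC, VERTICAL, HORIZONTAL):  # default modes
        add(m)
    return candidates
```

For example, when both neighbors use mode 10 (horizontal), the list fills up with 10 and its neighbors 9 and 11; when neither neighbor is available, the default modes fill the list.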
[0125] The maximum number of candidates that can be included in the candidate list can be three, four, five, six or more. The maximum number of candidates that can be included in the candidate list can be a fixed value preset in the device to encode/decode a video, or it can be determined variably based on a characteristic of the current block. The characteristic can mean the location/size/shape of the block, the number/type of the intra prediction modes that the block can use, etc. As an alternative, information indicating the maximum number of candidates that can be included in the candidate list can be signaled separately, and the maximum number of candidates that can be included in the candidate list can be determined variably using that information. The information indicating the maximum number of candidates can be signaled in at least one of a sequence level, a picture level, a slice level and a block level. When the extended intra prediction modes and the 35 predefined intra prediction modes are used selectively, the intra prediction modes of the neighboring blocks can be transformed into indices corresponding to the extended intra prediction modes or indices corresponding to the 35 intra prediction modes, whereby candidates can be derived. To transform an index, a predefined table can be used, or a scale change operation based on a predetermined value. At this point, the predefined table can define a mapping relationship between different groups of intra prediction modes (for example, the extended intra prediction modes and the 35 intra prediction modes).
[0126] For example, when the left neighboring block uses the 35 intra prediction modes and the intra prediction mode of the left neighboring block is 10 (a horizontal mode), it can be transformed into an index of 16 corresponding to a horizontal mode in the extended intra prediction modes.
[0127] Alternatively, when the upper neighbor block uses the extended intra prediction modes and the intra prediction mode of the neighbor block has an index of 50 (a vertical mode), it can be transformed into an index of 26 corresponding to a vertical mode in the 35 intra prediction modes.
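One possible scale-change rule for converting between the two mode groups is to double the angular index, which is a minimal sketch rather than the patent's predefined table: it reproduces the 26 ↔ 50 (vertical) example above, but note that the table-based example in the text maps mode 10 to index 16, so the actual predefined table need not coincide with this simple rule.

```python
def to_extended(mode35):
    """Map a 35-mode index to the 67-mode set by doubling the angular
    index (an assumed scale-change rule, not the patent's table)."""
    if mode35 < 2:                  # planar (0) and DC (1) are shared
        return mode35
    return 2 * (mode35 - 2) + 2

def to_basic(mode67):
    """Inverse mapping from the 67-mode set back to the 35-mode set."""
    if mode67 < 2:
        return mode67
    return (mode67 - 2) // 2 + 2
```

Under this rule the extended vertical mode 50 maps back to the basic vertical mode 26, matching the example in the text, and every basic mode round-trips through the extended set.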
[0128] Based on the above-described method of determining the intra-prediction mode, the intra-prediction mode can be derived independently for each of the luminance component and the chrominance component, or the intra-prediction mode of the chrominance component can be derived depending on the mode of intra prediction of the luminance component.
[0129] Specifically, the intra prediction mode of the chrominance component can be determined based on the intra prediction mode of the luminance component as shown in the following table 1.
[0130] [Table 1]
[0135] In Table 1, intra_chroma_pred_mode means information signaled to specify the intra prediction mode of the chrominance component, and IntraPredModeY indicates the intra prediction mode of the luminance component.
[0136] Referring to Figure 5, a reference sample for intra prediction of the current block can be derived in step S510.
[0137] Specifically, a reference sample for intra prediction can be derived based on a neighboring sample of the current block. The neighboring sample may be a reconstructed sample of the neighboring block, and the reconstructed sample may be a sample reconstructed before an in-loop filter is applied or a sample reconstructed after the in-loop filter is applied.
[0138] A neighboring sample reconstructed before the current block can be used as the reference sample, and a neighbor sample filtered based on a predetermined intra filter can be used as the reference sample. The intra-filter may include at least one of the first intra-filter applied to multiple neighboring samples located on the same horizontal line and the second intra-filter applied to multiple neighboring samples located on the same vertical line. Depending on the positions of the neighboring samples, one of the first intra-filter and the second intra-filter can be applied selectively, or both intra-filters can be applied.
[0139] Filtering can be performed adaptively based on at least one of the intra prediction mode of the current block and the size of the transform block for the current block. For example, when the intra prediction mode of the current block is the DC mode, the vertical mode or the horizontal mode, filtering may not be performed. When the size of the transform block is NxM, filtering may not be performed. At this point, N and M can be the same values or different values, or can be values of 4, 8, 16 or more. Alternatively, filtering can be performed selectively based on the result of comparing a predefined threshold and the difference between the intra prediction mode of the current block and the vertical mode (or the horizontal mode). For example, when the difference between the intra prediction mode of the current block and the vertical mode is greater than a threshold, filtering can be performed. The threshold can be defined for each size of the transform block as shown in Table 2.
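The mode-distance test described above can be sketched as follows. The per-size thresholds and the mode indices here are illustrative assumptions standing in for the values of Table 2, which is not reproduced in this text.

```python
# Assumed thresholds per transform block size (stand-ins for Table 2).
THRESHOLD = {8: 7, 16: 1, 32: 0}
VERTICAL, HORIZONTAL, DC, PLANAR = 26, 10, 1, 0   # assumed 35-mode indices

def use_reference_filter(intra_mode, transform_size):
    """Decide whether to filter the reference samples: never for DC,
    planar, vertical or horizontal; otherwise filter when the distance
    to the nearest of vertical/horizontal exceeds the size threshold."""
    if intra_mode in (DC, PLANAR, VERTICAL, HORIZONTAL):
        return False
    dist = min(abs(intra_mode - VERTICAL), abs(intra_mode - HORIZONTAL))
    return dist > THRESHOLD.get(transform_size, 10)
```

With these assumed thresholds, larger blocks filter for almost every directional mode (threshold 0 at 32x32), while smaller blocks filter only for modes far from the vertical and horizontal directions.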
[0140] [Table 2]
[0141]
[0142]
[0143]
[0144]
[0145] The intra filter can be determined as one of multiple intra filter candidates predefined in the device to encode/decode a video. For this purpose, an index specifying an intra filter of the current block among the multiple intra filter candidates can be signaled. Alternatively, the intra filter can be determined based on at least one of the size/shape of the current block, the size/shape of the transform block, information on the filter intensity and variations of the neighboring samples.

Referring to Figure 5, intra prediction can be performed using the intra prediction mode of the current block and the reference sample in step S520.
[0146] That is, the prediction sample of the current block can be obtained using the intra prediction mode determined in step S500 and the reference sample derived in step S510. However, in the case of intra prediction, a boundary sample of the neighboring block may be used, and therefore the quality of the prediction picture may be reduced. Therefore, a correction process can be performed on the prediction sample generated through the prediction process described above, and will be described in detail with reference to Figures 6 to 14. However, the correction process is not limited to being applied only to an intra prediction sample, and it can be applied to an inter prediction sample or to a reconstructed sample.
[0147] Figure 6 is a view illustrating a method of correcting a prediction sample of a current block based on differential information from neighboring samples according to an embodiment of the present invention. The numerical references in Figure 6 denote the following: 601 - neighbor sample, 602 - current sample.
[0148] The prediction sample of the current block can be corrected based on the differential information of multiple neighboring samples for the current block. The correction can be performed on all the prediction samples in the current block, or it can be performed on prediction samples in some predetermined regions. Some regions may be a row/column or multiple rows/columns, or they may be preset regions for correction in the device to encode/decode a video, or they may be determined variably based on at least one of the size/shape of the current block and the intra prediction mode.
[0149] The neighboring samples can belong to the neighboring blocks located at the top, the left and the top left corner of the current block. The number of neighboring samples used for correction can be two, three, four or more. The positions of the neighboring samples can be determined variably depending on the position of the prediction sample which is the correction target in the current block. Alternatively, some of the neighboring samples may have fixed positions independently of the position of the prediction sample which is the correction target, and the remaining neighboring samples may have positions that depend variably on the position of the prediction sample which is the correction target.
[0150] The differential information of the neighbor samples may mean a differential sample between neighboring samples, or may mean a value obtained by scaling the differential sample by a predetermined constant value (eg, one, two, three, etc.). At this point, the predetermined constant value can be determined by considering the position of the prediction sample which is the correction target, the position of the column or row that includes the prediction sample that is the correction target, the position of the sample of prediction within the column or row, etc.
[0151] For example, when the intra prediction mode of the current block is the vertical mode, differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(-1, y) adjacent to the left boundary of the current block can be used to obtain the final prediction sample as shown in Equation 1.
[0152] [Equation 1]
[0153] P'(0, y) = P(0, y) + ((p(-1, y) - p(-1, -1)) >> 1) for y = 0..N-1

For example, when the intra prediction mode of the current block is the horizontal mode, differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(x, -1) adjacent to the upper boundary of the current block can be used to obtain the final prediction sample as shown in Equation 2.
[0154] [Equation 2]
[0155] P'(x, 0) = P(x, 0) + ((p(x, -1) - p(-1, -1)) >> 1) for x = 0..N-1
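As an illustrative sketch (not part of the patent text), the boundary corrections of Equations 1 and 2 can be applied as follows: in the vertical mode the first column is corrected with left-boundary differences, and in the horizontal mode the first row with top-boundary differences. Here `pred` is the N x N first prediction, `left[y]` stands for p(-1, y), `top[x]` for p(x, -1) and `corner` for p(-1, -1).

```python
def correct_vertical(pred, left, corner):
    """Equation 1: correct column 0 using left-boundary differences."""
    for y in range(len(pred)):
        pred[y][0] += (left[y] - corner) >> 1
    return pred

def correct_horizontal(pred, top, corner):
    """Equation 2: correct row 0 using top-boundary differences."""
    for x in range(len(pred)):
        pred[0][x] += (top[x] - corner) >> 1
    return pred
```

For a flat 2x2 prediction of value 10 with left boundary samples 12 and 14 and corner 10, the vertical-mode correction adds the halved differences 1 and 2 to the first column.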
[0156] For example, when the intra prediction mode of the current block is the vertical mode, differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(-1, y) adjacent to the left boundary of the current block can be used to obtain the final prediction sample. At this point, the differential sample can be added to the prediction sample, or the differential sample can be scaled by a predetermined constant value and then added to the prediction sample. The predetermined constant value used for scaling can be determined differently depending on the column and/or row. For example, the prediction sample can be corrected as shown in Equation 3 and Equation 4.
[0157] [Equation 3]
[0158] P'(0, y) = p(0, y) + ((p(-1, y) - p(-1, -1)) >> 1) for y = 0..N-1
[0159] [Equation 4]

P'(1, y) = p(1, y) + ((p(-1, y) - p(-1, -1)) >> 2) for y = 0..N-1
[0160] For example, when the intra prediction mode of the current block is the horizontal mode, differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(x, -1) adjacent to the upper boundary of the current block can be used to obtain the final prediction sample, as described for the case of the vertical mode. For example, the prediction sample can be corrected as shown in Equation 5 and Equation 6.
[0161] [Equation 5]
[0162]
[0163] P'(x, 0) = p(x, 0) + ((p(x, -1) - p(-1, -1)) >> 1) for x = 0..N-1

[Equation 6]
[0164] P'(x, 1) = p(x, 1) + ((p(x, -1) - p(-1, -1)) >> 2) for x = 0..N-1
[0165] Figures 7 and 8 are views illustrating a method of correcting a prediction sample based on a predetermined correction filter according to an embodiment of the present invention.
[0166] The prediction sample can be corrected based on the neighboring sample of the prediction sample which is the correction target and a predetermined correction filter. At this point, the neighboring sample can be specified by an angular line of the directional prediction mode of the current block, or it can be at least one sample located on the same angular line as the prediction sample which is the correction target. Also, the neighboring sample can be a prediction sample in the current block, or it can be a reconstructed sample in a neighboring block reconstructed before the current block. At least one of the number of taps, the intensity and a filter coefficient of the correction filter can be determined based on at least one of the position of the prediction sample which is the correction target, whether or not the prediction sample which is the correction target is located at the boundary of the current block, the intra prediction mode of the current block, the angle of the directional prediction mode, the prediction mode (inter or intra mode) of the neighboring block, and the size/shape of the current block.
[0167] Referring to Figure 7, when the directional prediction mode has an index of 2 or 34, at least one predicted/reconstructed sample located at the bottom left of the prediction sample which is the correction target and the predetermined correction filter can be used to obtain the final prediction sample. At this point, the predicted/reconstructed sample at the bottom left can belong to a previous line of a line that includes the prediction sample which is the correction target. The predicted/reconstructed sample at the bottom left can belong to the same block as the current sample, or to a neighboring block adjacent to the current block. Filtering for the prediction sample can be performed only on the line located at the block boundary, or it can be performed on multiple lines. A correction filter where at least one of the number of filter taps and a filter coefficient is different for each of the lines can be used. For example, a (1/2, 1/2) filter can be used for the first line closest to the block boundary, a (12/16, 4/16) filter can be used for the second line, a (14/16, 2/16) filter can be used for the third line, and a (15/16, 1/16) filter can be used for the fourth line.
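The line-dependent 2-tap filtering above can be sketched as follows. This is an illustrative sketch: the rounding offset (+8 before the shift by 4) is an assumption, and the weights are the sixteenths quoted in the text, with (1/2, 1/2) written as (8/16, 8/16).

```python
# (prediction weight, lower-left neighbor weight), in sixteenths,
# for the first four lines from the block boundary.
WEIGHTS = [(8, 8), (12, 4), (14, 2), (15, 1)]

def correct_sample(pred_sample, lower_left_sample, line_index):
    """Blend a prediction sample with its lower-left predicted or
    reconstructed neighbor using a per-line 2-tap filter."""
    if line_index >= len(WEIGHTS):
        return pred_sample               # lines beyond the fourth: unfiltered
    wp, wn = WEIGHTS[line_index]
    return (wp * pred_sample + wn * lower_left_sample + 8) >> 4  # round, /16
```

On the boundary line the sample and its neighbor are weighted equally, and the neighbor's influence decays toward 1/16 by the fourth line.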
[0168] Alternatively, when the directional prediction mode has an index of 3 to 6 or 30 to 33, filtering can be performed at the block boundary as shown in Figure 8, and a 3-tap correction filter can be used to correct the prediction sample. The filtering can be performed using the sample at the bottom left of the prediction sample which is the correction target, the sample at the bottom of the bottom-left sample, and a 3-tap correction filter that takes as input the prediction sample which is the correction target. The position of the neighboring sample used by the correction filter can be determined differently based on the directional prediction mode. The filter coefficient of the correction filter can be determined differently depending on the directional prediction mode.
[0169] Different correction filters may be applied depending on whether the neighboring block is encoded in the inter mode or the intra mode. When the neighboring block is encoded in the intra mode, a filtering method where more weight is given to the prediction sample can be used, compared to when the neighboring block is encoded in the inter mode. For example, in the case where the intra prediction mode is 34, when the neighboring block is encoded in the inter mode, a (1/2, 1/2) filter can be used, and when the neighboring block is encoded in the intra mode, a (4/16, 12/16) filter can be used.
[0170] The number of lines to be filtered in the current block may vary depending on the size/shape of the current block (for example, the coding block or the prediction block). For example, when the size of the current block is equal to or less than 32x32, filtering may be performed on only one line at the block boundary; otherwise, filtering may be performed on multiple lines including the line at the block boundary.
[0171] Figures 7 and 8 are based on the case where the intra prediction modes are used in Figure 4, but can be applied in an equal / similar manner to the case where the extended intra prediction modes are used.
[0172] When performing intra prediction on a current block based on an intra prediction mode, a generated prediction sample may not reflect the characteristics of an original snapshot since the range of the reference samples being used is limited (e.g., intra prediction is performed using only neighboring samples adjacent to the current block). For example, when there is an edge in a current block or when a new object appears around a boundary of the current block, the difference between a prediction sample and the original snapshot may be large depending on the position of the prediction sample in the current block. In this case, the residual value is relatively large, and therefore the number of bits to be encoded/decoded may increase. In particular, a residual value in a region relatively far from a boundary of the current block may include a large number of high-frequency components, which may result in degradation of the encoding/decoding efficiency.
[0173] To solve the above problems, a method of generating or updating a prediction sample in subblock units can be used. Accordingly, the prediction accuracy can be improved in a region relatively far from a block boundary.
[0174] For convenience of explanation, in the following embodiments, a prediction sample generated based on a directional intra-prediction mode is referred to as a first prediction sample. Also, a prediction sample generated based on a non-directional intra-prediction mode or a prediction sample generated by performing inter-prediction can also be included in a category of the first prediction sample.
[0175] A method of correcting the prediction sample based on displacement will be described in detail with reference to Figure 9.
[0176] Figure 9 is a view illustrating a method of correcting a prediction sample based on offset in accordance with an embodiment of the present invention.
[0177] Referring to Figure 9, for a current block, whether to update a first prediction sample using an offset may be determined in step S900. Whether to update the first prediction sample using the offset may be determined by a flag decoded from a bit stream. For example, a syntax element 'is_sub_block_refinement_flag' indicating whether to update the first prediction sample using the offset may be signaled through a bit stream. When the value of 'is_sub_block_refinement_flag' is 1, the method of updating the first prediction sample using the offset may be used in the current block; when the value is 0, the method may not be used in the current block. However, step S900 is intended to perform the update of the first prediction sample selectively, and is not an essential configuration for achieving the purpose of the present invention, so step S900 may be omitted in some cases.
[0178] When it is determined that the method of updating the first prediction sample using the offset is used, an intra prediction pattern of the current block may be determined in step S910. Through the intra prediction pattern, all or some regions of the current block to which the offset is applied, a partitioning type of the current block, whether to apply the offset to a sub-block included in the current block, a size/sign of the offset assigned to each sub-block, etc., may be determined.
[0180] One of multiple patterns predefined in the device for encoding/decoding a video may be selectively used as the intra prediction pattern of the current block, and for this purpose, an index that specifies the intra prediction pattern of the current block may be signaled from a bit stream. As another example, the intra prediction pattern of the current block may be determined based on a partition mode of a prediction unit or a coding unit of the current block, a block size/shape, whether a directional intra prediction mode is used, an angle of the directional intra prediction mode, etc.
[0183] Whether or not an index indicating the intra prediction pattern of the current block is signaled may be determined by predetermined flag information signaled through a bit stream. For example, when the flag information indicates that the index indicating the intra prediction pattern of the current block is signaled from a bit stream, the intra prediction pattern of the current block may be determined based on an index decoded from the bit stream. At this point, the flag information may be signaled in at least one of an instantaneous level, a cut level and a block level.
[0187] When the flag information indicates that the index indicating the intra prediction pattern of the current block is not signaled from a bit stream, the intra prediction pattern of the current block may be determined based on the partition mode of the prediction unit or the coding unit of the current block, etc. For example, the pattern in which the current block is partitioned into sub-blocks may be the same as the pattern in which the coding block is partitioned into prediction units.
[0190] When the intra prediction pattern of the current block is determined, the offset may be obtained in sub-block units in step S920. The offset may be signaled in units of a cut, a coding unit or a prediction unit. As another example, the offset may be derived from a sample neighboring the current block. The offset may include at least one of offset value information and offset sign information. At this point, the offset value information may be in a range of integers greater than or equal to zero.
[0196] When the offset is determined, a second prediction sample may be obtained for each sub-block in step S930. The second prediction sample may be obtained by applying the offset to the first prediction sample. For example, the second prediction sample may be obtained by adding or subtracting the offset to or from the first prediction sample.
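Step S930 can be sketched as follows. This is an illustrative sketch only, assuming a horizontal two-way split (as with index '0' or '1' in Figure 10) and one (offset, sign) pair per sub-block; the function name and argument layout are assumptions.

```python
def apply_offsets(first_pred, offsets, signs):
    """first_pred: NxN 2D list of first prediction samples.
    offsets/signs: one entry per sub-block (0 = upper, 1 = lower).
    Returns the second prediction samples P(i,j) +/- f per sub-block."""
    n = len(first_pred)
    half = n // 2
    second = [row[:] for row in first_pred]
    for i in range(n):
        sub = 0 if i < half else 1           # which sub-block this row belongs to
        delta = offsets[sub] if signs[sub] >= 0 else -offsets[sub]
        for j in range(len(first_pred[i])):
            second[i][j] += delta
    return second

pred = [[50] * 4 for _ in range(4)]
out = apply_offsets(pred, offsets=[0, 4], signs=[1, 1])
print(out[0][0], out[3][0])  # upper sub-block unchanged (offset 0), lower +4
```

An offset of 0 for the upper sub-block models the 'not set' case described in the following paragraphs.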
[0197] Figures 10 to 14 are views illustrating examples of an intra prediction pattern of a current block according to an embodiment of the present invention. The numerical references in Figures 10 to 14 denote the following: 1001: index 0, 1002: index 1, 1003: index 2, 1004: index 3, 1101: index 0, 1102: index 1, 1103: index 2, 1104: index 3, 1210: Category 0, 1211: index 0, 1212: index 1, 1213: index 2, 1214: index 3, 1220: Category 1, 1221: index 4, 1222: index 5, 1223: index 6, 1224: index 7, 1230: Category 2, 1231: index 8, 1232: index 9, 1233: index 10, 1234: index 11, 1301: index 0, 1302: index 1, 1303: index 2, 1304: index 3, 1305: index 4, 1306: index 5, 1401: index 0, 1402: index 1, 1403: index 2, 1404: index 3, 1405: index 4, 1406: index 5, 1407: index 6, 1408: index 7, 1409: index 8, 1410: index 9, 1411: index 10, 1412: index 11.
[0204] For example, in the example shown in Figure 10, when the index is '0' or '1', the current block may be partitioned into upper and lower sub-blocks. The offset may not be set for the upper sub-block, and the offset 'f' may be set for the lower sub-block. Therefore, the first prediction sample (P(i,j)) may be used as it is in the upper sub-block, and the second prediction sample (P(i,j)+f or P(i,j)-f), generated by adding or subtracting the offset to or from the first prediction sample, may be used in the lower sub-block. In the present invention, 'not set' may mean that the offset is not assigned to the block, or that the offset with the value of '0' is assigned to the block.
[0210] When the index is '2' or '3', the current block is partitioned into left and right sub-blocks. The offset may not be set for the left sub-block, and the offset 'f' may be set for the right sub-block. Therefore, the first prediction sample (P(i,j)) may be used as it is in the left sub-block, and the second prediction sample (P(i,j)+f or P(i,j)-f), generated by adding or subtracting the offset to or from the first prediction sample, may be used in the right sub-block.
[0214] The range of available intra prediction patterns may be limited based on the intra prediction mode of the current block. For example, when the intra prediction mode of the current block is a vertical-direction intra prediction mode or a prediction mode in a direction similar to the vertical-direction intra prediction mode (for example, among the 33 directional prediction modes, when the intra prediction mode has an index of 22 to 30), only an intra prediction pattern that partitions the current block in the horizontal direction (for example, index 0 or index 1 in Figure 10) may be applied to the current block.
[0216] As another example, when the intra prediction mode of the current block is a horizontal-direction intra prediction mode or a prediction mode in a direction similar to the horizontal-direction intra prediction mode (for example, among the 33 directional prediction modes, when the intra prediction mode has an index of 6 to 14), only an intra prediction pattern that partitions the current block in the vertical direction (for example, index 2 or index 3 in Figure 10) may be applied to the current block.
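The mode-dependent restriction described in the two paragraphs above can be sketched as a small lookup. The index ranges (22 to 30 near-vertical, 6 to 14 near-horizontal) come from the text; the fallback for the remaining modes and the function name are assumptions.

```python
def allowed_patterns(intra_mode):
    """Restrict the available intra prediction patterns by directional mode."""
    if 22 <= intra_mode <= 30:   # vertical or near-vertical direction
        return [0, 1]            # horizontal-direction partitions only
    if 6 <= intra_mode <= 14:    # horizontal or near-horizontal direction
        return [2, 3]            # vertical-direction partitions only
    return [0, 1, 2, 3]          # otherwise unrestricted (assumption)

print(allowed_patterns(26))  # near-vertical mode -> [0, 1]
print(allowed_patterns(10))  # near-horizontal mode -> [2, 3]
```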
[0217] In Figure 10, the offset is not set for one of the subblocks included in the current block, and is set for the other. Whether to set the offset for the sub-block can be determined based on information signaled for each sub-block.
[0219] Whether to set the offset for the sub-block may be determined based on a sub-block position, an index identifying the sub-block in the current block, etc. For example, based on a predetermined boundary of the current block, the offset may not be set for the sub-block adjacent to the predetermined boundary, and the offset may be set for the sub-block that is not adjacent to the predetermined boundary.
[0219] When it is assumed that the default limit is the upper limit of the current block, under the intra prediction pattern corresponding to the '0' or '1' index, the offset may not be established for the sub-block that is adjacent to the upper limit of the current block , and the offset can be set for the sub-block that is not adjacent to the upper limit of the current block.
[0220] When it is assumed that the default limit is the left limit of the current block, under the intra prediction pattern corresponding to the index '2' or '3', the offset may not be established for the sub-block that is adjacent to the left limit of the current block , and the offset can be set for the sub-block that is not adjacent to the left limit of the current block.
[0221] In Figure 10, it is assumed that the offset is not established for one of the subblocks included in the current block and that the offset is established for another. As another example, different offset values can be set for the subblocks included in the current block. An example of where different offsets are established for each sub-block will be described with reference to Figure 11.
[0222] Referring to Figure 11, when the index is '0' or '1', the offset 'h' may be set for the upper sub-block of the current block, and the offset 'f' may be set for the lower sub-block of the current block. Therefore, the second prediction sample (P(i,j)+h or P(i,j)-h) may be generated by adding or subtracting the offset 'h' to or from the first prediction sample in the upper sub-block, and the second prediction sample (P(i,j)+f or P(i,j)-f) may be generated by adding or subtracting the offset 'f' to or from the first prediction sample in the lower sub-block.
[0223] Referring to Figure 11, when the index is '2' or '3', the offset 'h' may be set for the left sub-block of the current block, and the offset 'f' may be set for the right sub-block of the current block. Therefore, the second prediction sample (P(i,j)+h or P(i,j)-h) may be generated by adding or subtracting the offset 'h' to or from the first prediction sample in the left sub-block, and the second prediction sample (P(i,j)+f or P(i,j)-f) may be generated by adding or subtracting the offset 'f' to or from the first prediction sample in the right sub-block.
[0224] In Figures 10 and 11, the current block is partitioned into two sub-blocks having the same size, but the number of sub-blocks and/or the size of the sub-blocks included in the current block is not limited to the examples shown in Figures 10 and 11. The number of sub-blocks included in the current block may be three or more, and the sub-blocks may have different sizes.
[0225] When multiple intra-prediction patterns are available, the available intra-prediction patterns can be grouped into multiple categories. In this case, the intra-prediction pattern of the current block can be selected based on a first index to identify a category and a second index that identifies an intra-prediction pattern in the category.
[0226] An example where the pattern of intra prediction of the current block is determined based on the first index and the second index will be described with reference to Figure 12.
[0227] In the example shown in Figure 12, the 12 intra prediction patterns can be classified into three categories, each including four intra prediction patterns. For example, intra prediction patterns corresponding to indices 0 to 3 can be classified as category 0, intra prediction patterns corresponding to indices 4 to 7 can be classified as category 1, and intra prediction patterns corresponding to indices 8 to 11 can be classified as category 2.
[0228] The device for decoding a video can decode the first index from a bitstream to specify the category that includes at least one pattern of intra prediction. In the example shown in Figure 12, the first index can specify one of the categories 0, 1, and 2.
[0229] When the category is specified based on the first index, the intra-prediction pattern of the current block can be determined based on the second decoded index from a bit stream. When category 1 is specified by the first index, the second index can specify one of the four intra prediction patterns (that is, from index 4 to index 7) of category 1.
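The two-level signalling above can be sketched as follows. The grouping of four patterns per category follows the Figure 12 example; the function name and the dictionary layout are assumptions.

```python
# Categories as in the Figure 12 example: 12 patterns, 4 per category.
CATEGORIES = {0: list(range(0, 4)), 1: list(range(4, 8)), 2: list(range(8, 12))}

def select_pattern(first_idx, second_idx):
    """first_idx selects the category; second_idx selects the pattern in it."""
    return CATEGORIES[first_idx][second_idx]

print(select_pattern(1, 3))  # category 1, fourth pattern -> pattern index 7
```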
[0230] In Figure 12, it is shown that the categories include the same numbers of intra prediction patterns. But there is no need for categories to include the same numbers of intra prediction patterns.
[0231] The number of available intra prediction patterns or the number of categories may be determined in units of a sequence or a cut. Also, at least one of the number of available intra prediction patterns and the number of categories may be signaled through a sequence header or a cut header.
[0232] As another example, the number of available intra prediction patterns and/or the number of categories may be determined based on the size of a prediction unit or a coding unit of the current block. For example, when the size of the current block (for example, the coding unit of the current block) is greater than or equal to 64x64, the intra prediction pattern of the current block may be selected from the five intra prediction patterns shown in Figure 13. In contrast, when the size of the current block (for example, the coding unit of the current block) is less than 64x64, the intra prediction pattern of the current block may be selected from the intra prediction patterns shown in Figure 10, 11 or 12.
[0233] In Figures 10 to 13, it is shown that the sub-blocks included in each intra prediction pattern have a rectangular shape. As another example, an intra prediction pattern may be used where at least one of the sizes and shapes of the sub-blocks differ from each other. For example, Figure 14 is a view illustrating an example of an intra prediction pattern with sub-blocks of different sizes and shapes.
[0234] The offset for each sub-block (for example, the offset h, f, g, or i of each sub-block shown in Figures 10 to 14) may be decoded from a bit stream, or may be derived from the neighboring sample adjacent to the current block.
[0235] As another example, the offset of the sub-block may be determined by considering the distance from a sample at a particular position in the current block. For example, the offset may be determined in proportion to a value representing the distance between a sample at a predetermined position in the current block and a sample at a predetermined position in the sub-block. As another example, the offset of the sub-block may be determined by adding to, or subtracting from, a preset value a value determined based on the distance between a sample at a predetermined position in the current block and a sample at a predetermined position in the sub-block. As another example, the offset may be determined based on a ratio between a value representing the size of the current block and a value representing the distance between a sample at a predetermined position in the current block and a sample at a predetermined position in the sub-block.
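One of the distance-based alternatives above (an offset growing with the distance, normalized by the block size) can be sketched as follows. The exact scaling rule, the L1 distance, and the choice of the top-left sample as the reference position are assumptions of this sketch.

```python
def derive_offset(block_size, sub_pos, scale=1):
    """sub_pos: (x, y) of a representative sample of the sub-block.
    The offset grows with the distance from the top-left sample (0, 0)
    of the current block, normalized by the block size."""
    x, y = sub_pos
    dist = x + y                      # simple L1 distance to (0, 0) (assumption)
    return (scale * dist) // block_size

print(derive_offset(16, (0, 0)))   # sub-block at the corner: offset 0
print(derive_offset(16, (8, 12)))  # farther sub-block: offset 1
```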
[0236] At this point, the sample at the predetermined position in the current block may include a sample adjacent to the left boundary of the current block, a sample located at the upper boundary of the current block, a sample adjacent to the upper left corner of the current block, etc.
[0237] Figure 15 is a view illustrating a method of performing prediction using an intra block copy scheme according to an embodiment of the present invention. In Figure 15, the blocks with gray color denote reconstructed blocks that can be referred to when the intra block copy is applied to the current block. Additionally, the numerical references in Figure 15 denote the following: 1501: current block, 1502: motion vector, 1503: predictor.
[0238] Intra block copy (IBC) is a method in which a current block is predicted/reconstructed using an already reconstructed block (hereinafter referred to as 'a reference block') in the same snapshot as the current block. If a snapshot contains a large number of characters, such as Korean characters, alphabetic characters, etc., and the characters contained in the current block are also contained in an already decoded block when the current block is reconstructed, intra block copy can improve encoding/decoding performance.
[0239] An intra block copy method may be classified as an intra prediction method or an inter prediction method. When the intra block copy method is classified as the intra prediction method, an intra prediction mode may be defined for the intra block copy method. When the intra block copy method is classified as the inter prediction method, a bit stream may include a flag indicating whether to apply the intra block copy method to the current block. As an alternative, whether the current block uses intra block copy may be confirmed through a reference snapshot index of the current block. That is, when the reference snapshot index of the current block indicates the current snapshot, inter prediction may be performed on the current block using intra block copy. For this purpose, the pre-reconstructed current snapshot may be added to a list of reference snapshots for the current block. The current snapshot may exist at a fixed position in the list of reference snapshots (for example, the position with the reference snapshot index of 0 or the last position). As an alternative, the current snapshot may have a variable position in the list of reference snapshots, and for this purpose, a reference snapshot index indicating the position of the current snapshot may be signaled separately.
[0240] To specify the reference block of the current block, a position difference between the current block and the reference block can be defined as a motion vector (hereinafter referred to as a block vector).
[0241] The block vector can be derived by a sum of a prediction block vector and a differential block vector. The device for encoding a video can generate a prediction block vector through predictive coding, and can encode the differential block vector which indicates the difference between the block vector and the prediction block vector. In this case, the device for decoding a video can derive the block vector from the current block using the derived prediction block vector using pre-decoded information and the differential block vector decoded from a bit stream.
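Block-vector reconstruction at the decoder, as described above, can be sketched as follows. The 2D-tuple representation and the function names are assumptions; in a real codec, the prediction block vector would be derived from pre-decoded information and the differential block vector parsed from the bit stream.

```python
def reconstruct_block_vector(pred_bv, diff_bv):
    """Block vector = prediction block vector + differential block vector."""
    return (pred_bv[0] + diff_bv[0], pred_bv[1] + diff_bv[1])

def reference_position(cur_pos, block_vector):
    """The reference block lies at the current position displaced by the vector."""
    return (cur_pos[0] + block_vector[0], cur_pos[1] + block_vector[1])

# Hypothetical values: predictor from a neighboring block, difference from the stream.
bv = reconstruct_block_vector(pred_bv=(-16, 0), diff_bv=(-4, -8))
print(bv)                                # (-20, -8)
print(reference_position((64, 32), bv))  # reference block at (44, 24)
```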
[0242] At this point, the prediction block vector can be derived based on the block vector of a neighboring block adjacent to the current block, the block vector in an LCU of the current block, the block vector in a row / column of LCU of the current block, etc.
[0243] The device for encoding a video may encode the block vector without performing predictive coding of the block vector. In this case, the device for decoding a video may obtain the block vector by decoding the block vector information signaled through a bit stream. The correction method may be performed on the prediction/reconstructed sample generated through the intra block copy method. In this case, the correction process described with reference to Figures 6 to 14 may be applied in an equal/similar manner, and therefore the detailed description thereof will be omitted. Figure 16 shows a range of reference samples for intra prediction according to an embodiment to which the present invention is applied.
[0244] Referring to Figure 16, intra prediction may be performed using the reference samples P(-1,-1), P(-1,y) (0 <= y <= 2N-1) and P(x,-1) (0 <= x <= 2N-1) located at a boundary of a current block. At this time, filtering is performed selectively on reference samples based on at least one of an intra prediction mode (e.g., index, directionality, angle, etc., of the intra prediction mode) of the current block or the size of a transform block related to the current block.
[0245] At least one of a plurality of filter candidates may be selected to perform filtering on reference samples. At this point, the plurality of intra-filter candidates may differ from each other in at least one of a filter intensity, a filter coefficient or a number of taps (e.g., a number of filter coefficients, a filter length). A plurality of intra-filter candidates may be defined in at least one of a sequence, a snapshot, a cut, or a block level. That is, a sequence, a snapshot, a cut, or a block in which the current block is included may use the same plurality of intra-filter candidates.
[0246] In the following, for convenience of explanation, it is assumed that a plurality of intra-filter candidates includes a first intra-filter and a second intra-filter. It is also assumed that the first intra-filter is a 3-tap filter (1, 2, 1) and the second intra-filter is a 5-tap filter (2, 3, 6, 3, 2).
[0247] When reference samples are filtered by applying the first intra-filter, the filtered reference samples can be derived as shown in Equation 7.
[0248] [Equation 7]

P(-1,-1) = (P(-1,0) + 2*P(-1,-1) + P(0,-1) + 2) >> 2
P(-1,y) = (P(-1,y+1) + 2*P(-1,y) + P(-1,y-1) + 2) >> 2
P(x,-1) = (P(x+1,-1) + 2*P(x,-1) + P(x-1,-1) + 2) >> 2
[0254] When reference samples are filtered by applying the second intra-filter, the filtered reference samples can be derived as shown in the following Equation 8.
[0255] [Equation 8]

P(-1,-1) = (2*P(-2,0) + 3*P(-1,0) + 6*P(-1,-1) + 3*P(0,-1) + 2*P(0,-2) + 8) >> 4
P(-1,y) = (2*P(-1,y+2) + 3*P(-1,y+1) + 6*P(-1,y) + 3*P(-1,y-1) + 2*P(-1,y-2) + 8) >> 4
P(x,-1) = (2*P(x+2,-1) + 3*P(x+1,-1) + 6*P(x,-1) + 3*P(x-1,-1) + 2*P(x-2,-1) + 8) >> 4
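Equations 7 and 8, applied to a column of left reference samples P(-1, y), can be sketched as follows. Leaving the outermost samples unfiltered is a simplification of this sketch (the text handles the corner and top-row samples with their own formulas), and the function names are assumptions.

```python
def filter_3tap(ref):
    """First intra-filter (1, 2, 1), Equation 7, on a 1D reference column."""
    out = ref[:]
    for y in range(1, len(ref) - 1):
        out[y] = (ref[y + 1] + 2 * ref[y] + ref[y - 1] + 2) >> 2
    return out

def filter_5tap(ref):
    """Second intra-filter (2, 3, 6, 3, 2), Equation 8, on a 1D reference column."""
    out = ref[:]
    for y in range(2, len(ref) - 2):
        out[y] = (2 * ref[y + 2] + 3 * ref[y + 1] + 6 * ref[y]
                  + 3 * ref[y - 1] + 2 * ref[y - 2] + 8) >> 4
    return out

ref = [40, 60, 80, 60, 40, 20]
print(filter_3tap(ref))  # the 5-tap filter smooths the peak at index 2 more strongly
print(filter_5tap(ref))
```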
[0261] Based on the position of a reference sample, one of a plurality of intra-filter candidates may be determined and used to perform filtering on the reference sample. For example, a first intra-filter may be applied to reference samples at a boundary of a current block, and a second intra-filter may be applied to the other reference samples. Specifically, as shown in Figure 17, filtering is performed on reference samples 1701 (e.g., P(-1,-1), P(-1,0), P(-1,1), ..., P(-1,N-1) and P(0,-1), P(1,-1), ...) by applying the first intra-filter as shown in Equation 7 (e.g., the filter with coefficients (1, 2, 1)), and filtering is performed on the other reference samples 1702 by applying the second intra-filter as shown in Equation 8 (e.g., the filter with coefficients (2, 3, 6, 3, 2)).
[0262] It is possible to select one of a plurality of intra-filter candidates based on the transform type used for a current block, and to perform filtering on reference samples using the selected one. At this point, the transform type may mean (1) a transform scheme such as DCT, DST or KLT, (2) a transform mode indicator such as a 2D transform, a 1D transform or no transform, or (3) the number of transforms such as a first transform and a second transform. In the following, for convenience of description, it is assumed that the transform type means the transform scheme such as DCT, DST and KLT.
[0263] For example, if a current block is coded using a DCT, filtering can be performed using a first intra-filter, and if a current block is coded using a DST, filtering can be performed using a second intra-filter. Or, if a current block is coded using DCT or DST, filtering can be performed using a first intra-filter, and if the current block is coded using a KLT, filtering can be performed using a second intra-filter.
[0264] Filtering may be performed using a filter selected based on a transform type of a current block and a position of a reference sample. For example, if a current block is coded using a DCT, filtering may be performed on the reference samples P(-1,-1), P(-1,0), P(-1,1), ..., P(-1,N-1) and P(0,-1), P(1,-1), ..., P(N-1,-1) using a first intra-filter, and filtering may be performed on the other reference samples using a second intra-filter. If a current block is coded using a DST, filtering may be performed on the reference samples P(-1,-1), P(-1,0), P(-1,1), ..., P(-1,N-1) and P(0,-1), P(1,-1), ..., P(N-1,-1) using a second intra-filter, and filtering may be performed on the other reference samples using a first intra-filter.
[0265] One of a plurality of intra-filter candidates may be selected based on whether the transform type of a neighboring block that includes a reference sample is the same as the transform type of a current block, and the filtering may be performed using the selected intra-filter candidate. For example, when the current block and the neighboring block use the same transform type, filtering is performed using a first intra-filter, and when the transform types of the current block and the neighboring block are different from each other, the second intra-filter may be used to perform the filtering.
[0266] It is possible to select any one of a plurality of intra-filter candidates based on the transform type of a neighboring block and perform filtering on a reference sample using the selected one. That is, a specific filter may be selected taking into account the transform type of the block in which a reference sample is included. For example, as shown in Figure 18, if a block 1810 adjacent to the left/lower-left part of a current block is a block coded using a DCT, and a block 1820 adjacent to the upper/upper-right part of the current block is a block encoded using a DST, filtering is performed on reference samples 1802 adjacent to the left/lower-left part of the current block by applying a first intra-filter (e.g., the filter with coefficients (1, 2, 1)), and filtering is performed on reference samples 1801 adjacent to the upper/upper-right part of the current block by applying a second intra-filter (e.g., the filter with coefficients (2, 3, 6, 3, 2)).
[0267] In units of a predetermined region, a usable filter may be defined for the corresponding region. Here, the unit of the predetermined region may be any one of a sequence, a snapshot, a cut, a group of blocks (for example, a row of coding tree units) or a block (for example, a coding tree unit). Or, another region that shares one or more filters may be defined. A reference sample may be filtered using a filter mapped to the region in which the current block is included.
[0268] For example, as shown in Figure 19, it is possible to perform filtering on reference samples using different filters in CTU units. In this case, information indicating whether the same filter is used in a sequence or a snapshot, a filter type used for each CTU, and an index that specifies the filter used in the corresponding CTU among the available intra-filter candidates may be signaled using a sequence parameter set (SPS) or a picture parameter set (PPS). In Figure 19, the numerical reference 1910 denotes CTUs to which a first filter with coefficients (1, 2, 1) is applied, and the numerical reference 1920 denotes CTUs to which a second filter with coefficients (2, 3, 6, 3, 2) is applied.
[0269] A coding unit can be divided into at least one or more transform units. At this time, a transform unit may have a square shape or a non-square shape, depending on a partition mode for the transform unit.
[0270] A coding unit can be divided into two or more transform units by at least one of a horizontal line dividing the coding unit up and down or a vertical line dividing the coding unit to the left and to the right. As an example, a quadruple tree division can be used in which a coding unit is divided into four transform units using a horizontal line dividing the coding unit up and down and a vertical line dividing the coding unit to the left and to the right. Or, a binary tree division can be used in which a coding unit is divided into two transform units using either a horizontal line or a vertical line. In addition to the examples described above, a division scheme can also be used that divides a coding unit into a plurality of transform units using a plurality of vertical lines or a plurality of horizontal lines.
[0271] Whether a coding unit is divided into a quadruple tree type can be specified by a syntax element signaled from a bit stream. This can be a 1-bit flag, but is not limited thereto.
[0272] Additionally, whether a coding unit is divided into a binary tree type can be specified by a syntax element signaled from a bit stream. This can be a 1-bit flag, but is not limited thereto. When it is indicated that a coding unit is divided into a binary tree type, information indicating a division direction of the coding unit may be additionally signaled. At this time, the division direction can indicate whether the coding unit is divided by a vertical line or a horizontal line.
[0273] A transform unit generated by dividing a coding unit can be divided again into at least one transform unit. For example, a transform unit generated by dividing a coding unit can be divided into a quadruple tree or binary tree type.
[0274] Figure 20 is a diagram showing an example in which a coding unit is divided into blocks that have a square shape.
[0275] When it is determined that a coding unit is divided into a quadruple tree type, the coding unit can be divided into four transform units. When the coding unit is divided into four transform units, it is possible to determine, for each transform unit, whether to divide it further. As an example, for the four transform units, it can be determined whether to divide each transform unit into a quadruple tree type or a binary tree type.
[0276] For example, in the example shown in Figure 20, the first and second transform units among the four transform units generated by dividing a coding unit 2010 into a quad-tree type are divided again into a quad-tree type. Further, the first and third transform units among the four transform units generated by dividing the second transform unit 2020 are divided again into a quad-tree type. As such, a transform unit can be partitioned recursively. Accordingly, a coding unit may include transform units of different sizes, as shown in Figure 20.
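The recursive quad-tree / binary-tree division described above can be sketched as follows. This is an illustrative sketch only, not the codec's actual partitioning logic: the `decide` callback is a hypothetical stand-in for the signaled split syntax elements.

```python
# Sketch of recursive transform-unit partitioning: a block may be split by a
# quad tree (four sub-blocks) or a binary tree (two sub-blocks, split by a
# horizontal or vertical line), and each resulting block may be split again.
# `decide` is a hypothetical callback standing in for the signaled flags.

def partition(x, y, w, h, decide, out):
    mode = decide(x, y, w, h)  # 'quad', 'bin_hor', 'bin_ver', or None
    if mode == 'quad':
        hw, hh = w // 2, h // 2
        for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
            partition(x + dx, y + dy, hw, hh, decide, out)
    elif mode == 'bin_hor':   # horizontal line -> upper and lower halves
        partition(x, y, w, h // 2, decide, out)
        partition(x, y + h // 2, w, h // 2, decide, out)
    elif mode == 'bin_ver':   # vertical line -> left and right halves
        partition(x, y, w // 2, h, decide, out)
        partition(x + w // 2, y, w // 2, h, decide, out)
    else:                     # leaf: an actual transform unit
        out.append((x, y, w, h))
    return out

# Example: split a 16x16 coding unit by quad tree, then split its first
# sub-block again by quad tree (as in the Figure 20 example).
def decide(x, y, w, h):
    if (x, y, w, h) in ((0, 0, 16, 16), (0, 0, 8, 8)):
        return 'quad'
    return None

units = partition(0, 0, 16, 16, decide, [])  # 7 leaf transform units
```

As in Figure 20, the result is a coding unit containing transform units of different sizes (three 8x8 units and four 4x4 units in this run).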
[0277] Figure 21 is a diagram showing an example in which a coding unit is divided into blocks having a non-square shape.
[0278] Like a prediction unit, a coding unit can be divided so as to have a non-square shape. In one example, a coding unit can be divided into the form Nx2N 2110 or 2NxN 2120. When a coding unit is divided so as to have a non-square shape, it is possible to determine, for each transform unit, whether to further divide it. As an example, for the two transform units, it can be determined whether to divide each transform unit further into a quad-tree or binary-tree type.
[0279] For example, in the example shown in Figure 21(a), the first block among the Nx2N blocks generated by dividing a coding unit so as to have a non-square shape is divided into a quad-tree type. Further, the first transform unit among the four transform units generated by dividing that first block into a quad-tree type is divided into a quad-tree type again.
[0280] Furthermore, in the example shown in Figure 21(b), the first and second blocks among the 2NxN blocks generated by dividing a coding unit are divided again into a binary-tree type of 2NxN. Further, the first transform unit among the two transform units generated by dividing the first block into a binary-tree type of 2NxN is divided into a quad-tree type.
[0281] A partition mode for a transform unit, or whether the transform unit is further divided, can be determined according to at least one of a size of the transform unit, a shape of the transform unit, a prediction mode, or a partition mode for a prediction unit.
[0282] For example, a partition mode for a transform unit can be restricted by a shape of the transform unit. As a specific example, when a transform unit has a shape whose width is longer than its height, a partition mode for the transform unit can be restricted so that a lower-node transform unit generated as a result of dividing the transform unit also has a shape whose width is longer than its height. For example, when an upper-node transform unit has a shape of 2NxN, a lower-node transform unit included in the upper-node transform unit can be restricted to having a shape of 2NxN. Accordingly, a partition mode for the upper-node transform unit can be restricted to a quad-tree type.
[0283] Alternatively, when the height of a transform unit is longer than its width, a partition mode for the transform unit may be restricted so that a lower-node transform unit generated as a result of dividing the transform unit has a shape whose height is longer than its width.
[0284] For example, when an upper-node transform unit has a shape of Nx2N, a lower-node transform unit included in the upper-node transform unit can be constrained to have a shape of Nx2N. Accordingly, a partition mode for the upper-node transform unit can be restricted to a quad-tree type.
[0285] As another example, a partition mode for a transform unit can be determined based on a partition mode for a prediction unit. As a specific example, when a prediction unit is divided into a square shape (for example, 2Nx2N), a transform unit can be divided into a square shape only. On the other hand, when a prediction unit is divided into a non-square shape (for example, 2NxN or Nx2N), a transform unit can be divided into a non-square shape only.
[0286] It can be established that, when a transform unit has a non-square shape, the transform unit can be divided into a quad-tree type only and cannot be divided into a binary-tree type. For example, when a transform unit has a shape of Nx2N or 2NxN, the transform unit can be divided into a quad-tree type consisting of four Nx2N blocks or four 2NxN blocks.
[0287] As another example, it can be established that a transform unit cannot be further divided when the transform unit has a non-square shape.
[0288] A transform unit is the base unit of the transform, and an inverse transform can be performed for each transform unit.
[0289] In the following, an example in which an inverse transform is performed on a current block will be described in detail with reference to Figure 22. Here, the current block can represent a transform unit (a transform block), which is the unit in which an inverse transform is performed.
[0290] Figure 22 is a flow diagram illustrating a process of performing an inverse transform for a current block.
[0291] First, a decoder can decode, from a bitstream, information indicating whether an inverse transform is omitted for a current block (S2210).
[0292] The information indicates whether at least one of an inverse transform for a vertical direction or an inverse transform for a horizontal direction is omitted for the current block. Here, the information can be a 1-bit flag (for example, 'transform_skip_flag'), but is not limited thereto.
[0293] If the information indicates that an inverse transform is omitted for a current block, at least one of the vertical-direction inverse transform or the horizontal-direction inverse transform can be omitted for the current block. At this time, which of the vertical-direction inverse transform and the horizontal-direction inverse transform is omitted for the current block is determined adaptively based on a size, a shape, or a prediction mode of the current block.
[0294] For example, if a current block has a non-square shape whose width is greater than its height (for example, when the current block has a shape of 2NxN), the vertical-direction inverse transform can be omitted, but the horizontal-direction inverse transform cannot be omitted. If a current block has a non-square shape whose height is greater than its width (for example, when the current block has a shape of Nx2N), the horizontal-direction inverse transform can be omitted, but the vertical-direction inverse transform cannot be omitted. On the other hand, when a current block has a square shape, both the vertical-direction inverse transform and the horizontal-direction inverse transform can be omitted.
[0295] As another example, if the information indicates that an inverse transform is omitted for a current block, the decoder may additionally decode, from a bitstream, information indicating the direction in which the inverse transform is omitted. Here, the information indicating the omitted direction of the inverse transform may indicate a horizontal direction, a vertical direction, or both directions.
[0296] The decoder may omit at least one of the horizontal-direction inverse transform and the vertical-direction inverse transform for a current block based on the information indicating the omitted direction of the inverse transform.
[0297] The information indicating whether an inverse transform is omitted for a current block may include information indicating whether a horizontal-direction inverse transform is omitted for the current block and information indicating whether a vertical-direction inverse transform is omitted for the current block. Each piece of information can be a 1-bit flag (for example, 'hor_transform_skip_flag' indicating whether a horizontal-direction inverse transform is omitted, or 'ver_transform_skip_flag' indicating whether a vertical-direction inverse transform is omitted), but is not limited thereto.
[0298] At this time, the information indicating whether a horizontal-direction inverse transform is omitted and the information indicating whether a vertical-direction inverse transform is omitted can be signaled adaptively according to a size, a shape, or a prediction mode of the current block.
[0299] For example, when a current block has a non-square shape, the decoder can decode, from a bitstream, only one of the information indicating whether a horizontal-direction inverse transform is omitted or the information indicating whether a vertical-direction inverse transform is omitted. As a specific example, when a current block has a non-square shape whose width is greater than its height, the information indicating whether a vertical-direction inverse transform is omitted can be signaled through a bitstream, while the information indicating whether a horizontal-direction inverse transform is omitted may not be signaled through the bitstream. When a current block has a non-square shape whose height is greater than its width, the information indicating whether a horizontal-direction inverse transform is omitted can be signaled through a bitstream, while the information indicating whether a vertical-direction inverse transform is omitted may not be signaled through the bitstream. On the other hand, when a current block has a square shape, the decoder can decode both the information indicating whether a horizontal-direction inverse transform is omitted and the information indicating whether a vertical-direction inverse transform is omitted from a bitstream. If it is determined to perform an inverse transform for at least one of a vertical direction or a horizontal direction for a current block (S2220), the decoder may determine a transform type for the current block (S2230). The decoder can then perform an inverse transform for the current block based on the transform type for the current block (S2240).
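The adaptive signaling rule above can be sketched as follows. The flag names are the 'hor_transform_skip_flag' / 'ver_transform_skip_flag' examples from the text; the `read_flag` callback is a hypothetical stand-in for bitstream parsing.

```python
# Sketch of the adaptive transform-skip signaling: for a non-square block,
# only one of the per-direction skip flags is parsed from the bitstream and
# the other direction is inferred as "not skipped". `read_flag` stands in
# for the actual bitstream parsing (hypothetical helper).

def transform_skip_directions(width, height, read_flag):
    """Return (skip_horizontal, skip_vertical) for the inverse transform."""
    if width > height:    # e.g. 2NxN: only the vertical skip flag is sent
        return (False, read_flag('ver_transform_skip_flag'))
    if height > width:    # e.g. Nx2N: only the horizontal skip flag is sent
        return (read_flag('hor_transform_skip_flag'), False)
    # Square block: both flags are sent.
    return (read_flag('hor_transform_skip_flag'),
            read_flag('ver_transform_skip_flag'))

# Example with a stub "bitstream" that answers True for every flag:
skip_h, skip_v = transform_skip_directions(8, 16, lambda name: True)
```

For an 8x16 (Nx2N-shaped) block with the stub above, only the horizontal direction ends up skipped, matching paragraph [0294].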
[0300] The transform type includes transform schemes such as DCT, DST, or KLT. Here, the DCT comprises at least one of DCT-II or DCT-VIII, and the DST comprises at least one of DST-I or DST-VII.
[0301] A transform type of a current block can be determined based on a prediction mode of the current block and a size of the current block. For example, if a current block is a 4x4 block encoded in an intra mode, an inverse transform is performed using DST-VII, and if the current block does not satisfy these conditions, an inverse transform can be performed using DCT-II.
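The default selection rule of paragraph [0301] can be expressed as a small helper (an illustrative sketch; the function name is not from the original):

```python
# Default transform-type rule: a 4x4 intra-coded block uses DST-VII,
# anything else falls back to DCT-II.

def default_transform_type(width, height, is_intra):
    if is_intra and width == 4 and height == 4:
        return 'DST-VII'
    return 'DCT-II'
```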
[0302] As an example, a DST-VII matrix for a 4x4 block can be expressed as A4 below.
[0303]
[0304]
[0305]
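The matrix itself appears to have been lost in extraction. For reference, the 4x4 DST-VII matrix standardized in HEVC, which this passage presumably reproduces, is:

```latex
A_4 = \begin{pmatrix}
 29 &  55 &  74 &  84 \\
 74 &  74 &   0 & -74 \\
 84 & -29 & -74 &  55 \\
 55 & -84 &  74 & -29
\end{pmatrix}
```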
[0306] An inverse transform using DST-VII can be performed using the inverse DST-VII matrix A4^-1. The DCT-II matrix for an 8x8 block can be expressed as T8 below.
[0307]
[0308]
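This matrix also appears to have been lost in extraction. For reference, the 8x8 DCT-II matrix standardized in HEVC, which this passage presumably reproduces, is shown below; since its rows are mutually orthogonal, its inverse is its transpose up to a scaling factor.

```latex
T_8 = \begin{pmatrix}
 64 &  64 &  64 &  64 &  64 &  64 &  64 &  64 \\
 89 &  75 &  50 &  18 & -18 & -50 & -75 & -89 \\
 83 &  36 & -36 & -83 & -83 & -36 &  36 &  83 \\
 75 & -18 & -89 & -50 &  50 &  89 &  18 & -75 \\
 64 & -64 & -64 &  64 &  64 & -64 & -64 &  64 \\
 50 & -89 &  18 &  75 & -75 & -18 &  89 & -50 \\
 36 & -83 &  83 & -36 & -36 &  83 & -83 &  36 \\
 18 & -50 &  75 & -89 &  89 & -75 &  50 & -18
\end{pmatrix}
```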
[0309] An inverse transform using DCT-II can be performed using the inverse DCT-II matrix T8^T. As another example, the decoder can determine a transform set for a current block and determine a transform type for the current block based on the determined transform set. Here, the transform set can be obtained in units of a transform unit, a coding unit, or a coding tree unit. As another example, the transform set can be obtained for a transform unit or a coding unit whose size or depth is greater than or equal to a predetermined value.
[0310] For convenience of explanation, in the embodiments described below, it is assumed that a transform set is obtained for a coding tree unit including a plurality of coding units or for a coding unit including a plurality of transform units. A unit for which the transform set is obtained will be referred to as a 'base unit'.
[0311] First, when a transform set is determined in units of a coding unit or a transform unit, a transform type of each of the plurality of transform units included in the base unit can be determined based on the determined transform set. At this time, the transform set can include a plurality of transform types.
[0312] A transform type of a transform unit included in a base unit can be determined to be at least one of a plurality of transform types included in a transform set. For example, the transform type of a transform unit can be determined adaptively according to a shape of the transform unit, a size of the transform unit, or a prediction mode. As a specific example, when it is assumed that a transform set includes two transform types, the transform type of a transform unit can be determined to be the first of the two transform types when the transform unit to be inverse transformed satisfies a predetermined condition. On the other hand, when a transform unit to be inverse transformed does not satisfy the predetermined condition, its transform type can be determined to be the second of the two transform types.
[0313] Table 3 shows candidates of transform sets.
[0314] [Table 3]
[0315]
[0316]
[0317]
[0318]
[0319] The transform sets can be identified by transform set indices, and a transform set of a current block can be specified by index information indicating a transform set index. The index information related to the transform set of the current block can be decoded from a bitstream.
[0320] In Table 3, 'transform type candidate 0' indicates a transform type used when a transform unit satisfies a predetermined condition, and 'transform type candidate 1' indicates a transform type used when a transform unit does not satisfy the predetermined condition. Here, the predetermined condition can represent whether a transform unit has a square shape, whether a size of a transform unit is less than or equal to a predetermined value, whether a coding unit is coded in an intra prediction mode, and the like.
[0321] For example, the predetermined condition can indicate whether a transform unit is a block equal to or smaller than 4x4 encoded in an intra prediction mode. In this case, transform type candidate 0 is applied to a 4x4 transform unit encoded in an intra prediction mode that satisfies the predetermined condition, and transform type candidate 1 is applied to the other transform units that do not satisfy the predetermined condition. For example, when transform set 0 is selected as the transform set of a current block, if the current block is a block equal to or smaller than 4x4 encoded in an intra prediction mode, DST-VII is determined as the transform type. And if the current block is not a block equal to or smaller than 4x4 encoded in an intra prediction mode, DCT-II can be applied.
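Candidate selection within a transform set can be sketched as follows. Table 3 itself did not survive extraction, so the candidate lists below are only partially grounded: set 0 (DST-VII / DCT-II) and set 2's second candidate (DCT-VIII) are stated in the text, while the rest of the table is unknown. The `threshold` parameter models the set-dependent size condition of paragraph [0322].

```python
# Sketch of per-unit candidate selection within a transform set: candidate 0
# applies when the unit meets the set's condition (intra-coded and at most
# threshold x threshold), candidate 1 otherwise. Only set 0 and set 2's
# fallback candidate are confirmed by the text; treat the dict as illustrative.

TRANSFORM_SETS = {
    0: ('DST-VII', 'DCT-II'),
    2: ('DST-VII', 'DCT-VIII'),
}

def transform_type(set_index, width, height, is_intra, threshold=4):
    cand0, cand1 = TRANSFORM_SETS[set_index]
    if is_intra and width <= threshold and height <= threshold:
        return cand0
    return cand1
```

With set 2 the condition widens to 8x8 (threshold=8), matching paragraph [0322].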
[0322] The predetermined condition can be set in a variable manner according to the transform set. For example, when transform set 0 or transform set 1 is selected as the transform set for a current block, one of transform type candidate 0 or transform type candidate 1 can be determined as the transform type of the current block according to whether the current block is a block less than or equal to 4x4 encoded in the intra prediction mode. On the other hand, when transform set 2 is selected as the transform set for a current block, one of transform type candidate 0 or transform type candidate 1 can be determined as the transform type of the current block according to whether the current block is a block less than or equal to 8x8 encoded in the intra prediction mode.
[0323] Therefore, when the transform set for a base unit is determined to be transform set 0, DST-VII can be applied to a 4x4 transform unit encoded in an intra prediction mode included in the base unit, and DCT-II can be applied to the other transform units. On the other hand, when the transform set for a base unit is determined to be transform set 2, DST-VII can be applied to a 4x4 transform unit encoded in an intra prediction mode or to an 8x8 transform unit encoded in an intra prediction mode included in the base unit, and DCT-VIII can be applied to the other transform units.
[0324] The predetermined condition can be determined adaptively according to a size, a shape, or a prediction mode of the base unit. For example, when a size of a base unit (for example, a coding unit) is less than or equal to 32x32, the predetermined condition can indicate whether a transform unit is a block less than or equal to 4x4 encoded in an intra prediction mode. On the other hand, when a size of a base unit (for example, a coding unit) is larger than 32x32, the predetermined condition can indicate whether a transform unit is a block less than or equal to 8x8 encoded in an intra prediction mode.
[0325] Therefore, when a size of a base unit is less than or equal to 32x32, transform type candidate 0 can be applied to a 4x4 transform unit encoded in an intra prediction mode included in the base unit, and transform type candidate 1 can be applied to the other transform units. On the other hand, when a size of a base unit is larger than 32x32, transform type candidate 0 can be applied to a 4x4 transform unit encoded in an intra prediction mode or to an 8x8 transform unit encoded in an intra prediction mode included in the base unit, and transform type candidate 1 can be applied to the other transform units.
[0326] If it is determined to perform both a horizontal-direction inverse transform and a vertical-direction inverse transform for a current block, a transform set for the horizontal direction of the current block and a transform set for the vertical direction of the current block can be determined separately. For example, a horizontal-direction inverse transform of a current block is performed using transform set 0, and a vertical-direction inverse transform of the current block is performed using transform set 1. Whether to use different transform sets for the horizontal direction and the vertical direction of the current block can be determined adaptively based on a size, a shape, or a prediction mode of the current block or of a base unit.
[0327] For example, Figure 23 shows an example in which a transform set of a transform unit is determined based on an intra prediction mode of a prediction unit. In Figure 23, the thin dotted line denotes intra prediction modes that use the same transform set for the horizontal transform and the vertical transform, while the thick dotted line denotes intra prediction modes that use different transform sets for the horizontal transform and the vertical transform.
[0328] Whether to use different transform sets for a horizontal direction and a vertical direction can be determined according to an intra prediction mode of a prediction unit corresponding to a current block. For example, when an intra prediction mode of a prediction unit is a vertical direction or a horizontal direction, or when an intra prediction mode of a prediction unit is similar to a vertical direction or similar to a horizontal direction, different transform sets can be used for the horizontal direction and the vertical direction of the current block. If an intra prediction mode of a prediction unit is not a directional mode, or is not the same as or similar to a vertical or horizontal direction, the same transform set can be used for the horizontal direction and the vertical direction of the current block.
[0329] Here, an intra prediction mode similar to a vertical or horizontal direction is an intra prediction mode whose direction is similar to that of the vertical or horizontal intra prediction mode; that is, an intra prediction mode whose difference from the vertical or horizontal intra prediction mode is less than or equal to a threshold value. For example, as in the example shown in Figure 23, if the intra prediction modes include 33 directional intra prediction modes, the intra prediction modes whose difference from the horizontal or vertical intra prediction mode (modes 10 and 26) is equal to or less than ±3 can be determined to be similar to a vertical or horizontal direction.
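The "similar to horizontal/vertical" test above can be written as a one-line check, assuming the 33-directional-mode layout of Figure 23 with horizontal = mode 10 and vertical = mode 26 (the function name is illustrative):

```python
# Test whether an intra prediction mode is the horizontal/vertical mode or
# within a +/- threshold of it (threshold 3 per the Figure 23 example).

HOR_MODE, VER_MODE = 10, 26

def near_hor_or_ver(mode, threshold=3):
    return (abs(mode - HOR_MODE) <= threshold
            or abs(mode - VER_MODE) <= threshold)
```

Modes 7-13 and 23-29 pass the check; a purely diagonal mode such as 18 does not.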
[0330] As described above, a transform set of a base unit can be determined based on index information signaled through a bitstream. At this time, for the base unit, a transform set for the horizontal direction and a transform set for the vertical direction can be determined individually. The transform set for the horizontal direction of the base unit can be specified by first index information, and the transform set for the vertical direction of the base unit can be specified by second index information. A transform type of a transform unit included in the base unit can be determined based on the transform set determined for the base unit.
[0331] As another example, a transform set for a current block can be determined based on at least one of a size of the current block, a shape of the current block, a prediction mode of the current block, or a transform set of a unit decoded before the current block.
[0332] For example, it can be established that units having the same intra prediction mode among a plurality of transform units included in a coding unit or a coding tree unit use the same transform set.
[0333] As a specific example, it is assumed that a first transform unit in a scan order has an intra prediction mode with a vertical direction, and that transform set 2 is used for the horizontal direction and transform set 0 is used for the vertical direction of the first transform unit. In this case, another transform unit having an intra prediction mode with a vertical direction can have the same transform sets as the first transform unit having the intra prediction mode with the vertical direction. Accordingly, that transform unit can use transform set 2 for its horizontal direction and transform set 0 for its vertical direction, like the first transform unit.
[0334] In this case, index information may be signaled only for the first transform unit having the intra prediction mode with the vertical direction, and index information may not be signaled for the other transform units having the intra prediction mode with the vertical direction.
[0335] Furthermore, it can be established that units having similar intra prediction modes among a plurality of transform units included in a coding unit or a coding tree unit use the same transform set. At this time, the intra prediction modes included in a range where a difference from an intra prediction mode in a predetermined direction is less than or equal to a threshold value can be determined to be similar to each other. For example, when the predetermined direction is a horizontal or vertical direction, it can be determined that the intra prediction modes included in a range of ±3 or less from the horizontal or vertical intra prediction mode are mutually similar.
[0336] As a specific example, it is assumed that a first transform unit in a scan order has an intra prediction mode with a vertical direction or close to a vertical direction, and that transform set 2 is used for the horizontal direction and transform set 0 for the vertical direction of the first transform unit. In this case, a transform unit having an intra prediction mode with a vertical direction or close to a vertical direction may have the same transform sets as the first transform unit. When it is assumed that an intra prediction mode whose difference from the vertical direction is less than or equal to 3 is similar to the vertical direction, a transform unit having any intra prediction mode among modes 23 to 29 can use transform set 2 for the horizontal direction and transform set 0 for the vertical direction, like the first transform unit having an intra prediction mode among modes 23 to 29.
[0337] In this case, index information may be signaled only for the first transform unit having the intra prediction mode with the vertical direction or close to the vertical direction, and index information may not be signaled for the other transform units having an intra prediction mode with the vertical direction or close to the vertical direction.
[0338] A transform set for a current block can be determined according to an intra prediction mode of a prediction unit related to the current block. For example, Table 4 illustrates transform set indices for the horizontal direction and the vertical direction according to an intra prediction mode of a current block.
[0339] [Table 4]
[0340]
[0341]
[0342]
[0343]
[0344] As shown in Table 4, a transform set of a current block can be determined according to an intra prediction mode of a prediction unit related to the current block. If a base unit is coded with inter prediction, the process of determining a transform set for a current block can be omitted. That is, if the base unit is coded with inter prediction, the current block can be inverse transformed without using a transform set. In this case, a transform type of the current block can be determined as DCT, DST, or KLT depending on a size and a shape of the current block.
[0345] Alternatively, when a base unit is coded with inter prediction, a transform set for a current block may be determined, and only a part of the plurality of transform type candidates included in the transform set may be used for the inverse transform of the current block. For example, when it is assumed that a transform set of a current block includes transform type candidate 0 and transform type candidate 1 as illustrated in Table 3, a transform type of the current block can be determined to be transform type candidate 0 or transform type candidate 1, regardless of whether the current block satisfies the predetermined condition.
[0346] In Table 3 above, a transform set is illustrated as including two transform type candidates. However, the number of transform type candidates included in a transform set is not limited to two; it can be one, three, four, or more.
[0347] The maximum number of transform type candidates that a transform set can include can be determined based on information signaled from a bitstream. For example, information about the maximum number of candidates in a transform set is signaled via a slice or sequence header, and the maximum number of transform type candidates available in the slice or sequence can be determined by this information.
[0348] Meanwhile, the number of candidates included in a transform set, or the transform type candidates included in a transform set, can be adjusted according to information which indicates whether omitting an inverse transform is allowed in a picture. Here, the information indicating whether an inverse transform omission is allowed in a picture can be a 1-bit flag (for example, 'transform_skip_enabled_flag'), but is not limited thereto. For example, if 'transform_skip_enabled_flag' indicates that an inverse transform omission is allowed in a picture, a transform set can additionally include 'transform skip' as a candidate, as in the example of Table 5. If 'transform_skip_enabled_flag' indicates that an inverse transform omission is not allowed in a picture, a transform set may not include 'transform skip' as a candidate.
[0349] [Table 5]
[0350]
[0351]
[0352]
[0353]
[0354] Since a transform set includes a plurality of transform type candidates, an inverse transform scheme using such a transform set can be referred to as AMT (Adaptive Multiple Transform). Meanwhile, whether AMT is used (i.e., whether a transform type of a current block is determined using a transform set) can be determined selectively according to a size or a depth of the current block or of a base unit. For example, a current block can determine a transform type using a transform set only when a size of a coding unit that includes the current block is less than or equal to a predetermined size. Here, the maximum size of the coding unit for which AMT is allowed may be signaled through a slice header or a sequence header.
[0355] Next, a process for obtaining a quantization parameter for a current block will be described.
[0356] In the coding process, a quantization parameter determines the quality of an image after the transform process. A quantized transform coefficient can be obtained by dividing a transform coefficient obtained through the transform process by a value specified by the quantization parameter.
[0357] In the decoding step, an unquantized transform coefficient is obtained by inverse quantization, which is performed by multiplying a quantized transform coefficient by a value specified by the quantization parameter.
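The quantization / inverse-quantization pair can be illustrated with a simplified scalar model. The step size Qstep = 2^((QP-4)/6) is the idealized HEVC-style value and is an assumption here; real codecs implement it with integer arithmetic and per-QP scaling tables.

```python
# Simplified scalar quantization model: the encoder divides a coefficient by
# the step size implied by the quantization parameter (QP), and the decoder
# recovers an approximation by multiplying back. The idealized step size
# Qstep = 2^((QP-4)/6) is assumed; real codecs use integer scaling tables.

def q_step(qp):
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff, qp):
    return round(coeff / q_step(qp))

def dequantize(level, qp):
    return level * q_step(qp)

# A larger QP means a coarser step and a larger reconstruction error:
level = quantize(100.0, 28)    # q_step(28) = 16.0, so level = 6
recon = dequantize(level, 28)  # 96.0, an approximation of the original 100.0
```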
[0358] Different quantization parameters can be applied to each block or each area within a picture. Here, blocks or areas to which the same quantization parameter applies may be referred to as a 'quantization group' (QG).
[0359] To obtain a quantization parameter of a quantization group, a quantization parameter of a picture or a slice and a quantization parameter difference value (DeltaQP, dQp) can be signaled. The quantization parameter difference value indicates a difference value between the quantization parameter of the quantization group and a quantization parameter prediction value. At this time, when the quantization group is the first group in a picture or in a slice, the quantization parameter of the picture or slice can be set as the quantization parameter prediction value of the quantization group. The size of a quantization group can be determined according to a syntax element indicating the size of the quantization group and the size of a coding tree unit. For example, Table 6 shows the size of a quantization group according to 'diff_cu_qp_delta_depth', which represents the size of the quantization group, and the size of a coding tree unit.
[0360] [Table 6]
[0361]
[0362]
[0363]
[0364]
[0365] At this point, 'diff_cu_qp_delta_depth' represents a difference value between a size of a coding tree unit and a size of a quantization group.
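Assuming, as in HEVC, that 'diff_cu_qp_delta_depth' acts as a depth (log2) offset from the coding tree unit size, the relationship of Table 6 can be sketched as a single shift (an illustrative interpretation, since the table itself did not survive extraction):

```python
# Quantization-group size as the coding tree unit size shifted down by
# diff_cu_qp_delta_depth (interpreting the syntax element as a log2 depth
# offset, as in HEVC).

def quant_group_size(ctu_size, diff_cu_qp_delta_depth):
    return ctu_size >> diff_cu_qp_delta_depth

# e.g. a 64x64 CTU with diff_cu_qp_delta_depth = 2 yields 16x16 groups
```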
[0366] A quantization parameter difference value may be signaled for a coding unit or a transform unit having a non-zero transform coefficient. When a coding tree unit is divided into a plurality of coding units, the quantization parameter difference value can be signaled for each coding unit or transform unit having a non-zero transform coefficient.
[0367] In the coding step, the encoder can derive a quantization parameter difference value based on a value related to the current block to be encoded (for example, an average value). In the following, a method of deriving a quantization parameter difference value based on a value related to a current block will be described in detail with reference to the figures. For convenience of explanation, in the embodiment described below, the value related to the current block is referred to as the average value of the current block.
[0368] Figure 24 is a flow chart illustrating a method of deriving a quantization parameter difference value according to an embodiment of the present invention. In the embodiment described below, a current block may mean a quantization group, a coding tree unit, a coding unit, or a transform unit corresponding to the units in which the quantization parameter is obtained.
[0369] To derive a quantization parameter difference value, an average value of a current block can be calculated (S2410). The average value of the current block can be obtained based on a prediction sample and a transform coefficient.
[0370] Specifically, the average value of the current block can be obtained by adding the average value of the prediction samples in the current block to a value obtained by scaling the DC component (i.e., the DC coefficient) of the transform coefficients. Here, the DC component means the coefficient corresponding to the DC component among the transform coefficients obtained by transforming the residual samples. Equation 9 shows a method of deriving the average value of a current block.
[0371] [Equation 9]
[0372] intL = Mean(Prediction) + scale * DC_coeff
[0373] A scale value for a DC coefficient can be a fixed value or a variable that is determined depending on a size of a current block.
[0374] In Equation 9, Mean (Prediction) represents an average value of prediction samples.
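As a sketch, Equation 9 can be written out as follows; the concrete `scale` value used here is a placeholder assumption (the text only says the scale may be fixed or size-dependent):

```python
def block_average(prediction_samples, dc_coeff, scale=0.25):
    """intL = Mean(Prediction) + scale * DC_coeff (Equation 9).

    'scale' is a hypothetical fixed value here; per the text it could
    instead be a variable determined by the size of the current block.
    """
    mean_pred = sum(prediction_samples) / len(prediction_samples)
    return mean_pred + scale * dc_coeff


# A flat 4x4 block predicted at 100 with a DC coefficient of 32
# gives intL = 100 + 0.25 * 32 = 108.0
```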
[0375] Once the average value of the current block is calculated, a quantization parameter difference value of the current block can be determined based on the average value (S2420). The quantization parameter difference value can be derived by referring to a look-up table (LUT) that defines a relationship between the average value and the quantization parameter difference value.
[0376] Here, the look-up table can be defined so that a small number of bits is used in a dark portion of a picture (i.e., a large quantization parameter is used) and a large number of bits is used in a bright portion of a picture (i.e., a small quantization parameter is used). Accordingly, as the average value of a current block increases, the quantization parameter difference value may tend to decrease.
[0377] For example, Table 7 illustrates a mapping table that defines a quantization parameter difference value according to an average value.
[0378] [Table 7]
[0383] In Table 7, intL indicates the average value, and dQp indicates the quantization parameter difference value.
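Since the entries of Table 7 are not reproduced above, the look-up can only be sketched with hypothetical thresholds; what the sketch preserves is the stated trend that dQp decreases as intL grows:

```python
# Hypothetical (upper bound on intL, dQp) pairs standing in for Table 7;
# the real table values are not given in the text. dQp shrinks as intL
# grows: dark blocks get a larger QP (fewer bits), bright blocks a
# smaller QP (more bits).
TABLE7 = [(64, 3), (128, 1), (192, 0)]
TABLE7_DEFAULT_DQP = -2  # for intL at or above the last threshold


def derive_dqp(int_l, lut=TABLE7, default=TABLE7_DEFAULT_DQP):
    """Map an average value intL to a quantization parameter difference
    value by scanning the look-up table (step S2420)."""
    for upper_bound, dqp in lut:
        if int_l < upper_bound:
            return dqp
    return default
```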
[0384] The look-up table referenced to derive a quantization parameter difference value can be determined according to a size of the current block. For example, the look-up table to be used to derive a quantization parameter difference value can be determined adaptively according to the size of a coding tree unit, a coding unit, a transform unit, or a prediction unit.
[0385] Taking a coding tree unit as an example, a region that includes a coding tree unit of small size is likely to contain more complex texture or more objects than a region that includes a coding tree unit of large size. On the other hand, a region that includes a coding tree unit of large size is likely to be a more homogeneous region comprising simple texture or background. Accordingly, subjective image quality and coding performance can be improved by allocating more bits to a small coding tree unit that is likely to contain complex texture (i.e., by using a small quantization parameter).
[0386] For this purpose, by using different look-up tables depending on the size of a coding tree unit, a coding tree unit of small size can have a small quantization parameter difference value and a coding tree unit of large size can have a large quantization parameter difference value. For example, if the size of a coding tree unit is larger than 32x32, the look-up table of Table 8 below is used, and if the size of a coding tree unit is smaller than 32x32, the look-up table of Table 7 can be used.
[0387] [Table 8]
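The size-based table selection described above can be sketched as follows; the text leaves the exactly-32x32 boundary case open, so mapping it to Table 7 here is an assumption:

```python
def select_lut_by_ctu_size(ctu_size, table7, table8):
    """Coding tree units larger than 32x32 reference Table 8; smaller
    ones reference Table 7. The exactly-32x32 case is unspecified in
    the text and is arbitrarily mapped to Table 7 here."""
    return table8 if ctu_size > 32 else table7
```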
[0394] As another example, the look-up table referenced to derive a quantization parameter difference value can be determined according to a prediction mode of the current block. For example, the look-up table to be used can be determined adaptively depending on whether a coding unit is encoded in an intra-prediction mode, an inter-prediction mode, or an intra block copy mode.
[0395] For example, when a coding unit is encoded in an intra-prediction mode, a quantization parameter difference value can be derived using the look-up table of Table 7; and when a coding unit is encoded in an inter-prediction mode or an intra block copy mode, a quantization parameter difference value can be derived using the look-up table of Table 8.
[0396] As another example, the look-up table referenced to derive a quantization parameter difference value can be determined according to a transform type or a transform set of the current block. For example, when the transform type of a transform unit is DCT, a quantization parameter difference value can be derived using the look-up table of Table 7; and when the transform type of a transform unit is DST, a quantization parameter difference value can be derived using the look-up table of Table 8.
[0397] As another example, the look-up table referenced to derive a quantization parameter difference value can be determined depending on whether a second transform is performed on the current block. For example, when a second transform is applied to a transform unit, a quantization parameter difference value can be derived using the look-up table of Table 7; and when a second transform is not applied to a transform unit, a quantization parameter difference value can be derived using the look-up table of Table 8.
[0398] As another example, the look-up table referenced to derive a quantization parameter difference value can be determined based on a maximum pixel value, a minimum pixel value, or a difference between the maximum and minimum pixel values in the current block. For example, when the maximum pixel value in a coding tree unit is smaller than a specific threshold value, a quantization parameter difference value can be derived using the look-up table of Table 7; and when the maximum pixel value in the coding tree unit is larger than the threshold value, a quantization parameter difference value can be derived using the look-up table of Table 8.
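Paragraphs [0394] to [0398] each give an independent rule for choosing the referenced table. Folding them into a single dispatcher, the priority order among rules, and the pixel threshold of 128 are all illustrative assumptions:

```python
def select_lut(pred_mode=None, transform_type=None,
               second_transform=None, max_pixel=None, threshold=128):
    """Return which table ("Table7" or "Table8") would be referenced
    under each of the alternative selection rules in the text. The
    rules are independent in the description; combining them here and
    the default 'threshold' value are assumptions for illustration."""
    if pred_mode is not None:                       # prediction mode rule
        return "Table7" if pred_mode == "intra" else "Table8"
    if transform_type is not None:                  # transform type rule
        return "Table7" if transform_type == "DCT" else "Table8"
    if second_transform is not None:                # second-transform rule
        return "Table7" if second_transform else "Table8"
    if max_pixel is not None:                       # pixel-range rule
        return "Table7" if max_pixel < threshold else "Table8"
    return "Table7"
```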
[0399] In the examples above, the quantization parameter difference value is determined using an average value of the current block, and the average value is determined from the prediction samples and the DC coefficient of the current block. Alternatively, the quantization parameter difference value can be determined from the prediction samples alone or from the DC coefficient alone. As a further alternative, the average value of the current block can be calculated based on a transform coefficient, a quantized transform coefficient, or the like, instead of the DC coefficient.
[0400] In addition, the above example describes determining the quantization parameter difference value in the encoding process. However, it is also possible to determine the quantization parameter difference value in the decoding process, in the same way as in the encoding process. It is also possible for the encoder to signal to the decoder information that specifies the look-up table used to derive the quantization parameter difference value. In this case, the decoder may derive a quantization parameter using the look-up table specified by the information.
[0401] As another example, the quantization parameter difference value determined in the encoding process can be signaled to the decoder through a bitstream.
[0402] Although the above-described embodiments have been described based on a series of steps or flowcharts, they do not limit the time-series order of the invention, and the steps may be performed simultaneously or in different orders as necessary. In addition, each of the components (e.g., units, modules, etc.) constituting the block diagrams in the above-described embodiments may be implemented by a hardware or software device, or a plurality of components may be combined and implemented by a single hardware or software device. The above-described embodiments may be implemented in the form of program instructions that can be executed through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may include one of, or a combination of, program commands, data files, data structures, and the like. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROM and DVD; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, flash memory, and the like. The hardware device may be configured to operate as one or more software modules to perform the process according to the present invention, and vice versa.
[0403] Industrial applicability
[0404] The present invention can be applied to electronic devices that can encode / decode a video.
Claims (11)
[1]
1. A method of decoding a video, the method comprising:
decoding, from a bitstream, index information specifying a transform type set for a current transform block;
determining, based on the index information, the transform type set for the current transform block from among a plurality of transform type sets;
determining a transform type of the current transform block based on the determined transform type set; and
performing an inverse transform on the current transform block based on the transform type of the current transform block.
[2]
2. The method of claim 1, wherein the transform type of the current transform block is determined to be one of a plurality of transform type candidates included in the determined transform type set.
[3]
3. The method of claim 2, wherein determining one of the transform type candidates as the transform type of the current transform block is based on a size of the current transform block.
[4]
4. The method of claim 1, wherein determining the transform type set for the current transform block comprises:
determining a horizontal transform type set for a horizontal direction of the current transform block; and
determining a vertical transform type set for a vertical direction of the current transform block.
[5]
5. The method of claim 1, wherein a number or a type of the transform type candidates included in the determined transform type set is different from that of another of the plurality of transform type sets.
[6]
6. A method of encoding a video, the method comprising:
determining a transform type set for a current transform block from among a plurality of transform type sets;
determining a transform type of the current transform block based on the determined transform type set;
performing a transform on the current transform block based on the transform type of the current transform block; and
encoding index information specifying the determined transform type set among the plurality of transform type sets.
[7]
7. The method of claim 6, wherein the transform type of the current transform block is determined to be one of the transform type candidates included in the determined transform type set.
[8]
8. The method of claim 7, wherein determining one of the transform type candidates as the transform type of the current transform block is based on a size of the current transform block.
[9]
9. The method of claim 6, wherein determining the transform type set for the current transform block comprises:
determining a horizontal transform type set for a horizontal direction of the current transform block; and
determining a vertical transform type set for a vertical direction of the current transform block.
[10]
10. An apparatus for decoding a video, the apparatus comprising:
a decoding unit configured to decode index information specifying a transform type set of a current transform block; and
an inverse transform unit configured to determine, based on the index information, the transform type set for the current transform block from among a plurality of transform type sets,
to determine a transform type of the current transform block based on the determined transform type set, and
to perform an inverse transform on the current transform block based on the transform type of the current transform block.
[11]
11. An apparatus for encoding a video, the apparatus comprising:
a transform unit configured to determine a transform type set of a current transform block from among a plurality of transform type sets,
to determine a transform type of the current transform block based on the determined transform type set, and
to perform a transform on the current transform block based on the transform type of the current transform block; and
a coding unit configured to encode index information specifying the determined transform type set among the plurality of transform type sets.