Method and device for processing video signal
Patent abstract:
A method for decoding an image according to the present invention may comprise: a step of deriving spatial merge candidates of a current block; a step of creating a merge candidate list for the current block on the basis of the spatial merge candidates; a step of obtaining motion information of the current block on the basis of the merge candidate list; and a step of performing motion compensation on the current block by using the motion information. Here, if the current block does not have a predefined shape or a size equal to or greater than a predefined size, the spatial merge candidates of the current block may be derived based on a block which includes the current block and which has the predefined shape or a size equal to or greater than the predefined size.
Publication number: ES2711474A2
Application number: ES201990018
Filing date: 2017-08-31
Publication date: 2019-05-03
Inventor: Bae Keun Lee
Applicant: KT Corp
IPC main class:
Patent description:
[0001] [0002] [0003] [0004] OBJECT OF THE INVENTION [0005] The present invention relates to a method and an apparatus for processing a video signal. [0006] [0007] BACKGROUND OF THE INVENTION [0008] Recently, demand for high-resolution, high-quality images, such as high-definition (HD) and ultra-high-definition (UHD) images, has increased in various fields of application. However, higher-resolution, higher-quality image data entail a greater amount of data than conventional image data. Therefore, when image data are transmitted over a medium such as a wired or wireless broadband network, or stored on a conventional storage medium, the costs of transmission and storage increase. To solve these problems, which accompany the increase in resolution and quality of image data, high-efficiency image encoding/decoding techniques can be used. [0009] [0010] Image compression technology includes various techniques, such as: an inter prediction technique of predicting a value of pixels included in a current image from a previous or subsequent image of the current image; an intra prediction technique of predicting a value of pixels included in a current image by using pixel information in the current image; an entropy coding technique of assigning a short code to a value with a high frequency of occurrence and a long code to a value with a low frequency of occurrence; etc. Image data can be effectively compressed by using such image compression technology, and can be transmitted or stored. [0011] [0012] Meanwhile, along with the demand for high-resolution images, demand for stereographic image content, which is a new image service, has also increased. A video compression technique for effectively providing stereographic image content with high resolution and ultra-high resolution is under discussion.
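The entropy coding idea mentioned above, assigning a short code to a frequent value and a long code to a rare value, can be illustrated with a minimal Huffman-style sketch. This is illustrative only and does not reproduce any coder described in this document:

```python
import heapq
from collections import Counter

def code_lengths(data):
    """Assign prefix-code lengths so that frequent symbols get shorter codes
    (Huffman construction over symbol frequencies)."""
    freq = Counter(data)
    # Heap entries: (weight, tie_breaker, {symbol: code_length_so_far}).
    heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        merged = {s: n + 1 for s, n in {**a, **b}.items()}  # one level deeper
        tie += 1
        heapq.heappush(heap, (w1 + w2, tie, merged))
    return heap[0][2]

lengths = code_lengths("aaaaaaabbbcc")  # 'a' is the most frequent symbol
```

Here `'a'` occurs most often and receives a 1-bit code, while the rarer `'b'` and `'c'` receive 2-bit codes.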
[0013] [0014] Technical problem [0015] An object of the present invention is to provide a method and an apparatus for efficiently performing a transformation/inverse transformation in the encoding/decoding of a video signal. [0016] [0017] An object of the present invention is to provide a method and an apparatus for adaptively determining a transformation type of a current block from among a plurality of transformation type candidates in the encoding/decoding of a video signal. [0018] [0019] An object of the present invention is to provide a method and an apparatus for separately determining the transformation types of a horizontal transformation and a vertical transformation in the encoding/decoding of a video signal. [0020] [0021] The technical objects to be achieved by the present invention are not limited to the above-mentioned technical problems. Other technical problems not mentioned will be clearly understood by those skilled in the art from the description below. [0022] [0023] DESCRIPTION OF THE INVENTION [0024] A method and an apparatus for decoding a video signal according to the present invention may obtain a transformation coefficient of a current block, perform an inverse quantization of the transformation coefficient, determine a transformation set for the current block, determine one of a plurality of transformation type candidates as the transformation type of the current block, and perform an inverse transformation of the inversely quantized transformation coefficient based on the determined transformation type.
[0025] A method and an apparatus for encoding a video signal according to the present invention may obtain a transformation coefficient of a current block, perform an inverse quantization of the transformation coefficient, determine a transformation set for the current block, determine one of a plurality of transformation type candidates as the transformation type of the current block, and perform an inverse transformation of the inversely quantized transformation coefficient based on the determined transformation type. [0026] In the method and apparatus for encoding/decoding a video signal according to the present invention, the transformation set of the current block may be determined based on index information indicating at least one of a plurality of transformation sets. [0027] [0028] In the method and apparatus for encoding/decoding a video signal according to the present invention, at least one of a type or a number of transformation type candidates may be different for each of the plurality of transformation sets. [0029] [0030] In the method and apparatus for encoding/decoding a video signal according to the present invention, at least one of a type or a number of the transformation type candidates included in the transformation set may be determined differently according to whether or not a transformation skip is allowed. [0031] [0032] In the method and apparatus for encoding/decoding a video signal according to the present invention, the inverse transformation may comprise a horizontal transformation and a vertical transformation, and a transformation set for the horizontal transformation and a transformation set for the vertical transformation may be determined independently.
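As a rough illustration of the selection flow described above, the following sketch picks a transformation type out of a signaled transformation set, independently for the horizontal and vertical directions. The set contents, set indices and candidate indices are hypothetical placeholders, not values taken from this patent:

```python
# Hypothetical transformation sets; the names and groupings are illustrative
# placeholders only, not the sets defined by the patent.
TRANSFORM_SETS = {
    0: ["DST-VII", "DCT-VIII"],
    1: ["DST-VII", "DST-I"],
    2: ["DST-VII", "DCT-V"],
}

def select_transform(set_index, candidate_index):
    """Pick one transformation type among the candidates of the signaled set."""
    candidates = TRANSFORM_SETS[set_index]
    return candidates[candidate_index]

# The horizontal and vertical transformations may come from independently
# determined sets, as the text above describes.
hor = select_transform(set_index=0, candidate_index=1)
ver = select_transform(set_index=1, candidate_index=0)
```

Note that, as stated in the text, the number of candidates per set may differ, so the valid range of `candidate_index` depends on the chosen set.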
[0033] [0034] In the method and apparatus for encoding/decoding a video signal according to the present invention, the transformation set for the horizontal transformation and the transformation set for the vertical transformation may be determined according to an intra prediction mode of the current block. [0035] [0036] In the method and apparatus for encoding/decoding a video signal according to the present invention, the transformation type of the current block may be determined adaptively based on at least one of a size, a shape or a number of samples of the current block. [0037] [0038] The features briefly summarized above are only illustrative aspects of the detailed description of the invention that follows, and do not limit the scope of the invention. [0039] [0040] Advantageous effects [0041] According to the present invention, a transformation/inverse transformation for an encoding/decoding target block can be performed efficiently. [0042] [0043] According to the present invention, a transformation type of a current block can be adaptively determined from among a plurality of transformation type candidates. [0044] [0045] According to the present invention, the transformation types of a horizontal transformation and a vertical transformation can be determined separately. [0046] [0047] The effects obtainable by the present invention are not limited to the above-mentioned effects, and other effects not mentioned can be clearly understood by those skilled in the art from the description below. [0048] [0049] BRIEF DESCRIPTION OF THE FIGURES [0050] FIG. 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention. [0051] FIG. 2 is a block diagram illustrating a device for decoding a video according to an embodiment of the present invention. [0052] [0053] FIG.
3 is a diagram illustrating an example of hierarchically partitioning a coding block based on a tree structure according to an embodiment of the present invention. [0054] [0055] FIG. 4 is a diagram illustrating a partition type in which binary tree-based partitioning is allowed according to an embodiment of the present invention. [0056] [0057] FIG. 5 is a diagram illustrating an example in which only a binary tree-based partition of a predetermined type is allowed according to an embodiment of the present invention. [0058] [0059] FIG. 6 is a diagram for explaining an example in which information related to the allowed number of binary tree partitionings is encoded/decoded, according to an embodiment to which the present invention is applied. [0060] [0061] FIG. 7 is a diagram illustrating a partition mode applicable to a coding block according to an embodiment of the present invention. [0062] [0063] FIG. 8 is a flow chart illustrating processes for obtaining a residual sample according to an embodiment of the present invention. [0064] [0065] FIG. 9 is a diagram illustrating, for 33 intra prediction modes, whether a vertical transformation and a horizontal transformation use the same transformation set. [0066] [0067] DETAILED DESCRIPTION OF THE INVENTION [0068] A variety of modifications may be made to the present invention and there are various embodiments of the present invention, examples of which will now be provided with reference to the drawings and described in detail. However, the present invention is not limited thereto, and the exemplary embodiments can be construed as including all modifications, equivalents, or substitutes within the technical concept and technical scope of the present invention. Similar reference numerals refer to similar elements in the description of the drawings. [0069] [0070] The terms used in the specification, such as "first", "second", etc.,
may be used to describe various components, but the components are not to be construed as limited by these terms. The terms are only used to differentiate one component from other components. For example, the "first" component may be named the "second" component without departing from the scope of the present invention, and the "second" component may similarly be named the "first" component. The term "and/or" includes a combination of a plurality of items or any one of a plurality of terms. [0071] [0072] It will be understood that when an element is referred to simply as being "connected to" or "coupled to" another element, without being "directly connected to" or "directly coupled to" that other element, it may be directly connected or coupled to the other element, or connected or coupled to the other element with another intermediate element between them. On the contrary, it should be understood that when an element is referred to as being "directly coupled" or "directly connected" to another element, no intervening elements are present. [0073] [0074] The terms used in the present specification are merely used to describe particular embodiments and are not intended to limit the present invention. A singular expression encompasses the plural expression, unless it has a clearly different meaning in the context. In the present specification, it is to be understood that terms such as "including", "having", etc. are intended to indicate the existence of the features, numbers, steps, actions, elements, parts or combinations thereof disclosed in the specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, elements, parts or combinations thereof may exist or be added.
[0075] [0076] Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Hereinafter, the same constituent elements in the drawings are denoted by the same reference numerals, and a repeated description of the same elements will be omitted. [0077] [0078] FIG. 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention. [0079] [0080] Referring to FIG. 1, the device 100 for encoding a video may include: an image partitioning module 110, prediction modules 120 and 125, a transformation module 130, a quantization module 135, a rearrangement module 160, an entropy coding module 165, an inverse quantization module 140, an inverse transformation module 145, a filter module 150, and a memory 155. [0081] [0082] The constituent parts shown in FIG. 1 are displayed independently so as to represent characteristic functions different from each other in the device for encoding a video. This does not mean that each constituent part is constituted as a separate hardware or software unit. In other words, the constituent parts are enumerated separately for convenience. At least two constituent parts may be combined into a single constituent part, or one constituent part may be divided into a plurality of constituent parts to perform each function. The embodiment in which constituent parts are combined and the embodiment in which a constituent part is divided are also included in the scope of the present invention, provided they do not depart from the essence of the present invention. [0083] [0084] Also, some of the constituents may not be indispensable constituents performing essential functions of the present invention, but merely selective constituents improving performance.
The present invention may be implemented by including only the indispensable constituent parts for implementing the essence of the present invention, excluding the constituents used merely for improving performance. A structure including only the indispensable constituents, excluding the selective constituents used only for improving performance, is also included in the scope of the present invention. [0085] [0086] The image partitioning module 110 may partition an input image into one or more processing units. Here, the processing unit may be a prediction unit (PU), a transformation unit (TU), or a coding unit (CU). The image partitioning module 110 may partition an image into combinations of multiple coding units, prediction units and transformation units, and may encode the image by selecting one combination of coding units, prediction units and transformation units according to a predetermined criterion (for example, a cost function). [0087] [0088] For example, an image may be partitioned into multiple coding units. A recursive tree structure, such as a quad tree structure, may be used to partition an image into coding units. A coding unit which is partitioned into other coding units, with an image or a largest coding unit as a root, may be partitioned with child nodes corresponding to the number of partitioned coding units. A coding unit which is no longer partitioned due to a predetermined limitation serves as a leaf node. That is, when it is assumed that only square partitioning is possible for one coding unit, one coding unit may be partitioned into at most four other coding units. [0089] [0090] Hereinafter, in embodiments of the present invention, the coding unit may mean a unit performing encoding or a unit performing decoding.
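The recursive quad tree partitioning described in paragraph [0088] can be sketched as follows. The `should_split` callback stands in for whatever encoder decision (e.g., a cost function) actually drives the partitioning; it is a placeholder, not part of the patent:

```python
def quadtree_partition(x, y, size, min_size, should_split):
    """Recursively split a square block into four equal sub-blocks.

    Leaf blocks (nodes that are not split further) are returned as
    (x, y, size) tuples. `should_split` stands in for the encoder's
    rate-distortion decision."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_partition(x + dx, y + dy, half,
                                         min_size, should_split)
    return leaves

# Example: split any block larger than 32 samples on a side.
leaves = quadtree_partition(0, 0, 64, 8, lambda x, y, s: s > 32)
```

With this toy decision rule, a 64x64 root is split exactly once into four 32x32 leaf coding units, and the leaves always tile the root block exactly.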
[0091] [0092] A prediction unit may be one of partitions of a square or rectangular shape having the same size within a single coding unit, or a prediction unit may be one of partitions having a different shape/size within a single coding unit. [0093] [0094] When a prediction unit subjected to intra prediction is generated based on a coding unit and the coding unit is not the smallest coding unit, intra prediction may be performed without partitioning the coding unit into multiple NxN prediction units. [0095] [0096] The prediction modules 120 and 125 may include an inter prediction module 120 performing inter prediction and an intra prediction module 125 performing intra prediction. Whether to perform inter prediction or intra prediction for a prediction unit may be determined, and detailed information (e.g., an intra prediction mode, a motion vector, a reference image, etc.) may be determined according to each prediction method. Here, the processing unit subjected to prediction may be different from the processing unit for which the prediction method and its detailed content are determined. For example, the prediction method, the prediction mode, etc. may be determined per prediction unit, and the prediction may be performed per transformation unit. A residual value (residual block) between the generated prediction block and an original block may be input to the transformation module 130. Also, the prediction mode information, motion vector information, etc. used for prediction may be encoded together with the residual value by the entropy coding module 165 and may be transmitted to the device for decoding a video. When a particular coding mode is used, the original block may be encoded as it is and transmitted to the device for decoding a video without generating the prediction block through the prediction modules 120 and 125.
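The residual relationship used throughout this description (a residual block being the difference between the original block and the prediction block, and the decoder-side reconstruction being the prediction plus the residual) can be summarized as a minimal sketch, omitting the transformation and quantization stages in between:

```python
def residual_block(original, prediction):
    """Residual = original samples minus prediction samples (element-wise)."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, prediction)]

def reconstruct(prediction, residual):
    """Decoder side: prediction plus the (inverse-transformed) residual."""
    return [[p + r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(prediction, residual)]

orig = [[10, 12], [14, 16]]
pred = [[9, 12], [15, 15]]
res = residual_block(orig, pred)  # the block handed to the transform module
```

Adding the residual back to the prediction recovers the original samples exactly (in the real codec, only approximately, because of quantization).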
[0097] [0098] The inter prediction module 120 may predict a prediction unit based on information of at least one of a previous image or a subsequent image of the current image or, in some cases, may predict a prediction unit based on information of some encoded regions in the current image. The inter prediction module 120 may include a reference image interpolation module, a motion prediction module, and a motion compensation module. [0099] [0100] The reference image interpolation module may receive reference image information from the memory 155 and may generate pixel information of an integer pixel or less from the reference image. In the case of luma pixels, an 8-tap DCT-based interpolation filter having different filter coefficients may be used to generate pixel information of an integer pixel or less in units of 1/4 pixel. In the case of chroma signals, a 4-tap DCT-based interpolation filter having different filter coefficients may be used to generate pixel information of an integer pixel or less in units of 1/8 pixel. [0101] [0102] The motion prediction module may perform motion prediction based on the reference image interpolated by the reference image interpolation module. As methods for calculating a motion vector, various methods may be used, such as a full search-based block matching algorithm (FBMA), a three step search (TSS), a new three-step search algorithm (NTS), etc. The motion vector may have a motion vector value in units of 1/2 pixel or 1/4 pixel based on an interpolated pixel. The motion prediction module may predict a current prediction unit by varying the motion prediction method. As motion prediction methods, various methods may be used, such as a skip method, a merge method, an AMVP (advanced motion vector prediction) method, an intra block copy method, etc.
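The fractional-pixel interpolation described in paragraph [0100] can be sketched as below. The text does not list filter coefficients, so the half-pel taps used here are HEVC-style values chosen purely for illustration:

```python
# Illustrative 8-tap filter for half-pel luma interpolation. These taps are
# the HEVC-style half-sample coefficients, used here only as an assumed
# example; the text above does not specify coefficient values.
HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]  # taps sum to 64

def interpolate_half_pel(row, i):
    """Half-sample value between row[i] and row[i+1].

    The row must provide the support samples row[i-3] .. row[i+4]."""
    acc = sum(t * row[i - 3 + k] for k, t in enumerate(HALF_PEL_TAPS))
    return (acc + 32) >> 6  # round and divide by the tap sum (64)

flat = interpolate_half_pel([100] * 8, 3)       # flat signal stays flat
ramp = interpolate_half_pel(list(range(8)), 3)  # midpoint of a ramp
```

On a flat signal the filter reproduces the constant value, and on a linear ramp it returns the rounded midpoint, which is the expected behavior of any reasonable interpolation filter.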
[0103] [0104] The intra prediction module 125 may generate a prediction unit based on reference pixel information neighboring a current block, which is pixel information in the current image. When a neighboring block of the current prediction unit is a block subjected to inter prediction, and therefore a reference pixel is a pixel subjected to inter prediction, the reference pixel included in the block subjected to inter prediction may be replaced with reference pixel information of a neighboring block subjected to intra prediction. That is, when a reference pixel is not available, at least one reference pixel of the available reference pixels may be used in place of the unavailable reference pixel information. [0105] [0106] Prediction modes in intra prediction may include a directional prediction mode, which uses reference pixel information depending on a prediction direction, and a non-directional prediction mode, which does not use directional information in performing prediction. A mode for predicting luminance information may be different from a mode for predicting chrominance information, and in order to predict the chrominance information, the intra prediction mode information used to predict the luminance information or the predicted luminance signal information may be used. [0107] [0108] In performing intra prediction, when the size of the prediction unit is the same as the size of the transformation unit, intra prediction may be performed on the prediction unit based on the pixels positioned on the left, the top-left, and the top of the prediction unit. However, in performing intra prediction, when the size of the prediction unit is different from the size of the transformation unit, intra prediction may be performed using a reference pixel based on the transformation unit.
Also, intra prediction using NxN partitioning may be used only for the smallest coding unit. [0109] [0110] In the intra prediction method, a prediction block may be generated after applying an AIS (adaptive intra smoothing) filter to a reference pixel depending on the prediction mode. The type of AIS filter applied to the reference pixel may vary. In order to perform the intra prediction method, the intra prediction mode of the current prediction unit may be predicted from the intra prediction mode of a prediction unit neighboring the current prediction unit. In predicting the prediction mode of the current prediction unit by using mode information predicted from the neighboring prediction unit, when the intra prediction mode of the current prediction unit is the same as the intra prediction mode of the neighboring prediction unit, information indicating that the prediction modes of the current prediction unit and the neighboring prediction unit are equal to each other may be transmitted using predetermined flag information. When the prediction mode of the current prediction unit is different from the prediction mode of the neighboring prediction unit, entropy coding may be performed to encode the prediction mode information of the current block. [0111] [0112] Also, a residual block including residual value information, the residual value being a difference between the prediction unit subjected to prediction and the original block of the prediction unit, may be generated based on the prediction units generated by the prediction modules 120 and 125. The generated residual block may be input to the transformation module 130.
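The intra mode signaling described in paragraph [0110] (a flag when the current mode equals a neighboring unit's mode, otherwise explicit coding of the mode) can be sketched as a simplified round trip. The payload format here is a hypothetical stand-in for the actual entropy-coded syntax:

```python
def encode_intra_mode(current_mode, neighbor_mode):
    """If the current intra mode equals the neighbor's mode, only a flag is
    signaled; otherwise the mode itself must also be coded."""
    if current_mode == neighbor_mode:
        return {"same_as_neighbor": True}
    return {"same_as_neighbor": False, "mode": current_mode}

def decode_intra_mode(payload, neighbor_mode):
    """Recover the intra mode from the flag (and, if needed, the coded mode)."""
    if payload["same_as_neighbor"]:
        return neighbor_mode
    return payload["mode"]

same = encode_intra_mode(10, 10)   # flag only
diff = encode_intra_mode(7, 10)    # flag plus explicit mode
```

The point of the flag is bit savings: when neighboring units share a mode, which is common, no mode value needs to be transmitted at all.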
[0113] [0114] The transformation module 130 may transform the residual block, including the residual value information between the original block and the prediction unit generated by the prediction modules 120 and 125, by using a transformation method such as discrete cosine transform (DCT), discrete sine transform (DST) or KLT. Whether to apply DCT, DST or KLT to transform the residual block may be determined based on the intra prediction mode information of the prediction unit used to generate the residual block. [0115] [0116] The quantization module 135 may quantize values transformed to the frequency domain by the transformation module 130. The quantization coefficients may vary depending on the block or the importance of an image. The values calculated by the quantization module 135 may be provided to the inverse quantization module 140 and the rearrangement module 160. [0117] [0118] The rearrangement module 160 may rearrange the coefficients of the quantized residual values. [0119] [0120] The rearrangement module 160 may change coefficients in the form of a two-dimensional block into coefficients in the form of a one-dimensional vector through a coefficient scanning method. For example, the rearrangement module 160 may scan from a DC coefficient to a coefficient in the high-frequency domain using a zigzag scanning method so as to change the coefficients into the form of a one-dimensional vector. Depending on the size of the transformation unit and the intra prediction mode, vertical-direction scanning, in which the coefficients in the form of a two-dimensional block are scanned in the column direction, or horizontal-direction scanning, in which the coefficients in the form of a two-dimensional block are scanned in the row direction, may be used instead of zigzag scanning.
That is, which scanning method among zigzag scanning, vertical-direction scanning and horizontal-direction scanning is used may be determined depending on the size of the transformation unit and the intra prediction mode. [0121] [0122] The entropy coding module 165 may perform entropy coding based on the values calculated by the rearrangement module 160. Entropy coding may use various coding methods, for example, exponential Golomb coding, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC). [0123] [0124] The entropy coding module 165 may encode a variety of information, such as residual value coefficient information and block type information of the coding unit, prediction mode information, partition unit information, prediction unit information, transformation unit information, motion vector information, reference frame information, block interpolation information, filtering information, etc., from the rearrangement module 160 and the prediction modules 120 and 125. [0125] [0126] The entropy coding module 165 may entropy-encode the coefficients of the coding unit input from the rearrangement module 160. [0127] [0128] The inverse quantization module 140 may inversely quantize the values quantized by the quantization module 135, and the inverse transformation module 145 may inversely transform the values transformed by the transformation module 130. The residual value generated by the inverse quantization module 140 and the inverse transformation module 145 may be combined with the prediction unit predicted by a motion estimation module, a motion compensation module and the intra prediction module of the prediction modules 120 and 125, so that a reconstructed block can be generated. [0129] [0130] The filter module 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).
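The zigzag coefficient scan described in paragraph [0120], walking anti-diagonals from the DC coefficient toward the high-frequency corner, can be sketched as follows (an illustrative implementation, not the patent's own):

```python
def zigzag_scan(block):
    """Scan a square 2-D coefficient block into a 1-D list, traversing
    anti-diagonals from the DC coefficient (top-left) toward the
    high-frequency corner (bottom-right), alternating direction."""
    n = len(block)
    order = sorted(
        ((r, c) for r in range(n) for c in range(n)),
        # Primary key: which anti-diagonal (r + c). Secondary key: walk
        # odd diagonals top-to-bottom and even diagonals bottom-to-top.
        key=lambda rc: (rc[0] + rc[1],
                        rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))
    return [block[r][c] for r, c in order]

flat = zigzag_scan([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]])
```

For the 3x3 example the scan visits 1, 2, 4, 7, 5, 3, 6, 8, 9, placing the low-frequency (top-left) coefficients first, which is what lets the subsequent entropy coder group the trailing zeros efficiently.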
[0131] [0132] The deblocking filter may remove block distortion that occurs due to boundaries between blocks in the reconstructed image. In order to determine whether to perform deblocking, the pixels included in several rows or columns of the block may be a basis for determining whether to apply the deblocking filter to the current block. When a deblocking filter is applied to the block, a strong filter or a weak filter may be applied depending on the required deblocking filter strength. Also, in applying the deblocking filter, horizontal-direction filtering and vertical-direction filtering may be processed in parallel. [0133] [0134] The offset correction module may correct an offset with respect to the original image in units of a pixel in the image subjected to deblocking. In order to perform offset correction on a particular image, it is possible to use a method of applying an offset in consideration of the edge information of each pixel, or a method of partitioning the pixels of an image into a predetermined number of regions, determining a region to be subjected to offset correction, and applying the offset to the determined region. [0135] [0136] Adaptive loop filtering (ALF) may be performed based on a value obtained by comparing the filtered reconstructed image with the original image. The pixels included in the image may be divided into predetermined groups, a filter to be applied to each of the groups may be determined, and filtering may be performed individually for each group. Information on whether to apply ALF, and the luminance signal, may be transmitted per coding unit (CU). The shape and filter coefficients of a filter for ALF may vary depending on each block. Also, the filter for ALF may be applied in the same form (fixed form) regardless of the characteristics of the application target block. [0137] [0138] The memory 155 may store the reconstructed block or image calculated through the filter module 150.
The stored reconstructed block or image may be provided to the prediction modules 120 and 125 when performing inter prediction. [0139] [0140] FIG. 2 is a block diagram illustrating a device for decoding a video according to an embodiment of the present invention. [0141] [0142] Referring to FIG. 2, the device 200 for decoding a video may include: an entropy decoding module 210, a rearrangement module 215, an inverse quantization module 220, an inverse transformation module 225, prediction modules 230 and 235, a filter module 240, and a memory 245. [0143] [0144] When a video bitstream is input from the device for encoding a video, the input bitstream may be decoded according to a process inverse to that of the device for encoding a video. [0145] [0146] The entropy decoding module 210 may perform entropy decoding according to a process inverse to the entropy coding performed by the entropy coding module of the device for encoding a video. For example, in correspondence with the methods performed by the device for encoding a video, various methods may be applied, such as exponential Golomb coding, context-adaptive variable length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC). [0147] [0148] The entropy decoding module 210 may decode information on the intra prediction and inter prediction performed by the device for encoding a video. [0149] [0150] The rearrangement module 215 may rearrange the bitstream entropy-decoded by the entropy decoding module 210 based on the rearrangement method used in the device for encoding a video. The rearrangement module may reconstruct and rearrange the coefficients in the form of one-dimensional vectors into coefficients in the form of a two-dimensional block.
The rearrangement module 215 may receive information related to the coefficient scanning performed by the device for encoding a video and may perform the rearrangement through a method of inversely scanning the coefficients based on the scanning order performed in the device for encoding a video. [0151] [0152] The inverse quantization module 220 may perform inverse quantization based on a quantization parameter received from the device for encoding a video and the rearranged coefficients of the block. [0153] [0154] The inverse transformation module 225 may perform the inverse transformation, i.e., inverse DCT, inverse DST and inverse KLT, which is the inverse process of the transformation, i.e., DCT, DST and KLT, performed by the transformation module on the result of quantization by the device for encoding a video. The inverse transformation may be performed based on a transformation unit determined by the device for encoding a video. The inverse transformation module 225 of the device for decoding a video may selectively perform transformation schemes (e.g., DCT, DST and KLT) depending on multiple pieces of information, such as the prediction method, the size of the current block, the prediction direction, etc. [0155] [0156] The prediction modules 230 and 235 may generate a prediction block based on information on prediction block generation received from the entropy decoding module 210 and information of a previously decoded block or image received from the memory 245. [0157] [0158] As described above, as in the operation of the device for encoding a video, in performing intra prediction, when the size of the prediction unit is the same as the size of the transformation unit, intra prediction may be performed on the prediction unit based on the pixels positioned on the left, the top-left, and the top of the prediction unit.
When intra prediction is performed and the size of the prediction unit is different from the size of the transformation unit, the intra prediction can be performed using a reference pixel based on the transformation unit. Also, intra prediction using NxN partitioning can be used only for the smallest coding unit. [0159] [0160] The prediction modules 230 and 235 may include a prediction unit determination module, an inter prediction module and an intra prediction module. The prediction unit determination module can receive a variety of information, such as information of the prediction unit, information of the prediction mode of an intra prediction method, motion prediction information of an inter prediction method, etc. from the entropy decoding module 210, can divide a current coding unit into prediction units, and can determine whether inter prediction or intra prediction is performed on the prediction unit. By using the information required for the inter prediction of the current prediction unit received from the device for video encoding, the inter prediction module 230 can perform the inter prediction on the current prediction unit based on the information of at least one of a previous image or a subsequent image of the current image that includes the current prediction unit. Alternatively, the inter prediction can be performed based on the information of some pre-reconstructed regions in the current image including the current prediction unit. [0161] [0162] To perform inter prediction, it can be determined, in a unit of the coding unit, which of a skip mode, a merge mode, an AMVP mode and an intra block copy mode is used as the motion prediction method of the prediction unit included in the coding unit. [0163] [0164] The intra prediction module 235 can generate a prediction block based on pixel information in the current image.
When the prediction unit is a prediction unit subjected to intra prediction, the intra prediction can be performed based on information of the intra prediction mode of the prediction unit received from the device for video encoding. The intra prediction module 235 may include an adaptive intra smoothing (AIS) filter, a reference pixel interpolation module, and a DC filter. The AIS filter performs filtering on the reference pixel of the current block, and whether to apply the filter can be determined depending on the prediction mode of the current prediction unit. The AIS filtering can be performed on the reference pixel of the current block by using the prediction mode of the prediction unit and the AIS filter information received from the device for video encoding. When the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied. [0165] [0166] When the prediction mode of the prediction unit is a prediction mode in which intra prediction is performed based on the pixel value obtained by interpolating the reference pixel, the reference pixel interpolation module can interpolate the reference pixel to generate a reference pixel at an integer pixel position or at a fractional pixel position. When the prediction mode of the current prediction unit is a prediction mode in which a prediction block is generated without interpolation of the reference pixel, the reference pixel may not be interpolated. The DC filter can generate a prediction block through filtering when the prediction mode of the current block is a DC mode. [0167] [0168] The reconstructed block or image can be provided to the filter module 240. The filter module 240 can include the deblocking filter, the deviation correction module and the ALF.
[0169] [0170] The information on whether or not the deblocking filter is applied to the corresponding block or image, and the information on which of a strong filter and a weak filter is applied when the deblocking filter is applied, can be received from the device for video encoding. The deblocking filter of the device for video decoding may receive information about the deblocking filter from the device for video encoding, and may perform deblocking filtering on the corresponding block. [0171] [0172] The deviation correction module can perform the correction of the deviation on the reconstructed image based on the type of deviation correction and the information of the deviation value applied to the image when encoding is performed. [0173] [0174] The ALF can be applied to the coding unit based on information on whether to apply the ALF, information on the ALF coefficient, etc. received from the device for video encoding. The ALF information can be provided by being included in a particular set of parameters. [0175] [0176] The memory 245 can store the reconstructed image or block for use as a reference image or a reference block, and can provide the reconstructed image to an output module. [0177] As described above, in the embodiments of the present invention, for convenience of explanation, the coding unit is used as a term that represents a unit for encoding, but the coding unit can serve as a unit for performing decoding as well as encoding. [0178] [0179] In addition, a current block may represent a target block to be encoded / decoded. And, the current block can represent a coding tree block (or a coding tree unit), a coding block (or a coding unit), a transformation block (or a transformation unit), a prediction block (or a prediction unit), or the like depending on a coding / decoding stage. [0180] [0181] An image can be encoded / decoded by its division into base blocks that have a square shape or a non-square shape.
At this time, the base block can be referred to as a coding tree unit. The coding tree unit can be defined as a coding unit of the largest size allowed within a sequence or a fraction. The information as to whether the coding tree unit has a square shape or a non-square shape, or information regarding the size of the coding tree unit, can be signaled through a set of parameters of the sequence, a set of parameters of the image, or a fraction header. The coding tree unit can be divided into smaller size partitions. At this time, if it is assumed that a depth of a partition generated by dividing the coding tree unit is 1, a depth of a partition generated by dividing the partition having a depth of 1 can be defined as 2. That is, a partition generated by dividing a partition having a depth of k within the coding tree unit can be defined as having a depth of k + 1. [0182] [0183] A partition of arbitrary size generated by the division of a coding tree unit can be defined as a coding unit. The coding unit may be recursively divided, or divided into base units for performing prediction, quantization, transformation, or loop filtering, and the like. For example, a partition of arbitrary size generated by the division of the coding unit can be defined as a coding unit, or it can be defined as a transformation unit or a prediction unit, which is a base unit for performing prediction, quantization, transformation, loop filtering and the like. [0184] The partitioning of a coding tree unit or a coding unit may be performed based on at least one of a vertical line and a horizontal line. In addition, the number of vertical lines or horizontal lines that partition the coding tree unit or the coding unit may be at least one or more.
For example, the coding tree unit or the coding unit can be divided into two partitions using one vertical line or one horizontal line, or the coding tree unit or the coding unit can be divided into three partitions using two vertical lines or two horizontal lines. Alternatively, the coding tree unit or the coding unit may be partitioned into four partitions having a length and a width of 1/2 of the original by using one vertical line and one horizontal line. [0185] [0186] When a coding tree unit or a coding unit is divided into a plurality of partitions using at least one vertical line or at least one horizontal line, the partitions may have a uniform size or different sizes. Alternatively, any one partition can have a different size from the remaining partitions. [0187] [0188] In the embodiments described below, it is assumed that a coding tree unit or a coding unit is divided into a quadruple tree structure or a binary tree structure. However, it is also possible to divide a coding tree unit or a coding unit using a greater number of vertical lines or a greater number of horizontal lines. [0189] [0190] FIG. 3 is a diagram illustrating an example of hierarchically partitioning a coding block based on a tree structure according to an embodiment of the present invention. [0191] [0192] An input video signal is decoded in predetermined block units. Said default unit for decoding the input video signal is a coding block. The coding block can be a unit that performs intra / inter prediction, transformation and quantization. In addition, a prediction mode (e.g., an intra prediction mode or an inter prediction mode) is determined in a unit of a coding block, and the prediction blocks included in the coding block can share the determined prediction mode. The coding block can be a square or non-square block having an arbitrary size in the range of 8x8 to 64x64, or it can be a square or non-square block having a size of 128x128, 256x256 or more.
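The two-way, three-way and four-way line-based splits described earlier in this section can be sketched as the following geometry helper. The even split proportions and the mode names are illustrative assumptions, not signaled syntax; each partition is returned as (x, y, width, height).

```python
# Sketch of splitting a block with vertical/horizontal lines:
# one line -> 2 partitions, two lines -> 3 partitions,
# one vertical + one horizontal line -> 4 partitions.
# Even proportions are assumed (the three-way case expects a
# dimension divisible by 3); horizontal/vertical counterparts
# of each mode are analogous.

def split_block(x, y, w, h, mode):
    if mode == "VER_2":      # one vertical line -> two partitions
        return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    if mode == "HOR_3":      # two horizontal lines -> three partitions
        t = h // 3
        return [(x, y, w, t), (x, y + t, w, t), (x, y + 2 * t, w, t)]
    if mode == "QUAD":       # one vertical + one horizontal line
        hw, hh = w // 2, h // 2
        return [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    raise ValueError(mode)
```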
[0193] [0194] Specifically, the coding block can be partitioned hierarchically based on at least one of a quadruple tree and a binary tree. In this case, quadruple tree-based partitioning means that a 2Nx2N coding block is partitioned into four NxN coding blocks, and binary tree-based partitioning means that one coding block is partitioned into two coding blocks. Even if binary tree-based partitioning is performed, there may be a square-shaped coding block at the lowest depth. [0195] [0196] Binary tree-based partitioning can be performed symmetrically or asymmetrically. The coding block partitioned based on the binary tree can be a square block or a non-square block, such as a rectangular shape. For example, a partition type in which binary tree-based partitioning is allowed may comprise at least one of a symmetric type of 2NxN (horizontal direction non-square coding unit) or Nx2N (vertical direction non-square coding unit), or an asymmetric type of nLx2N, nRx2N, 2NxnU, or 2NxnD. [0197] [0198] Binary tree-based partitioning can be limited to either the symmetric type or the asymmetric type. In this case, the construction of the coding tree unit with square blocks can correspond to quadruple tree CU partitioning, and the construction of the coding tree unit with symmetric non-square blocks can correspond to binary tree partitioning. The construction of the coding tree unit with square blocks and symmetric non-square blocks can correspond to quadruple tree and binary tree CU partitioning. [0199] [0200] Binary tree-based partitioning can be performed on a coding block on which quadruple tree-based partitioning is no longer performed. Quadruple tree-based partitioning can no longer be performed on a coding block partitioned based on the binary tree. [0201] [0202] Additionally, the partitioning of a lower depth can be determined depending on a partition type of a higher depth.
For example, if binary tree-based partitioning is allowed at two or more depths, only the same type as the binary tree partitioning of the upper depth may be allowed at the lower depth. For example, if the binary tree-based partitioning at the upper depth is done with the 2NxN type, the binary tree-based partitioning at the lower depth is also done with the 2NxN type. Alternatively, if the binary tree-based partitioning at the upper depth is done with the Nx2N type, the binary tree-based partitioning at the lower depth is also done with the Nx2N type. [0203] [0204] On the contrary, it is also possible to allow, at a lower depth, only a type different from the binary tree partitioning type of the higher depth. [0205] [0206] It may be possible to limit the use of only a specific type of binary tree-based partitioning for a sequence, a fraction, a coding tree unit, or a coding unit. As an example, only the 2NxN type or the Nx2N type of binary tree-based partitioning can be allowed for the coding tree unit. An available partition type can be predefined in an encoder or a decoder. Alternatively, information about the available partition type or about the unavailable partition type can be encoded and signaled through a bit stream. [0207] [0208] FIG. 5 is a diagram illustrating an example in which only one specific type of partitioning based on a binary tree is allowed. FIG. 5A shows an example in which only the Nx2N type of binary tree-based partitioning is allowed, and FIG. [0209] 5B shows an example where only the 2NxN type of binary tree-based partitioning is allowed.
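The restriction of binary tree partitioning to a single allowed type, illustrated by the FIG. 5 examples, can be sketched as follows. Modeling the allowed types as a plain set is an assumption; in the text they are either predefined in the encoder/decoder or signaled through the bitstream.

```python
# Sketch of restricting binary-tree splits to one allowed type, as in the
# FIG. 5 examples (FIG. 5A: only Nx2N allowed; FIG. 5B: only 2NxN allowed).

def try_binary_split(width, height, split_type, allowed_types):
    """Return the two child sizes, or None if the split type is not allowed."""
    if split_type not in allowed_types:
        return None                      # this split type is not permitted
    if split_type == "2NxN":             # horizontal split: two 2N x N blocks
        return [(width, height // 2)] * 2
    if split_type == "Nx2N":             # vertical split: two N x 2N blocks
        return [(width // 2, height)] * 2
    raise ValueError(split_type)
```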
To implement adaptive partitioning based on the quadruple tree or the binary tree, the following can be used: information indicating quadruple tree-based partitioning, information about the size / depth of the coding block for which quadruple tree-based partitioning is allowed, information indicating binary tree-based partitioning, information about the size / depth of the coding block for which binary tree-based partitioning is allowed, information about the size / depth of the coding block for which binary tree-based partitioning is not allowed, information about whether binary tree-based partitioning is performed in a vertical direction or a horizontal direction, etc. [0210] In addition, information on the number of times binary tree partitioning is allowed, a depth at which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed can be obtained for a coding tree unit or a specific coding unit. The information may be encoded in a unit of a coding tree unit or a coding unit, and may be transmitted to a decoder through a bit stream. [0211] [0212] For example, a syntax "max_binary_depth_idx_minus1" that indicates a maximum depth at which binary tree partitioning is allowed can be encoded / decoded through a bit stream. In this case, max_binary_depth_idx_minus1 + 1 can indicate the maximum depth at which binary tree partitioning is allowed. [0213] [0214] With reference to the example shown in FIG. 6, in FIG. 6 binary tree partitioning has been performed for a coding unit having a depth of 2 and a coding unit having a depth of 3.
Consequently, at least one of information indicating the number of times binary tree partitioning has been performed in the coding tree unit (that is, 2 times), information indicating the maximum depth at which binary tree partitioning has been allowed in the coding tree unit (that is, depth 3), or the number of depths at which binary tree partitioning has been performed in the coding tree unit (that is, 2 (depth 2 and depth 3)) can be encoded / decoded through a bit stream. [0215] [0216] As another example, at least one of the information on the number of times binary tree partitioning is allowed, the depth at which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed can be obtained for each sequence or each fraction. For example, the information may be encoded in a unit of a sequence, an image, or a fraction and transmitted through a bit stream. Consequently, at least one of the number of binary tree partitions in a first fraction, the maximum depth at which binary tree partitioning is allowed in the first fraction, or the number of depths at which binary tree partitioning is performed in the first fraction may be different from that of a second fraction. For example, in the first fraction, binary tree partitioning can be allowed for only one depth, while in the second fraction, binary tree partitioning can be allowed for two depths. [0217] [0218] As another example, the number of times binary tree partitioning is allowed, the depth at which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed can be set differently according to a temporal level identifier (TemporalID) of a fraction or an image. In this case, the temporal level identifier (TemporalID) is used to identify each of a plurality of video layers having scalability in at least one of view, space, time, or quality. [0219] [0220] As shown in FIG.
3, the first coding block 300 with a partition depth of k can be partitioned into multiple second coding blocks based on the quadruple tree. For example, the second coding blocks 310 to 340 can be square blocks having half the width and half the height of the first coding block, and the partition depth of the second coding blocks can be increased to k + 1. [0221] [0222] The second coding block 310 with a partition depth of k + 1 can be partitioned into multiple third coding blocks with a partition depth of k + 2. The partitioning of the second coding block 310 can be done by selectively using one of the quadruple tree and the binary tree depending on a partitioning method. In this case, the partitioning method can be determined based on at least one of the information indicating quadruple tree-based partitioning and the information indicating binary tree-based partitioning. [0223] [0224] When the second coding block 310 is partitioned based on the quadruple tree, the second coding block 310 can be partitioned into four third coding blocks 310a having half the width and half the height of the second coding block, and the partition depth of the third coding blocks 310a can be increased to k + 2. On the contrary, when the second coding block 310 is partitioned based on the binary tree, the second coding block 310 can be partitioned into two third coding blocks. In this case, each of the two third coding blocks can be a non-square block having one of half the width and half the height of the second coding block, and the partition depth can be increased to k + 2. The third coding block can be determined as a non-square block of a horizontal direction or a vertical direction depending on the partitioning direction, and the partitioning direction can be determined based on the information on whether binary tree-based partitioning is performed in a vertical direction or in a horizontal direction.
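The depth bookkeeping of the FIG. 3 walk-through can be sketched as follows: a quad-tree split of a depth-k block yields four half-width, half-height blocks at depth k + 1, and a binary split yields two non-square blocks at the next depth. The 64x64 starting size and depth 0 are assumptions for illustration only.

```python
# Sketch of one partitioning step in FIG. 3. A block is (width, height, depth).

def partition(block, mode):
    """Return the child blocks produced by one split of `block`."""
    w, h, depth = block
    if mode == "QT":            # quadruple tree: four half-size square blocks
        return [(w // 2, h // 2, depth + 1)] * 4
    if mode == "BT_VER":        # binary tree, vertical direction split
        return [(w // 2, h, depth + 1)] * 2
    if mode == "BT_HOR":        # binary tree, horizontal direction split
        return [(w, h // 2, depth + 1)] * 2
    raise ValueError(mode)

# First coding block 300 (size/depth assumed): quad split -> blocks 310..340,
# then a binary split of block 310 -> two non-square third coding blocks.
children = partition((64, 64, 0), "QT")
grandchildren = partition(children[0], "BT_VER")
```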
[0225] [0226] On the other hand, the second coding block 310 can be determined as a leaf coding block that is no longer partitioned based on the quadruple tree or the binary tree. In this case, the leaf coding block can be used as a prediction block or a transformation block. [0227] [0228] Like the second coding block 310, the third coding block 310a can be determined as a leaf coding block, or it can be further partitioned based on the quadruple tree or the binary tree. [0229] [0230] For its part, the third coding block 310b partitioned based on the binary tree can be further partitioned into coding blocks 310b-2 of a vertical direction or coding blocks 310b-3 of a horizontal direction based on the binary tree, and the partition depth of the relevant coding blocks can be increased to k + 3. Alternatively, the third coding block 310b can be determined as a leaf coding block 310b-1 that is no longer partitioned based on the binary tree. In this case, the coding block 310b-1 can be used as a prediction block or a transformation block. However, the above partitioning process can be performed in a limited manner based on at least one of the information on the size / depth of the coding block for which quadruple tree-based partitioning is allowed, the information on the size / depth of the coding block for which binary tree-based partitioning is allowed, and the information on the size / depth of the coding block for which binary tree-based partitioning is not allowed. [0231] [0232] The number of candidates representing the size of a coding block may be limited to a predetermined number, or the size of a coding block in a predetermined unit may have a fixed value. As an example, the size of the coding block in a sequence or in an image can be limited to 256x256, 128x128 or 32x32. The information indicating the size of the coding block in the sequence or in the image can be signaled through a sequence header or an image header.
[0233] [0234] As a result of partitioning based on the quadruple tree and the binary tree, a coding unit can be represented as a square or rectangular shape of an arbitrary size. [0235] [0236] A coding block is encoded using at least one of a skip mode, intra prediction, inter prediction, or a skip method. Once the coding block is determined, a prediction block can be determined through predictive partitioning of the coding block. The predictive partitioning of the coding block can be done by means of a partition mode (Part_mode) that indicates a partition type of the coding block. The size or shape of the prediction block can be determined according to the partition mode of the coding block. For example, a size of a prediction block determined according to the partition mode may be equal to or smaller than a size of a coding block. [0237] [0238] FIG. 7 is a diagram illustrating partition modes that can be applied to a coding block when the coding block is encoded by inter prediction. [0239] [0240] When a coding block is encoded by inter prediction, one of the 8 partition modes may be applied to the coding block, as in the example shown in FIG. 7. [0241] [0242] When a coding block is encoded by intra prediction, the partition mode PART_2Nx2N or the partition mode PART_NxN can be applied to the coding block. [0243] [0244] PART_NxN can be applied when a coding block has a minimum size. In this case, the minimum size of the coding block can be defined in an encoder and a decoder. Or, the information regarding the minimum size of the coding block can be signaled through a bit stream. For example, the minimum size of the coding block can be signaled through a fraction header, so that the minimum size of the coding block can be defined per fraction. [0245] [0246] In general, a prediction block can have a size from 64x64 to 4x4.
However, when a coding block is encoded by inter prediction, it can be restricted that the prediction block does not have a 4x4 size, in order to reduce the memory bandwidth when motion compensation is performed. [0247] [0248] FIG. 8 is a flow chart illustrating processes of obtaining a residual sample according to an embodiment of the present invention. [0249] [0250] First, a residual coefficient of a current block can be obtained S810. The decoder can obtain the residual coefficient through a coefficient scanning method. For example, the decoder can perform coefficient scanning using a diagonal scan, a zigzag scan, a top-right scan, a vertical scan, or a horizontal scan, and thereby obtain residual coefficients in the form of a two-dimensional block. [0251] [0252] Inverse quantization can be performed on the residual coefficient of the current block S820. [0253] [0254] It is possible to determine whether to skip an inverse transformation of the dequantized residual coefficient of the current block S830. Specifically, the decoder can determine whether to skip the inverse transformation in at least one of a horizontal direction or a vertical direction of the current block. When it is determined to apply the inverse transformation in at least one of the horizontal direction or the vertical direction of the current block, a residual sample of the current block can be obtained by inversely transforming the dequantized residual coefficient of the current block S840. In this case, the inverse transformation can be performed using at least one of DCT, DST and KLT. [0255] [0256] When the inverse transformation is skipped in both the horizontal direction and the vertical direction of the current block, the inverse transformation is not performed in either the horizontal direction or the vertical direction of the current block. In this case, the residual sample of the current block can be obtained by scaling the dequantized residual coefficient with a predetermined value S850.
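The S810-S850 flow above can be sketched as control logic. This is a minimal sketch: entropy decoding and the normative inverse DCT/DST are stubbed out, the simplified dequantization and the scaling value 8 standing in for the "predetermined value" of S850 are illustrative assumptions.

```python
# Sketch of the residual derivation flow of FIG. 8:
# scan (S810, assumed done) -> dequantize (S820) -> skip decision (S830)
# -> inverse transform (S840) or scaling (S850).

def dequantize(coeff, qp_scale):
    # Simplified S820: real inverse quantization uses the signaled
    # quantization parameter and normative scaling lists.
    return coeff * qp_scale

def derive_residual(coeffs, qp_scale, skip_hor, skip_ver, inv_transform,
                    scale=8):
    """coeffs: 2-D residual coefficients already rearranged (S810)."""
    deq = [[dequantize(c, qp_scale) for c in row] for row in coeffs]   # S820
    if skip_hor and skip_ver:
        # S850: both directions skipped -> scale the dequantized coefficients
        return [[c * scale for c in row] for row in deq]
    # S840: inverse-transform in the direction(s) that are not skipped
    return inv_transform(deq, not skip_hor, not skip_ver)
```

A caller supplies `inv_transform(block, do_hor, do_ver)`, which makes the per-direction skip described in the following paragraphs explicit in the interface.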
[0257] [0258] Skipping the inverse transformation in the horizontal direction means that the inverse transformation is not performed in the horizontal direction but the inverse transformation is performed in the vertical direction. At this point, scaling can be performed in the horizontal direction. [0259] [0260] Skipping the inverse transformation in the vertical direction means that the inverse transformation is not performed in the vertical direction but the inverse transformation is performed in the horizontal direction. At this point, scaling can be performed in the vertical direction. [0261] [0262] Whether or not a skip scheme of the inverse transformation can be used for the current block can be determined depending on a partition type of the current block. For example, if the current block is generated through binary tree-based partitioning, the skip scheme of the inverse transformation can be restricted for the current block. Consequently, when the current block is generated through binary tree-based partitioning, a residual sample of the current block can be obtained by inversely transforming the current block. Furthermore, when the current block is generated through binary tree-based partitioning, the encoding / decoding of the information that indicates whether or not the inverse transformation is skipped (e.g., transform_skip_flag) can be omitted. [0263] [0264] Alternatively, when the current block is generated through binary tree-based partitioning, it is possible to limit the skip scheme of the inverse transformation to at least one of the horizontal direction or the vertical direction. In this case, the direction in which the skip scheme of the inverse transformation is limited can be determined based on decoded information of the bitstream, or it can be determined adaptively based on at least one of a size of the current block, a shape of the current block, or an intra prediction mode of the current block.
[0265] [0266] For example, when the current block is a non-square block having a width greater than a height, the skip scheme of the inverse transformation can be allowed only in the vertical direction and restricted in the horizontal direction. That is, when the current block is 2NxN, the inverse transformation is performed in the horizontal direction of the current block, and the inverse transformation can be performed selectively in the vertical direction. [0267] [0268] On the other hand, when the current block is a non-square block having a height greater than a width, the skip scheme of the inverse transformation can be allowed only in the horizontal direction and restricted in the vertical direction. That is, when the current block is Nx2N, the inverse transformation is performed in the vertical direction of the current block, and the inverse transformation can be performed selectively in the horizontal direction. [0269] [0270] Unlike the previous example, when the current block is a non-square block having a width greater than a height, the skip scheme of the inverse transformation can be allowed only in the horizontal direction, and when the current block is a non-square block having a height greater than a width, the skip scheme of the inverse transformation can be allowed only in the vertical direction. [0271] [0272] The information that indicates whether or not to skip the inverse transformation with respect to the horizontal direction, or the information that indicates whether or not to skip the inverse transformation with respect to the vertical direction, can be signaled through a bit stream.
For example, the information that indicates whether or not to skip the inverse transformation in the horizontal direction is a 1-bit flag, "hor_transform_skip_flag", and the information that indicates whether or not to skip the inverse transformation in the vertical direction is a 1-bit flag, "ver_transform_skip_flag". The encoder can encode at least one of "hor_transform_skip_flag" or "ver_transform_skip_flag" according to the shape of the current block. In addition, the decoder can determine whether or not to skip the inverse transformation in the horizontal direction or in the vertical direction by using at least one of "hor_transform_skip_flag" or "ver_transform_skip_flag". [0273] [0274] It can be established that the inverse transformation is skipped for a certain direction of the current block depending on the partition type of the current block. For example, if the current block is generated through binary tree-based partitioning, the inverse transformation can be skipped in the horizontal direction or in the vertical direction. [0275] That is, if the current block is generated by binary tree-based partitioning, it can be determined that the inverse transformation of the current block is skipped in at least one of a horizontal direction or a vertical direction without encoding / decoding information (e.g., transform_skip_flag, hor_transform_skip_flag, ver_transform_skip_flag) that indicates whether or not the inverse transformation of the current block is skipped. [0276] [0277] If it is determined to apply the inverse transformation to the current block, a transformation type can be determined, and the inverse transformation can be performed using the determined transformation type. The transformation type of the current block (for example, a transformation block or a coding block) can be determined based on at least one of a size or an encoding mode of the current block.
In this case, the encoding mode can indicate whether a prediction block corresponding to the coding block or the transformation block is encoded in intra mode or in inter mode. [0278] [0279] For example, the inverse transformation for a 4x4 block encoded in intra mode can be performed by the use of DST (specifically, DST-VII), and the inverse transformation for a block other than the above can be performed by the use of DCT (specifically, DCT-II). [0280] [0281] The DST-VII can be defined as the matrix A4 of Equation 1. The inverse transformation of DST-VII can be defined as A4^T. [0282] [0283] [Equation 1] [0284]

A4 =
[ 29  55  74  84 ]
[ 74  74   0 -74 ]
[ 84 -29 -74  55 ]
[ 55 -84  74 -29 ]

[0291] The DCT-II for a block of 8x8 can be defined as the matrix T8 of Equation 2. The inverse transformation of DCT-II can be defined as T8^T. [0292] [Equation 2] [0293]

T8 =
[ 64  64  64  64  64  64  64  64 ]
[ 89  75  50  18 -18 -50 -75 -89 ]
[ 83  36 -36 -83 -83 -36  36  83 ]
[ 75 -18 -89 -50  50  89  18 -75 ]
[ 64 -64 -64  64  64 -64 -64  64 ]
[ 50 -89  18  75 -75 -18  89 -50 ]
[ 36 -83  83 -36 -36  83 -83  36 ]
[ 18 -50  75 -89  89 -75  50 -18 ]

[0297] A condition for selecting the transformation type can be set differently in a unit of a sequence, a fraction or a block. For example, in fraction 0, DST is applied to a 4x4 transformation block encoded in intra mode, while in fraction 1, DST is applied to a transformation block of 8x8 or smaller encoded in intra mode. [0298] [0299] As another example, the transformation type of the current block can be determined adaptively based on at least one of an intra prediction mode of the current block or the number of samples included in the current block. At this point, the number of samples used as a reference to select the transformation type can have a fixed value or can be determined through information signaled through the bitstream. The information can be signaled at a block level, in a fraction header, or in a set of parameters of the image. [0300] [0301] For example, only DST can be applied when the current block includes 16 or fewer samples and the current block is encoded in intra mode, and DCT can be applied in the other cases.
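As a sanity sketch for the A4 (DST-VII) matrix of Equation 1: an integer approximation of an orthogonal transform satisfies A4^T · A4 ≈ 128² · I, which is why the transpose can serve as the inverse transformation up to normalization. The tolerance bounds below are illustrative checks, not normative values.

```python
# Verify that A4^T * A4 is close to a scaled identity (roughly 16384 * I),
# so applying A4^T after A4 recovers the input up to a normalization factor.

A4 = [[29,  55,  74,  84],
      [74,  74,   0, -74],
      [84, -29, -74,  55],
      [55, -84,  74, -29]]

def gram_matrix(a):
    """Compute a^T * a for a square integer matrix a."""
    n = len(a)
    return [[sum(a[k][i] * a[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

gram = gram_matrix(A4)
# Diagonal entries land near 128^2 = 16384; off-diagonal entries are tiny
# compared to that, reflecting the near-orthogonality of the integer DST-VII.
```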
Specifically, DST can be applied to a 4x4, 2x8 or 8x2 block encoded by intra prediction, and DCT can be applied to a block other than the above. [0302] [0303] Alternatively, the transformation type of the current block can be determined from the transformation type candidates included in a transformation set. At this point, different transformation sets can be used in a unit of a coding block or a transformation block. Alternatively, a plurality of transformation blocks included in a predetermined coding block may share the same transformation set. To determine a transformation set, index information identifying the transformation set can be signaled in a unit of a coding block or a transformation block. Alternatively, the transformation set of the current block can be determined adaptively according to a size, a shape, a coding mode, an intra prediction mode, the number of samples of the current block, or the like. [0304] [0305] The transformation set may include a plurality of transformation type candidates that can be selectively used according to the shape, the size or the number of samples of the transformation block (or of the coding block). At this point, at least one of the number or the types of transformation type candidates included in the transformation sets may be different. [0306] [0307] Table 1 is a chart representing transformation sets that include different transformation type candidates; the candidates shown for set indices 0 and 2 follow the examples described below. [0308] [0309] [Table 1] [0310]

Transform set index | Transform type candidate 0 | Transform type candidate 1
0                   | DST-VII                    | DCT-II
2                   | DST-VII                    | DCT-VIII

[0314] In Table 1, it is illustrated that the number of transformation type candidates included in the transformation set is two. It is also possible that the transformation set includes one, three, four or more transformation type candidates.
[0315] [0316] In addition, the number of transformation type candidates included in at least one of the transformation sets may be different from the number of transformation type candidates included in another transformation set. The maximum number of transformation type candidates included in a transformation set can be signaled in a slice header or a sequence header. [0317] [0318] The transformation type of the current block can be determined to be at least one of the transformation type candidates included in the transformation set. At this point, the transformation type of the transformation block can be determined based on a size, a coding mode, an intra prediction mode, the number of samples of the transformation block or the coding block, or the like. In this case, the intra prediction mode of the transformation block can be the intra prediction mode of the prediction block or of the coding block corresponding to the transformation block. [0319] [0320] For example, when Index 0 is determined as the transformation set of the current block, if the current block is a 4x4 block encoded in intra mode, transformation type candidate 0, i.e., DST-VII, is used, and if the current block does not satisfy that condition, transformation type candidate 1, i.e., DCT-II, is used. [0321] [0322] Alternatively, when Index 2 is determined as the transformation set of the current block, if the current block is a 4x4 or 8x8 block encoded in intra mode, transformation type candidate 0, i.e., DST-VII, is applied, and if the current block does not satisfy that condition, transformation type candidate 1, i.e., DCT-VIII, is used. [0323] [0324] A condition for selecting the transformation type candidate of the transformation block can be set differently according to the size of the coding block.
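The Index 0 and Index 2 examples above can be sketched as a table lookup plus a block-condition test. Only the candidates named in the text are taken from it (DST-VII/DCT-II for Index 0, DST-VII/DCT-VIII for Index 2); the contents of Index 1 and the exact condition shapes are illustrative assumptions of ours, since Table 1 is not reproduced here.

```python
# Transformation sets in the spirit of Table 1; Index 1 is our own placeholder.
TRANSFORM_SETS = {
    0: ["DST-VII", "DCT-II"],
    1: ["DST-VII", "DST-I"],    # assumption, not stated in the source
    2: ["DST-VII", "DCT-VIII"],
}

def transform_type(set_index, width, height, is_intra):
    """Candidate 0 for a 4x4 intra block (also 8x8 for Index 2), mirroring
    the two examples in the text; candidate 1 otherwise."""
    candidates = TRANSFORM_SETS[set_index]
    small = (width, height) == (4, 4) or (set_index == 2 and (width, height) == (8, 8))
    return candidates[0] if (is_intra and small) else candidates[1]
```

An encoder and decoder sharing this table only need to signal `set_index`; the candidate choice then follows from the block's size and coding mode without extra bits.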
For example, when the size of the coding block is smaller than or equal to 32x32, transformation type candidate 0 is applied to a 4x4 transformation block encoded in intra mode, and transformation type candidate 1 is applied to a transformation block that does not satisfy that condition. On the other hand, when the size of the coding block is greater than 32x32, transformation type candidate 0 is applied to a 4x4 or 8x8 block encoded in intra mode, and transformation type candidate 1 is applied to a transformation block that does not satisfy those conditions. [0325] [0326] The transformation type candidates can include a transform skip, which indicates that no transformation is performed. Depending on whether the transform skip is allowed, at least one of the types or the number of transformation type candidates included in the transformation set can be set differently. As an example, if transform_skip_enabled_flag, which indicates whether or not the transform skip is allowed in a picture, is 1, a transformation set that also includes the transform skip as a transformation type candidate can be used, as shown in Table 2. On the other hand, if transform_skip_enabled_flag is 0, a transformation set that does not include the transform skip as a transformation type candidate can be used, as shown in Table 1. [0327] [0328] [Table 2] [0329] [0330] [0331] [0332] [0333] The transformation types of a horizontal transformation and a vertical transformation of the current block can be the same, or the transformation types of the horizontal transformation and the vertical transformation can be different from each other. For example, one transformation type candidate in the transformation set can be applied to both the horizontal transformation and the vertical transformation, or a different transformation type candidate can be applied to each of the horizontal transformation and the vertical transformation.
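The effect of transform_skip_enabled_flag on the candidate list, contrasting Table 1 and Table 2 as described above, might be sketched as follows (the base candidates and the skip label are illustrative):

```python
def build_transform_set(base_candidates, transform_skip_enabled_flag):
    """With the flag set, the set additionally offers a skip candidate meaning
    no transformation is performed (Table 2); otherwise it does not (Table 1)."""
    if transform_skip_enabled_flag:
        return list(base_candidates) + ["TRANSFORM_SKIP"]
    return list(base_candidates)
```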
[0334] [0335] As another example, the transformation sets for the horizontal transformation and the vertical transformation of the current block may be the same, or the transformation sets for the horizontal transformation and the vertical transformation may be different from each other. When different transformation sets are used for the horizontal transformation and the vertical transformation, a transformation set index can be signaled individually to identify the transformation set for the horizontal transformation, and another transformation set index to identify the transformation set for the vertical transformation. [0336] [0337] For example, a transformation set corresponding to Index 0 can be used for the horizontal transformation, and a transformation set corresponding to Index 1 can be used for the vertical transformation. If the current block is a 4x4 block coded with intra prediction, the vertical transformation and the horizontal transformation can each use transformation type candidate 1 of the corresponding transformation set. Accordingly, DST-II can be used for the horizontal transformation and DST-I can be used for the vertical transformation. [0338] [0339] Whether to use the same transformation set for the horizontal transformation and the vertical transformation can be determined depending on the intra prediction mode of the current block. For convenience of explanation, the transformation set for the horizontal transformation is referred to as a horizontal direction transformation set, and the transformation set for the vertical transformation is referred to as a vertical direction transformation set. [0340] [0341] For example, when the intra prediction mode of the current block is similar to the horizontal direction or similar to the vertical direction, the horizontal transformation and the vertical transformation can use different transformation sets.
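One way to realize this similarity test is a window around the horizontal and vertical direction modes. Assuming a 35-mode HEVC-style numbering (horizontal = 10, vertical = 26) and a ±3 window, both of which are our assumptions, the ranges 7-13 and 23-29 shown in FIG. 9 follow directly:

```python
H_MODE, V_MODE = 10, 26   # HEVC-style 35-mode numbering (assumption)
WINDOW = 3                # the "predefined value" for mode similarity (assumption)

def uses_separate_sets(intra_mode):
    """True when the mode is similar to the horizontal or vertical direction,
    i.e. falls in 7..13 or 23..29; non-directional and other directional modes
    share one transformation set for both transforms."""
    return (abs(intra_mode - H_MODE) <= WINDOW
            or abs(intra_mode - V_MODE) <= WINDOW)
```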
In this case, an intra prediction mode similar to the vertical direction may include at least one of the vertical direction intra prediction mode or the intra prediction modes whose difference in mode value from the vertical direction intra prediction mode is less than a predefined value. In addition, an intra prediction mode similar to the horizontal direction may include at least one of the horizontal direction intra prediction mode or the intra prediction modes whose difference in mode value from the horizontal direction intra prediction mode is less than a predefined value. On the other hand, when the intra prediction mode of the current block is a non-directional mode or a directional mode that does not satisfy the above condition, the vertical transformation and the horizontal transformation can use the same transformation set. Alternatively, it is also possible to use different transformation sets for the vertical transformation and the horizontal transformation of the current block when the intra prediction mode of the current block is a non-directional mode. [0342] [0343] FIG. 9 is a diagram illustrating, for 33 intra prediction modes, whether a vertical transformation and a horizontal transformation use the same transformation set. In the example shown in FIG. 9, the vertical and horizontal transformations use different transformation sets when the intra prediction mode of the current block is included in a range of 7-13 or 23-29. On the other hand, the same transformation set is applied for the vertical transformation and the horizontal transformation when the intra prediction mode of the current block is a directional mode not included in those ranges. [0344] [0345] If there is a block that has the same intra prediction mode as the current block in a predetermined unit block, the transformation set of the current block can be set to be the same as the transformation set of the block that has the same intra prediction mode as the current block. In this case, the predetermined unit block may be a coding block, a coding tree block, or a block having a predetermined size. [0346] [0347] For example, assume that an intra prediction mode corresponding to a first transformation block in a scan order in a coding block has a vertical direction (e.g., mode number 26), a horizontal direction transformation set of the block is Index 2, and a vertical direction transformation set of the block is Index 0. If there are further transformation blocks in the coding block that have the vertical direction intra prediction mode (i.e., transformation blocks corresponding to a prediction block having the vertical direction intra prediction mode), a value of the transformation set index for the newly scanned transformation block is not signaled. Instead, the transformation set of the previously scanned transformation block having the vertical direction intra prediction mode is applied as the transformation set of the newly scanned transformation block. That is, a horizontal direction transformation set of the newly scanned transformation block is determined as Index 2, and a vertical direction transformation set is determined as Index 0. [0348] As another example, when there is a block that has an intra prediction mode similar to that of the current block in a predetermined unit block, the transformation set of the current block can be set to be the same as the transformation set of the block that has the intra prediction mode similar to that of the current block. In this case, the intra prediction modes similar to that of the current block may include intra prediction modes within a predetermined range from a reference intra prediction mode.
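The set-inheritance rule in the example above, where a newly scanned transformation block reuses the transformation sets of an earlier block whose intra prediction mode matches within a tolerance, might be sketched as follows (the data layout and the tolerance default are our assumptions):

```python
def assign_transform_sets(blocks, a=1):
    """blocks: dicts in scan order with 'mode' and, when signaled,
    'h_set'/'v_set' indices. A block without signaled sets inherits them from
    the first previously scanned block whose intra mode differs by at most a,
    so no transformation set index needs to be signaled for it."""
    done = []
    for b in blocks:
        b = dict(b)
        if 'h_set' not in b:
            donor = next((d for d in done
                          if 'h_set' in d and abs(d['mode'] - b['mode']) <= a), None)
            if donor is not None:
                b['h_set'], b['v_set'] = donor['h_set'], donor['v_set']
        done.append(b)
    return done
```

With a = 0 this reproduces the same-mode case; with a > 0 it covers the similar-mode case as well.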
For example, when the reference intra prediction mode is the horizontal direction or the vertical direction, the reference intra prediction mode and the intra prediction modes within ±a of the horizontal direction or the vertical direction intra prediction mode can be determined to be mutually similar. [0349] [0350] For example, assume that an intra prediction mode corresponding to a first transformation block in a scan order in a coding block has a vertical direction (e.g., mode number 26), a horizontal direction transformation set of the block is Index 2, and a vertical direction transformation set of the block is Index 0. When there is a transformation block in the coding block that has an intra prediction mode similar to the vertical direction (for example, mode number 27), i.e., a transformation block corresponding to a prediction block having an intra prediction mode similar to the vertical direction, a value of the transformation set index for the newly scanned transformation block may not be signaled. Instead, the transformation set of the transformation block whose intra prediction mode is similar to the intra prediction mode of the current block can be applied as the transformation set of the newly scanned transformation block. That is, a horizontal direction transformation set of the newly scanned transformation block is determined as Index 2, and a vertical direction transformation set can be determined as Index 0. [0351] [0352] At least one of a horizontal direction transformation set or a vertical direction transformation set of the current block can be determined based on an intra prediction mode of the current block. For example, Table 3 shows an example in which a fixed transformation set index is assigned according to the intra prediction mode of the current block.
[0353] [Table 3] [0354] [0355] [0356] [0357] [0358] When the current block is coded with inter prediction, a predefined transformation set can be used for the current block. For example, if the current block is coded with inter prediction, a transformation set corresponding to Index 0 can be used for the current block. [0359] [0360] Alternatively, when the coding block is encoded with inter prediction, a transformation set is selected for the coding block, and the transformation blocks included in the coding block can use the transformation type candidates included in the transformation set of the coding block. At this point, the transformation type of each transformation block can be determined by a size or shape of the transformation block, or information identifying the transformation type selected for each transformation block can be signaled through the bitstream. [0361] [0362] Determining at least one of a plurality of transformation type candidates as the transformation type of the current block can be defined as Adaptive Multiple Transform (AMT). The adaptive multiple transform can be applied to a coding block of a specific size or to a coding block of a specific shape. At this point, information on the size or the shape of the coding block to which the adaptive multiple transform can be applied can be signaled through the bitstream. In this case, the information on the size of the coding block can indicate at least one of a maximum size or a minimum size. In addition, the information can be signaled through at least one of a block level, a slice header, or a sequence header. [0363] Different transformations can be selectively used based on a size/shape in a unit of a slice or a block. [0364] [0365] For example, in slice 0, DST can be used when the transformation block is coded in the intra prediction mode and the transformation block size is 4x4, and DCT can be used in other cases.
In slice 1, DST can be used when the transformation block is encoded in the intra prediction mode and the size of the transformation block is less than or equal to 8x8, and DCT can be used in other cases. [0366] [0367] Different transformations can be selected based on at least one of an intra prediction mode and the number of samples in the transformation block. Specifically, for example, when the transformation block is coded in the intra prediction mode and the number of samples in the transformation block is 16 or fewer, the transformation can be performed using DST, and DCT can be used in other blocks. [0368] [0369] Specifically, for example, when the transformation block is coded in intra mode and the transformation block is 2x8, or when the transformation block is coded in intra mode and the transformation block is 8x2, DST (discrete sine transform) is used, and DCT-II (discrete cosine transform) is used in other blocks. [0370] [0371] At this point, a syntax element, a different-type transformation block selection indicator, can be signaled in a slice header or in a picture parameter set, indicating the number of samples in a block that is used as a reference to select different transformations. [0372] [0373] The conditions for selecting transformation type candidate 0 and the conditions for selecting transformation type candidate 1 may differ by a unit of a sequence, a slice or a block. For example, in slice 0, transformation type candidate 0 is selected only for a 4x4 transformation block encoded in intra mode, while in slice 1, transformation type candidate 0 is selected for a transformation block of 8x8 or smaller encoded in intra mode. [0374] Alternatively, the transformation type can be selected adaptively based on at least one of an intra prediction mode or the number of samples in the block.
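The slice-dependent and sample-count rules above might be sketched as follows; the slice identifiers and the DST/DCT labels follow the text, while the function shapes and the default threshold are our assumptions:

```python
def pick_transform_by_slice(slice_id, w, h, is_intra):
    """Slice 0 restricts DST to 4x4 intra blocks; slice 1 allows it for intra
    blocks of 8x8 or smaller; everything else uses DCT."""
    if is_intra:
        if slice_id == 0 and (w, h) == (4, 4):
            return "DST"
        if slice_id == 1 and w <= 8 and h <= 8:
            return "DST"
    return "DCT"

def pick_transform_by_samples(w, h, is_intra, threshold=16):
    """Sample-count rule: e.g. 4x4, 2x8 and 8x2 intra blocks all have at most
    16 samples and therefore select DST; other blocks select DCT."""
    return "DST" if is_intra and w * h <= threshold else "DCT"
```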
At this time, the number of samples in the block used as a reference to select the transformation type can have a fixed value or can be determined through information signaled through the bitstream. The information can be signaled by means of a block level, a slice header, or a picture parameter set. [0375] [0376] For example, DST can be applied only when the current block comprises 16 or fewer samples and the current block is coded in intra mode, and DCT can be applied in other cases. Specifically, DST can be applied to a 4x4, 2x8 or 8x2 transformation block encoded with intra prediction, and DCT can be applied to other blocks. [0377] [0378] Although the embodiments described above have been described on the basis of a series of steps or flowcharts, they do not limit the time-series order of the invention, and the steps can be performed simultaneously or in different orders as necessary. In addition, each of the components (e.g., units, modules, etc.) constituting the block diagram in the embodiments described above can be implemented by a hardware or software device, and a plurality of components may be combined and implemented by a single hardware or software device. The above-described embodiments can be implemented in the form of program instructions that can be executed through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium can include one or a combination of program commands, data files, data structures and the like. Examples of computer-readable media include magnetic media such as hard disks, floppy disks and magnetic tape, optical recording media such as CD-ROM and DVD, magneto-optical media such as floptical disks, and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, flash memory and the like.
The hardware device can be configured to operate as one or more software modules for performing the process according to the present invention, and vice versa. [0379] Industrial applicability [0380] The present invention can be applied to electronic devices that are capable of encoding/decoding a video.
权利要求:
Claims (15) [1] 1. A method for decoding a video, the method comprising: obtaining a transformation coefficient of a current block; performing an inverse quantization of the transformation coefficient; determining a transformation set for the current block, the transformation set comprising a plurality of transformation type candidates; determining one of the plurality of transformation type candidates as a transformation type of the current block; and performing an inverse transformation of the inverse-quantized transformation coefficient based on the determined transformation type. [2] The method according to claim 1, wherein the transformation set of the current block is determined based on index information decoded through a bitstream, and the index information indicates at least one of a plurality of transformation sets. [3] 3. The method according to claim 2, wherein at least one of a type or a number of transformation type candidates for each of the plurality of transformation sets is different. [4] The method according to claim 1, wherein at least one of a type or a number of transformation type candidates included in the transformation set is determined differently according to whether or not a transform skip is allowed. [5] The method according to claim 1, wherein the inverse transformation comprises a horizontal transformation and a vertical transformation, and a transformation set for the horizontal transformation and a transformation set for the vertical transformation are determined independently. [6] 6. The method according to claim 5, wherein the transformation set for the horizontal transformation and the transformation set for the vertical transformation are determined according to an intra prediction mode of the current block. [7] The method according to claim 1, wherein the transformation type of the current block is determined adaptively based on at least one of: a size, a shape or a number of samples of the current block. [8] 8.
A method for coding a video, the method comprising: obtaining a transformation coefficient of a current block; performing an inverse quantization of the transformation coefficient; determining a transformation set for the current block, the transformation set comprising a plurality of transformation type candidates; determining one of the plurality of transformation type candidates as a transformation type of the current block; and performing an inverse transformation of the inverse-quantized transformation coefficient based on the determined transformation type. [9] The method according to claim 8, wherein the transformation set of the current block is determined based on index information indicating at least one of a plurality of transformation sets. [10] The method according to claim 9, wherein at least one of a type or a number of transformation type candidates for each of the plurality of transformation sets is different. [11] The method according to claim 8, wherein at least one of a type or a number of transformation type candidates included in the transformation set is determined differently according to whether or not a transform skip is allowed. [12] The method according to claim 8, wherein the inverse transformation comprises a horizontal transformation and a vertical transformation, and a transformation set for the horizontal transformation and a transformation set for the vertical transformation are determined independently. [13] 13. The method according to claim 12, wherein the transformation set for the horizontal transformation and the transformation set for the vertical transformation are determined according to an intra prediction mode of the current block. [14] 14. The method according to claim 8, wherein the transformation type of the current block is determined adaptively based on at least one of: a size, a shape or a number of samples of the current block. [15] 15.
An apparatus for decoding a video, the apparatus comprising: an entropy decoding unit for decoding a transformation coefficient of a current block; an inverse quantization unit for performing an inverse quantization of the transformation coefficient; and an inverse transformation unit for determining a transformation set for the current block, the transformation set comprising a plurality of transformation type candidates, for determining one of the plurality of transformation type candidates as a transformation type of the current block, and for performing an inverse transformation of the inverse-quantized transformation coefficient based on the determined transformation type.
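The decoding flow recited in claim 1 (obtain coefficients, inverse-quantize, determine a transformation set, determine a transformation type, inverse-transform) might be sketched schematically as follows; all names, the scalar dequantizer and the 4x4-intra condition are our illustrative assumptions, not the patent's implementation:

```python
# Illustrative transformation sets; only Index 0 and Index 2 follow the
# examples given earlier in the description.
SETS = {0: ["DST-VII", "DCT-II"], 2: ["DST-VII", "DCT-VIII"]}

def decode_block(levels, qstep, set_index, is_intra, w, h):
    # 1) inverse quantization of the transformation coefficients
    coeffs = [[v * qstep for v in row] for row in levels]
    # 2) determine the transformation set for the current block
    candidates = SETS[set_index]
    # 3) determine one candidate as the transformation type of the block
    ttype = candidates[0] if (is_intra and (w, h) == (4, 4)) else candidates[1]
    # 4) the inverse transformation of coeffs with ttype would follow here
    return coeffs, ttype
```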
同族专利:
公开号 | 公开日 ES2711474R1|2020-07-01| CN109644281A|2019-04-16| WO2018044088A1|2018-03-08| CN113873242A|2021-12-31| US20210105475A1|2021-04-08| EP3509306A4|2020-05-13| US20190191163A1|2019-06-20| US20210105474A1|2021-04-08| KR20180025284A|2018-03-08| US20210105473A1|2021-04-08| CN109644281B|2021-10-29| US10764583B2|2020-09-01| CN113873243A|2021-12-31| EP3509306A1|2019-07-10| CN113873241A|2021-12-31| US20200351501A1|2020-11-05|
法律状态:
2019-05-03| BA2A| Patent application published|Ref document number: 2711474 Country of ref document: ES Kind code of ref document: A2 Effective date: 20190503 | 2020-07-01| EC2A| Search report published|Ref document number: 2711474 Country of ref document: ES Kind code of ref document: R1 Effective date: 20200624 |
优先权:
申请号 | 申请日 | 专利标题 KR20160112127|2016-08-31| PCT/KR2017/009526|WO2018044088A1|2016-08-31|2017-08-31|Method and device for processing video signal| 相关专利