Method and device for processing video signal
Patent abstract:
A method for decoding an image according to the present invention may comprise: a step of deriving a reference sample of a current block; a step of determining an intra-prediction mode of the current block; and a step of obtaining a prediction sample of the current block by using the reference sample and the intra-prediction mode.
Publication number: ES2711223A2
Application number: ES201990015
Filing date: 2017-08-31
Publication date: 2019-04-30
Inventor: Bae Keun Lee
Applicant: KT Corp
IPC main class:
Patent description:
[0001] [0002] [0003] [0004] Technical field [0005] The present invention relates to a method and apparatus for processing video signals. [0006] Background art [0007] Recently, demand for high-resolution, high-quality images such as high definition (HD) images and ultra high definition (UHD) images has increased in various fields of application. However, higher-resolution, higher-quality image data involve ever greater amounts of data compared to conventional image data. Therefore, when image data are transmitted over a medium such as wired and wireless broadband networks, or stored on a conventional storage medium, transmission and storage costs increase. To solve these problems, which occur as the resolution and quality of image data increase, high-efficiency image encoding/decoding techniques can be used. [0008] Image compression technology includes several techniques, among them: an inter-prediction technique of predicting a pixel value included in a current image from an image preceding or following the current image; an intra-prediction technique of predicting a pixel value included in a current image using pixel information within the current image; an entropy coding technique of assigning a short code to a value with a high occurrence frequency and a long code to a value with a low occurrence frequency; etc. Image data can be compressed effectively using such image compression technology, and can then be transmitted or stored. [0009] Meanwhile, along with the demand for high-resolution images, demand for stereoscopic image content, a new image service, has also increased. Video compression techniques for effectively providing stereoscopic image content with high and ultra-high resolution are under discussion. [0010] Disclosure [0011] Technical problem [0012] An object of the present invention is to provide a method and apparatus for efficiently performing intra-prediction on an encoding/decoding target block when encoding/decoding a video signal. [0013] An object of the present invention is to provide a method and apparatus for deriving a new reference sample in planar mode based on previously derived reference samples when encoding/decoding a video signal. [0014] An object of the present invention is to provide a method and apparatus for deriving a prediction sample based on a weighted sum of a plurality of reference samples when encoding/decoding a video signal. [0015] The technical objectives to be achieved by the present invention are not limited to the technical problems mentioned above, and other technical problems that are not mentioned will be clearly understood by those skilled in the art from the following description. [0016] Technical solution [0017] A method and apparatus for decoding a video signal according to the present invention can derive a reference sample for a current block, determine an intra-prediction mode of the current block, and obtain a prediction sample for the current block using the reference sample and the intra-prediction mode. Here, if the intra-prediction mode is the planar mode, the prediction sample is generated based on a first prediction sample generated using at least one of the reference samples located on the same horizontal line as a prediction target sample, and a second prediction sample generated using at least one of the reference samples located on the same vertical line as the prediction target sample.
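By way of illustration, the following is a minimal sketch of the planar prediction summarized in paragraph [0017], assuming an HEVC-style derivation generalized to non-square blocks; the function name and the use of the top-right and bottom-left neighbors as the derived right and bottom references are assumptions made for clarity, not the exact derivation claimed in the paragraphs that follow.

```python
import numpy as np

def planar_prediction(top, left, width, height):
    # top needs width + 1 samples, left needs height + 1 samples.
    # Derived references (assumption: simple copies, as in HEVC planar;
    # paragraphs [0019]-[0021] below derive them from the top/left rows).
    right = top[width]      # sample to the top-right of the block
    bottom = left[height]   # sample to the bottom-left of the block

    pred = np.zeros((height, width), dtype=np.int32)
    for y in range(height):
        for x in range(width):
            # First prediction sample: weighted sum along the horizontal
            # line through (x, y), between the left and right references.
            horz = (width - 1 - x) * left[y] + (x + 1) * right
            # Second prediction sample: weighted sum along the vertical
            # line, between the top and bottom references.
            vert = (height - 1 - y) * top[x] + (y + 1) * bottom
            # Final sample: weighted sum of the two, with rounding.
            pred[y, x] = (horz * height + vert * width
                          + width * height) // (2 * width * height)
    return pred
```

For a square block of size N, the final line reduces to (horz + vert + N) // (2N), which matches the familiar HEVC planar formula.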
[0018] A method and apparatus for encoding a video signal according to the present invention can derive a reference sample for a current block, determine an intra-prediction mode of the current block, and obtain a prediction sample for the current block using the reference sample and the intra-prediction mode. Here, if the intra-prediction mode is the planar mode, the prediction sample is generated based on a first prediction sample generated using at least one of the reference samples located on the same horizontal line as a prediction target sample, and a second prediction sample generated using at least one of the reference samples located on the same vertical line as the prediction target sample. [0019] In the method and apparatus for encoding/decoding a video signal according to the present invention, the reference samples may comprise a top reference sample and a left reference sample adjacent to the current block; the first prediction sample may be generated using at least one right reference sample derived based on the left reference sample and the top reference sample, and the second prediction sample may be generated using at least one bottom reference sample derived based on the top reference sample and the left reference sample. [0020] In the method and apparatus for encoding/decoding a video signal according to the present invention, a position of the top reference sample used to derive the right reference sample, or a position of the left reference sample used to derive the bottom reference sample, may be determined adaptively depending on the size or shape of the current block. [0021] In the method and apparatus for encoding/decoding a video signal according to the present invention, the first prediction sample may be generated based on a weighted sum of the left reference sample and the right reference sample, and the second prediction sample may be generated based on a weighted sum of the top reference sample and the bottom reference sample. [0022] In the method and apparatus for encoding/decoding a video signal according to the present invention, the number of reference samples used to derive the first prediction sample or the second prediction sample may be determined differently depending on the position of the prediction target sample. [0023] In the method and apparatus for encoding/decoding a video signal according to the present invention, the prediction sample may be generated based on a weighted sum of the first prediction sample and the second prediction sample. [0024] In the method and apparatus for encoding/decoding a video signal according to the present invention, the weights for the first prediction sample and the second prediction sample may be determined differently according to the shape of the current block. [0025] The features briefly summarized above are only illustrative aspects of the detailed description of the invention that follows, and do not limit the scope of the invention. [0026] Advantageous effects [0027] According to the present invention, intra-prediction can be performed efficiently on an encoding/decoding target block. [0028] According to the present invention, prediction efficiency can be improved by deriving a new reference sample in planar mode based on previously derived reference samples. 
[0029] According to the present invention, prediction efficiency can be improved by deriving a prediction sample based on a weighted sum of a plurality of reference samples. [0030] The effects obtainable by the present invention are not limited to the effects mentioned above, and other effects not mentioned can be clearly understood by those skilled in the art from the following description. [0031] Description of the drawings [0032] Figure 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention. [0033] Figure 2 is a block diagram illustrating a device for decoding a video according to an embodiment of the present invention. [0034] Figure 3 is a diagram illustrating an example of hierarchically partitioning a coding block based on a tree structure according to an embodiment of the present invention. [0035] Figure 4 is a diagram illustrating partition types in which binary tree-based partitioning is allowed according to an embodiment of the present invention. [0036] Figure 5 is a diagram illustrating an example in which only binary tree-based partitions of a predetermined type are allowed according to an embodiment of the present invention. [0037] Figure 6 is a diagram for explaining an example in which information related to the allowed number of binary tree partitions is encoded/decoded, according to an embodiment to which the present invention is applied. [0038] Figure 7 is a diagram illustrating partition modes applicable to a coding block according to an embodiment of the present invention. [0039] Figure 8 is a diagram illustrating types of predefined intra-prediction modes for a device for encoding/decoding a video according to an embodiment of the present invention. [0040] Figure 9 is a diagram illustrating extended intra-prediction modes according to an embodiment of the present invention. [0041] Figure 10 is a flowchart briefly illustrating an intra-prediction method according to an embodiment of the present invention. [0042] Figure 11 is a diagram illustrating a method of correcting a prediction sample of a current block based on differential information of neighboring samples according to an embodiment of the present invention. [0043] Figures 12 and 13 are diagrams illustrating a method of correcting a prediction sample based on a predetermined correction filter according to an embodiment of the present invention. [0044] Figure 14 shows a range of reference samples for intra-prediction according to an embodiment to which the present invention is applied. [0045] Figures 15 to 17 illustrate examples of filtering on reference samples according to an embodiment of the present invention. [0046] Figure 18 is a diagram showing an example of deriving a right reference sample or a bottom reference sample using a plurality of reference samples. [0047] Figures 19 and 20 are diagrams for explaining determination of a right reference sample and a bottom reference sample for a non-square block, according to an embodiment of the present invention. [0048] Figure 21 is a flowchart illustrating a process of obtaining a residual sample according to an embodiment to which the present invention is applied. [0049] MODE OF CARRYING OUT THE INVENTION [0050] A variety of modifications can be made to the present invention and there are various embodiments of the present invention, examples of which will now be provided with reference to the drawings and described in detail. 
However, the present invention is not limited thereto, and the exemplary embodiments should be construed as including all modifications, equivalents or substitutes within the technical concept and technical scope of the present invention. Like reference numerals refer to like elements in the drawings. [0051] Terms used in the specification, such as "first", "second", etc., can be used to describe various components, but the components are not to be construed as being limited by these terms. The terms are only used to distinguish one component from another component. For example, the "first" component may be referred to as the "second" component without departing from the scope of the present invention, and the "second" component may similarly be referred to as the "first" component. The term "and/or" includes a combination of a plurality of items or any one of a plurality of items. [0052] It will be understood that when an element is simply referred to as being 'connected to' or 'coupled to' another element, without being 'directly connected to' or 'directly coupled to' that other element, it may be 'directly connected to' or 'directly coupled to' the other element, or be connected or coupled to the other element with a further element intervening therebetween. In contrast, it should be understood that when an element is referred to as being "directly coupled" or "directly connected" to another element, there are no intervening elements present. [0053] The terms used in the present specification are merely used to describe particular embodiments, and are not intended to limit the present invention. An expression used in the singular encompasses the plural, unless it has a clearly different meaning in context. In the present specification, it should be understood that terms such as "including", "having", etc. are intended to indicate the existence of the features, numbers, steps, actions, elements, parts or combinations thereof disclosed in the specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, elements, parts or combinations thereof may exist or be added. [0054] Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following, the same constituent elements in the drawings are denoted by the same reference numerals, and repeated description of the same elements will be omitted. [0055] Figure 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention. [0056] Referring to Figure 1, the device 100 for encoding a video may include: an image partition module 110, prediction modules 120 and 125, a transformation module 130, a quantization module 135, a reorganization module 160, an entropy coding module 165, an inverse quantization module 140, an inverse transformation module 145, a filter module 150 and a memory 155. [0057] The constituent parts shown in Figure 1 are shown independently so as to represent different characteristic functions in the device for encoding a video. This does not mean that each constituent part is constituted as a separate hardware or software unit. In other words, each constituent part is listed as a respective constituent part for convenience. 
Therefore, at least two constituent parts may be combined to form a single constituent part, or one constituent part may be divided into a plurality of constituent parts to perform each function. Embodiments in which constituent parts are combined and embodiments in which a constituent part is divided are also included in the scope of the present invention, provided they do not depart from the essence of the present invention. [0058] Furthermore, some of the constituents may not be indispensable constituents that perform essential functions of the present invention, but may be optional constituents that merely improve its performance. The present invention can be implemented by including only the constituent parts indispensable for implementing the essence of the present invention, excluding the constituents used merely to improve performance. A structure that includes only the indispensable constituents, excluding the optional constituents used merely to improve performance, is also included in the scope of the present invention. [0059] The image partition module 110 can divide an input image into one or more processing units. Here, a processing unit may be a prediction unit (PU), a transformation unit (TU) or a coding unit (CU). The image partition module 110 can divide an image into combinations of multiple coding units, prediction units and transformation units, and can encode an image by selecting one combination of coding units, prediction units and transformation units according to a predetermined criterion (for example, a cost function). [0060] For example, an image can be divided into multiple coding units. A recursive tree structure, such as a quad tree structure, can be used to divide an image into coding units. A coding unit which is divided into other coding units, with an image or the largest coding unit as a root, can be split with child nodes corresponding to the number of divided coding units. [0061] A coding unit that can no longer be divided due to a predetermined restriction serves as a leaf node. That is, when it is assumed that only square partitioning is possible for a coding unit, one coding unit can be divided into at most four other coding units. [0062] Hereinafter, in embodiments of the present invention, the coding unit may mean a unit that performs encoding, or a unit that performs decoding. [0063] A prediction unit may be one of partitions having a square or rectangular shape of the same size within a single coding unit, or a prediction unit may be one of partitions having different shapes/sizes within a single coding unit. [0064] When a prediction unit subjected to intra-prediction is generated based on a coding unit and the coding unit is not the smallest coding unit, intra-prediction can be performed without dividing the coding unit into multiple NxN prediction units. [0065] The prediction modules 120 and 125 may include an inter-prediction module 120 that performs inter-prediction and an intra-prediction module 125 that performs intra-prediction. Whether to perform inter-prediction or intra-prediction for a prediction unit can be determined, and detailed information (for example, an intra-prediction mode, a motion vector, a reference image, etc.) can be determined according to each prediction method. Here, the processing unit subjected to prediction may be different from the processing unit for which the prediction method and its detailed content are determined. 
For example, the prediction method, the prediction mode, etc. may be determined on the basis of the prediction unit, and the prediction may be performed on the basis of the transformation unit. A residual value (residual block) between the generated prediction block and the original block may be input to the transformation module 130. In addition, the prediction mode information, motion vector information, etc. used for the prediction may be encoded together with the residual value by the entropy coding module 165 and transmitted to a device for decoding a video. When a particular encoding mode is used, it is also possible to transmit to the device for decoding video by encoding the original block as it is, without generating the prediction block through the prediction modules 120 and 125. [0066] The inter-prediction module 120 can predict a prediction unit based on information of at least one of a previous image or a subsequent image of the current image, or, in some cases, can predict the prediction unit based on information of some encoded regions in the current image. The inter-prediction module 120 may include a reference image interpolation module, a motion prediction module and a motion compensation module. [0067] The reference image interpolation module may receive reference image information from the memory 155 and may generate pixel information of an integer pixel or less from the reference image. In the case of luminance pixels, an 8-tap DCT-based interpolation filter with varying filter coefficients can be used to generate pixel information of an integer pixel or less in units of 1/4 pixel. In the case of chrominance signals, a 4-tap DCT-based interpolation filter with varying filter coefficients can be used to generate pixel information of an integer pixel or less in units of 1/8 pixel. [0068] The motion prediction module can perform motion prediction based on the reference image interpolated by the reference image interpolation module. As methods for calculating a motion vector, various methods can be used, such as a full search-based block matching algorithm (FBMA), a three-step search (TSS), a new three-step search algorithm (NTS), etc. The motion vector can have a motion vector value in units of 1/2 pixel or 1/4 pixel based on an interpolated pixel. The motion prediction module can predict a current prediction unit by varying the motion prediction method. Various methods can be used as motion prediction methods, such as a skip method, a merge method, an AMVP (Advanced Motion Vector Prediction) method, an intra block copy method, etc. [0069] The intra-prediction module 125 can generate a prediction unit based on reference pixel information adjacent to a current block, which is pixel information in the current image. When a neighboring block of the current prediction unit is a block subjected to inter-prediction and, therefore, a reference pixel is a pixel subjected to inter-prediction, the reference pixel included in the block subjected to inter-prediction can be replaced with reference pixel information of a neighboring block subjected to intra-prediction. That is, when a reference pixel is not available, at least one reference pixel among the available reference pixels can be used in place of the unavailable reference pixel information. 
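The substitution described in paragraph [0069] can be pictured with the following hedged sketch; the HEVC-style padding (propagate the first available sample backwards, then fill forward) and the mid-range fallback are assumptions, since the text only states that an available reference pixel replaces an unavailable one.

```python
def substitute_reference_pixels(refs, available, bit_depth=8):
    # refs:      reference pixel values (value irrelevant where unavailable)
    # available: True where the neighboring pixel is usable as a reference
    n = len(refs)
    if not any(available):
        # No usable neighbor at all: fall back to the mid-range value
        # (assumed; e.g. 128 for 8-bit samples).
        return [1 << (bit_depth - 1)] * n
    out = list(refs)
    first = next(i for i in range(n) if available[i])
    for i in range(first - 1, -1, -1):   # propagate the first available
        out[i] = out[i + 1]              # sample backwards
    for i in range(first + 1, n):        # fill forward: each unavailable
        if not available[i]:             # pixel copies its predecessor
            out[i] = out[i - 1]
    return out
```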
[0070] Prediction modes in intra-prediction can include directional prediction modes, which use reference pixel information depending on the prediction direction, and non-directional prediction modes, which do not use directional information when making the prediction. A mode for predicting luminance information may be different from a mode for predicting chrominance information, and in order to predict the chrominance information, the intra-prediction mode information used to predict the luminance information or the predicted luminance signal information may be used. [0071] When performing intra-prediction, if the size of the prediction unit is the same as the size of the transformation unit, intra-prediction can be performed on the prediction unit based on the pixels located to the left, top-left and top of the prediction unit. However, when performing intra-prediction, if the size of the prediction unit is different from the size of the transformation unit, intra-prediction can be performed using a reference pixel based on the transformation unit. In addition, intra-prediction using an NxN partition can be used only for the smallest coding unit. [0072] In the intra-prediction method, a prediction block can be generated after applying an AIS (adaptive intra smoothing) filter to a reference pixel depending on the prediction mode. The type of AIS filter applied to the reference pixel may vary. To perform the intra-prediction method, an intra-prediction mode of the current prediction unit can be predicted from the intra-prediction mode of a prediction unit adjacent to the current prediction unit. In predicting the prediction mode of the current prediction unit using mode information predicted from the neighboring prediction unit, when the intra-prediction mode of the current prediction unit is the same as the intra-prediction mode of the neighboring prediction unit, information indicating that the prediction modes of the current prediction unit and the neighboring prediction unit are equal to each other can be transmitted using predetermined flag information. When the prediction mode of the current prediction unit is different from the prediction mode of the neighboring prediction unit, entropy coding can be performed to encode the prediction mode information of the current block. [0073] In addition, a residual block including information on a residual value, which is the difference between the prediction unit subjected to prediction and the original block of the prediction unit, can be generated based on the prediction units generated by the prediction modules 120 and 125. The generated residual block can be input to the transformation module 130. [0074] The transformation module 130 can transform the residual block, including the residual value information between the original block and the prediction unit generated by the prediction modules 120 and 125, by using a transformation method such as the discrete cosine transform (DCT), the discrete sine transform (DST) or the KLT. Whether to apply DCT, DST or KLT to transform the residual block can be determined based on the intra-prediction mode information of the prediction unit used to generate the residual block. [0075] The quantization module 135 can quantize the values transformed to a frequency domain by the transformation module 130. The quantization coefficients may vary according to the block or the importance of an image. 
The values calculated by the quantization module 135 can be provided to the inverse quantization module 140 and the reorganization module 160. [0076] The reorganization module 160 can rearrange the coefficients of the quantized residual values. [0077] The reorganization module 160 can change coefficients in the form of a two-dimensional block into coefficients in the form of a one-dimensional vector through a coefficient scanning method. For example, the reorganization module 160 may scan from a DC coefficient to a coefficient in the high-frequency domain using a zigzag scanning method, to change the coefficients into the form of a one-dimensional vector. Depending on the size of the transformation unit and the intra-prediction mode, vertical scanning, in which the coefficients of the two-dimensional block are scanned in the column direction, or horizontal scanning, in which the coefficients of the two-dimensional block are scanned in the row direction, may be used instead of zigzag scanning. That is, which of zigzag scanning, vertical scanning and horizontal scanning is used can be determined according to the size of the transformation unit and the intra-prediction mode. [0078] The entropy coding module 165 can perform entropy coding based on the values calculated by the reorganization module 160. The entropy coding can use various coding methods, for example, exponential Golomb coding, context-adaptive variable length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC). [0079] The entropy coding module 165 may encode a variety of information, such as residual value coefficient information and block type information of the coding unit, prediction mode information, partition unit information, prediction unit information, transformation unit information, motion vector information, reference frame information, block interpolation information, filtering information, etc. from the reorganization module 160 and the prediction modules 120 and 125. [0080] The entropy coding module 165 may entropy-encode the coefficients of the coding unit input from the reorganization module 160. [0081] The inverse quantization module 140 can inversely quantize the values quantized by the quantization module 135, and the inverse transformation module 145 can inversely transform the values transformed by the transformation module 130. The residual value generated by the inverse quantization module 140 and the inverse transformation module 145 can be combined with the prediction unit predicted through a motion estimation module, a motion compensation module and the intra-prediction module of the prediction modules 120 and 125, such that a reconstructed block can be generated. [0082] The filter module 150 may include at least one of a deblocking filter, an offset correction unit and an adaptive loop filter (ALF). [0083] The deblocking filter can remove block distortion that occurs due to boundaries between blocks in the reconstructed image. To determine whether to perform deblocking, the pixels included in several rows or columns of the block can be the basis for determining whether the deblocking filter is applied to the current block. When the deblocking filter is applied to the block, a strong filter or a weak filter can be applied depending on the required deblocking filtering strength. 
In addition, when applying the deblocking filter, horizontal-direction filtering and vertical-direction filtering can be processed in parallel. [0084] The offset correction module can correct an offset from the original image in units of a pixel in the image subjected to deblocking. To perform offset correction on a particular image, it is possible to use a method of applying an offset in consideration of the edge information of each pixel, or a method of partitioning the pixels of an image into a predetermined number of regions, determining a region on which an offset is to be performed, and applying the offset to the determined region. [0085] Adaptive loop filtering (ALF) can be performed according to a value obtained by comparing the filtered reconstructed image and the original image. The pixels included in the image can be divided into predetermined groups, a filter to be applied to each of the groups can be determined, and filtering can be performed individually for each group. Information on whether to apply the ALF and the luminance signal can be transmitted per coding unit (CU). The shape and filter coefficients of a filter for the ALF can vary for each block. Alternatively, the filter for the ALF of the same form (fixed form) can be applied regardless of the characteristics of the block to which it is applied. [0086] The memory 155 may store the reconstructed block or image calculated through the filter module 150. The stored reconstructed block or image may be provided to the prediction modules 120 and 125 when performing inter-prediction. [0087] Figure 2 is a block diagram illustrating a device for decoding a video according to an embodiment of the present invention. [0088] Referring to Figure 2, the device 200 for decoding a video may include: an entropy decoding module 210, a reorganization module 215, an inverse quantization module 220, an inverse transformation module 225, prediction modules 230 and 235, a filter module 240 and a memory 245. [0089] When a video bit stream is input from the device for encoding a video, the input bit stream can be decoded according to a process inverse to that of the device for encoding a video. [0090] The entropy decoding module 210 may perform entropy decoding according to a process inverse to the entropy coding performed by the entropy coding module of the device for encoding a video. For example, in correspondence with the methods performed by the device for encoding a video, various methods may be applied, such as exponential Golomb coding, context-adaptive variable length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC). [0091] The entropy decoding module 210 can decode information about the intra-prediction and inter-prediction performed by the device for encoding a video. [0092] The reorganization module 215 can perform rearrangement of the bit stream entropy-decoded by the entropy decoding module 210, based on the rearrangement method used in the device for encoding a video. The reorganization module can reconstruct and rearrange the coefficients in the form of one-dimensional vectors into coefficients in the form of two-dimensional blocks. The reorganization module 215 can receive information related to the coefficient scanning performed in the device for encoding a video, and can perform rearrangement through a method of inversely scanning the coefficients based on the scanning order performed in the device for encoding a video. 
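The three coefficient scans mentioned in paragraphs [0077] and [0092] (zigzag, horizontal and vertical) can be sketched as follows; the starting direction of the zigzag diagonals is an assumption, since the text does not fix it.

```python
def scan_coefficients(block, mode):
    # block: two-dimensional list of quantized coefficients
    h, w = len(block), len(block[0])
    if mode == "horizontal":   # row by row (row direction)
        return [block[y][x] for y in range(h) for x in range(w)]
    if mode == "vertical":     # column by column (column direction)
        return [block[y][x] for x in range(w) for y in range(h)]
    # Zigzag: walk anti-diagonals outwards from the DC coefficient,
    # alternating the traversal direction on each diagonal.
    order = []
    for d in range(h + w - 1):
        diag = [(y, d - y) for y in range(h) if 0 <= d - y < w]
        order.extend(diag if d % 2 else reversed(diag))
    return [block[y][x] for y, x in order]
```

The decoder's inverse scan in paragraph [0092] simply places the one-dimensional coefficients back into the two-dimensional block according to the same order.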
[0093] The inverse quantization module 220 can perform inverse quantization based on a quantization parameter received from the device for encoding a video and the rearranged coefficients of the block. [0094] The inverse transformation module 225 can perform the inverse transformation, that is, inverse DCT, inverse DST and inverse KLT, which is the inverse of the transformation, i.e., DCT, DST and KLT, performed by the transformation module on the quantization result in the device for encoding a video. The inverse transformation can be performed based on a transfer unit determined by the device for encoding a video. The inverse transformation module 225 of the device for decoding a video can selectively perform transformation schemes (for example, DCT, DST and KLT) depending on multiple pieces of information, such as the prediction method, the size of the current block, the prediction direction, etc. [0095] The prediction modules 230 and 235 can generate a prediction block based on information about the generation of the prediction block received from the entropy decoding module 210 and information on the previously decoded image or block received from the memory 245. [0096] As described above, just as in the operation of the device for encoding a video, when performing intra-prediction, if the size of the prediction unit is the same as the size of the transformation unit, intra-prediction can be performed on the prediction unit based on the pixels located to the left, top-left and top of the prediction unit. When performing intra-prediction, if the size of the prediction unit is different from the size of the transformation unit, intra-prediction can be performed using a reference pixel based on the transformation unit. In addition, intra-prediction using an NxN partition can be used only for the smallest coding unit. [0097] The prediction modules 230 and 235 may include a prediction unit determination module, an inter-prediction module and an intra-prediction module. The prediction unit determination module may receive a variety of information, such as prediction unit information, prediction mode information of an intra-prediction method, information about motion prediction of an inter-prediction method, etc. from the entropy decoding module 210, can divide a current coding unit into prediction units, and can determine whether inter-prediction or intra-prediction is performed on the prediction unit. By using the information required for inter-prediction of the current prediction unit received from the device for encoding a video, the inter-prediction module 230 can perform inter-prediction on the current prediction unit based on the information of at least one of a previous image or a subsequent image of the current image including the current prediction unit. Alternatively, inter-prediction can be performed based on the information of some pre-reconstructed regions in the current image including the current prediction unit. [0098] To perform inter-prediction, it can be determined for the coding unit which of a skip mode, a merge mode, an AMVP mode and an intra block copy mode is used as the motion prediction method of the prediction unit included in the coding unit. [0099] The intra-prediction module 235 can generate a prediction block based on pixel information in the current image. 
When the prediction unit is a prediction unit subjected to intra-prediction, intra-prediction can be performed based on the intra-prediction mode information of the prediction unit received from the device for encoding a video. The intra-prediction module 235 may include an adaptive intra smoothing (AIS) filter, a reference pixel interpolation module and a DC filter. The AIS filter performs filtering on the reference pixels of the current block, and whether to apply the filter can be determined depending on the prediction mode of the current prediction unit. AIS filtering can be performed on the reference pixels of the current block using the prediction mode of the prediction unit and the AIS filter information received from the device for encoding a video. When the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied. [0100] When the prediction mode of the prediction unit is a prediction mode in which intra-prediction is performed based on pixel values obtained by interpolating the reference pixels, the reference pixel interpolation module can interpolate the reference pixels to generate a reference pixel of an integer pixel or less. When the prediction mode of the current prediction unit is a prediction mode in which a prediction block is generated without interpolating the reference pixels, the reference pixels need not be interpolated. The DC filter can generate a prediction block through filtering when the prediction mode of the current block is the DC mode. [0101] The reconstructed block or image can be provided to the filter module 240. The filter module 240 can include the deblocking filter, the offset correction module and the ALF. [0102] Information on whether or not the deblocking filter is applied to the corresponding block or image, and information on which of the strong and weak filters is applied when the deblocking filter is applied, can be received from the device for encoding a video. The deblocking filter of the device for decoding a video can receive information about the deblocking filter from the device for encoding a video, and can perform deblocking filtering on the corresponding block. [0103] The offset correction module may perform offset correction on the reconstructed image based on the type of offset correction and the offset value information applied to the image at the time of encoding. [0104] The ALF can be applied to the coding unit based on information on whether to apply the ALF, ALF coefficient information, etc. received from the device for encoding a video. The ALF information can be provided as part of a particular parameter set. [0105] The memory 245 can store the reconstructed image or block for use as a reference image or reference block, and can provide the reconstructed image to an output module. [0106] As described above, in embodiments of the present invention, for convenience of explanation, the coding unit is used as a term representing a unit for encoding, but the coding unit can serve as a unit that performs decoding as well as encoding. [0107] In addition, a current block can represent a target block to be encoded/decoded. And the current block can represent a coding tree block (or a coding tree unit), a coding block (or a coding unit), a transformation block (or a transformation unit), a prediction block (or a prediction unit), or the like, depending on the encoding/decoding step. 
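As a sketch of the DC mode referred to in paragraph [0100]: every sample of the prediction block takes the average of the neighboring reference samples. The boundary filtering applied by the DC filter is omitted here for brevity, and the rounding is an assumption.

```python
def dc_prediction(top, left, width, height):
    # Average the top and left reference samples with rounding.
    total = sum(top[:width]) + sum(left[:height])
    dc = (total + (width + height) // 2) // (width + height)
    # Every sample of the prediction block takes the DC value.
    return [[dc] * width for _ in range(height)]
```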
[0108] An image can be encoded/decoded by being divided into base blocks having a square shape or a non-square shape. In this case, the base block can be referred to as a coding tree unit. The coding tree unit can be defined as a coding unit of the largest size allowed within a sequence or slice. Information on whether the coding tree unit has a square shape or a non-square shape, or information on the size of the coding tree unit, can be signaled through a sequence parameter set, an image parameter set or a slice header. The coding tree unit can be divided into smaller partitions. In this case, if it is assumed that the depth of a partition generated by dividing the coding tree unit is 1, the depth of a partition generated by dividing the partition having depth 1 can be defined as 2. That is, a partition generated by dividing a partition having depth k in the coding tree unit can be defined as having depth k+1. [0109] A partition of arbitrary size generated by dividing a coding tree unit can be defined as a coding unit. The coding unit can be divided recursively, or divided into base units for performing prediction, quantization, transformation or loop filtering, and the like. For example, a partition of arbitrary size generated by dividing the coding unit can be defined as a coding unit, or can be defined as a transformation unit or a prediction unit, which is a base unit for performing prediction, quantization, transformation or loop filtering, and the like. [0110] The partitioning of a coding tree unit or a coding unit can be performed based on at least one of a vertical line and a horizontal line. In addition, the number of vertical lines or horizontal lines that divide the coding tree unit or the coding unit may be at least one or more. For example, the coding tree unit or the coding unit can be divided into two partitions using one vertical line or one horizontal line, or the coding tree unit or the coding unit can be divided into three partitions using two vertical lines or two horizontal lines. Alternatively, the coding tree unit or the coding unit can be divided into four partitions having a length and a width of 1/2 by using one vertical line and one horizontal line. [0111] When a coding tree unit or a coding unit is divided into a plurality of partitions using at least one vertical line or at least one horizontal line, the partitions may have a uniform size or different sizes. Alternatively, any one partition can have a different size from the remaining partitions. [0112] In the embodiments described below, it is assumed that a coding tree unit or a coding unit is divided into a quad tree structure or a binary tree structure. However, it is also possible to divide a coding tree unit or a coding unit using a larger number of vertical lines or a larger number of horizontal lines. [0113] Figure 3 is a diagram illustrating an example of hierarchically partitioning a coding block based on a tree structure according to an embodiment of the present invention. [0114] An input video signal is decoded in predetermined block units. Such a predetermined unit for decoding the input video signal is a coding block. The coding block can be a unit on which intra/inter prediction, transformation and quantization are performed. 
In addition, a prediction mode (for example, an intra-prediction mode or an inter-prediction mode) is determined in units of a coding block, and the prediction blocks included in the coding block can share the determined prediction mode. The coding block can be a square or non-square block having an arbitrary size in a range from 8x8 to 64x64, or it can be a square or non-square block having a size of 128x128, 256x256 or more. [0115] Specifically, the coding block can be divided hierarchically based on at least one of a quad tree and a binary tree. Here, quad tree-based partitioning can mean that a 2Nx2N coding block is divided into four NxN coding blocks, and binary tree-based partitioning can mean that one coding block is divided into two coding blocks. Even if binary tree-based partitioning is performed, a square-shaped coding block can exist at the lower depth. [0116] Binary tree-based partitioning can be performed symmetrically or asymmetrically. The coding block divided based on the binary tree can be a square block or a non-square block, such as a rectangular shape. For example, a partition type in which binary tree-based partitioning is allowed may comprise at least one of a symmetric type of 2NxN (horizontal-direction non-square coding unit) or Nx2N (vertical-direction non-square coding unit), or an asymmetric type of nLx2N, nRx2N, 2NxnU or 2NxnD. [0117] Binary tree-based partitioning may be allowed limitedly to either the symmetric or the asymmetric type. In this case, constructing the coding tree unit with square blocks may correspond to quad tree CU partitioning, and constructing the coding tree unit with symmetric non-square blocks may correspond to binary tree partitioning. Constructing the coding tree unit with square blocks and symmetric non-square blocks may correspond to quad and binary tree CU partitioning. [0118] Binary tree-based partitioning can be performed on a coding block for which quad tree-based partitioning is no longer performed. Quad tree-based partitioning can no longer be performed on a coding block partitioned based on the binary tree. [0119] In addition, the partitioning of a lower depth can be determined depending on the partition type of the upper depth. For example, if binary tree-based partitioning is allowed at two or more depths, only the same type as the binary tree partition of the upper depth may be allowed at the lower depth. For example, if binary tree-based partitioning at the upper depth is performed with the 2NxN type, binary tree-based partitioning at the lower depth is also performed with the 2NxN type. Alternatively, if binary tree-based partitioning at the upper depth is performed with the Nx2N type, binary tree-based partitioning at the lower depth is also performed with the Nx2N type. [0120] On the contrary, it is also possible to allow, at the lower depth, only a type different from the binary tree partition type of the upper depth. [0121] It may be possible to limit to only one specific type the binary tree-based partitions used for a sequence, slice, coding tree unit or coding unit. As an example, only the 2NxN type or the Nx2N type of binary tree-based partitioning may be allowed for the coding tree unit. The available partition type can be predefined in the encoder or the decoder. Or, information about the available partition type or about the unavailable partition type can be encoded and then signaled through a bit stream. 
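The hierarchical quad tree plus binary tree partitioning described in paragraphs [0108]-[0121] can be summarized with the following sketch. The 'decide' callback stands in for the signalled split information, and the constraint that no quad split may follow a binary split reflects paragraph [0118]; both are assumptions made to keep the example self-contained.

```python
def partition(x, y, w, h, depth, decide, after_binary=False):
    """Yield leaf coding blocks as (x, y, w, h, depth) tuples."""
    split = decide(x, y, w, h, depth)  # 'quad', 'bin_h', 'bin_v' or 'leaf'
    if split == "quad" and not after_binary:   # four square sub-blocks
        hw, hh = w // 2, h // 2
        for dy in (0, hh):
            for dx in (0, hw):
                yield from partition(x + dx, y + dy, hw, hh,
                                     depth + 1, decide)
    elif split == "bin_h":             # two 2NxN (horizontal) halves
        yield from partition(x, y, w, h // 2, depth + 1, decide, True)
        yield from partition(x, y + h // 2, w, h // 2, depth + 1, decide, True)
    elif split == "bin_v":             # two Nx2N (vertical) halves
        yield from partition(x, y, w // 2, h, depth + 1, decide, True)
        yield from partition(x + w // 2, y, w // 2, h, depth + 1, decide, True)
    else:                              # leaf coding block
        yield (x, y, w, h, depth)
```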
[0122] Figure 5 is a diagram illustrating an example in which only a specific type of binary tree-based partitioning is allowed. Figure 5A shows an example in which only the Nx2N type of binary tree-based partitioning is allowed, and Figure 5B shows an example in which only the 2NxN type of binary tree-based partitioning is allowed. To implement adaptive partitioning based on the quad tree or the binary tree, use may be made of information indicating quad tree-based partitioning, information about the size/depth of the coding block for which quad tree-based partitioning is allowed, information indicating binary tree-based partitioning, information about the size/depth of the coding block for which binary tree-based partitioning is allowed, information about the size/depth of the coding block for which binary tree-based partitioning is not allowed, information on whether binary tree-based partitioning is performed in the vertical direction or the horizontal direction, etc. [0123] In addition, information can be obtained on the number of times binary tree partitioning is allowed, the depth to which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed, for a coding tree unit or a specific coding unit. The information may be encoded in units of a coding tree unit or a coding unit, and may be transmitted to a decoder through a bit stream. [0124] For example, a syntax element 'max_binary_depth_idx_minus1' indicating the maximum depth to which binary tree partitioning is allowed can be encoded/decoded through a bit stream. In this case, max_binary_depth_idx_minus1 + 1 can indicate the maximum depth to which binary tree partitioning is allowed. [0125] With reference to the example shown in Figure 6, in Figure 6 binary tree partitioning has been performed for a coding unit having a depth of 2 and a coding unit having a depth of 3. Accordingly, at least one of information indicating the number of times binary tree partitioning has been performed in the coding tree unit (i.e., 2 times), information indicating the maximum depth to which binary tree partitioning is allowed in the coding tree unit (i.e., depth 3), or the number of depths at which binary tree partitioning has been performed in the coding tree unit (i.e., 2 (depth 2 and depth 3)) can be encoded/decoded through a bit stream. [0126] As another example, at least one of information on the number of times binary tree partitioning is allowed, the depth to which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed can be obtained for each sequence or each slice. For example, the information may be encoded in units of a sequence, an image or a slice and transmitted through a bit stream. Consequently, at least one of the number of binary tree partitions in a first slice, the maximum depth to which binary tree partitioning is allowed in the first slice, or the number of depths at which binary tree partitioning is performed in the first slice may differ from that of a second slice. For example, in the first slice, binary tree partitioning can be allowed for only one depth, whereas, in the second slice, binary tree partitioning can be allowed for two depths. 
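The '_minus1' convention used by the 'max_binary_depth_idx_minus1' syntax element in paragraph [0124] works as in the following sketch (the helper function and its use as a split-allowance check are assumptions for illustration):

```python
def allow_binary_split(depth, max_binary_depth_idx_minus1):
    # The signalled value is one less than the maximum depth at which
    # binary tree partitioning is allowed.
    max_binary_depth = max_binary_depth_idx_minus1 + 1
    return depth <= max_binary_depth
```

Signalling such a value per sequence, image or slice is what allows, as in paragraph [0126], a first slice to permit binary splitting at one depth while a second slice permits it at two.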
[0127] As another example, the number of times binary tree partitioning is allowed, the depth to which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed can be configured differently according to a temporal level identifier (TemporalID) of a slice or an image. Here, the temporal level identifier (TemporalID) is used to identify each of a plurality of video layers having scalability in at least one of view, spatial, temporal or quality. [0128] As shown in Figure 3, the first coding block 300 with a partition depth (split depth) of k can be divided into multiple second coding blocks based on the quad tree. For example, the second coding blocks 310 to 340 can be square blocks having half the width and half the height of the first coding block, and the partition depth of the second coding blocks can be increased to k+1. [0129] The second coding block 310 with the partition depth of k+1 can be divided into multiple third coding blocks with the partition depth of k+2. The partitioning of the second coding block 310 can be performed by selectively using either the quad tree or the binary tree, depending on the partitioning method. Here, the partitioning method can be determined based on at least one of information indicating quad tree-based partitioning and information indicating binary tree-based partitioning. [0130] When the second coding block 310 is divided based on the quad tree, the second coding block 310 can be divided into four third coding blocks 310a having half the width and half the height of the second coding block, and the partition depth of the third coding blocks 310a can be increased to k+2. In contrast, when the second coding block 310 is divided based on the binary tree, the second coding block 310 can be divided into two third coding blocks. Here, each of the two third coding blocks can be a non-square block having half the width or half the height of the second coding block, and the partition depth can be increased to k+2. The second coding block can be determined as a non-square block of the horizontal or vertical direction depending on the partitioning direction, and the partitioning direction can be determined based on the information on whether binary tree-based partitioning is performed in the vertical direction or the horizontal direction. [0131] Meanwhile, the second coding block 310 can be determined as a leaf coding block that is no longer partitioned based on the quad tree or the binary tree. In this case, the leaf coding block can be used as a prediction block or a transformation block. [0132] Like the partitioning of the second coding block 310, the third coding block 310a can be determined as a leaf coding block, or can be further divided based on the quad tree or the binary tree. [0133] Meanwhile, the third coding block 310b partitioned based on the binary tree can be further divided into coding blocks 310b-2 of the vertical direction or coding blocks 310b-3 of the horizontal direction based on the binary tree, and the partition depth of the relevant coding blocks can be increased to k+3. Alternatively, the third coding block 310b can be determined as a leaf coding block 310b-1 that is no longer partitioned based on the binary tree. In this case, the coding block 310b-1 can be used as a prediction block or a transformation block. 
However, the above partitioning process can be performed limitedly based on at least one of information about the size/depth of the coding block for which quad tree-based partitioning is allowed, information about the size/depth of the coding block for which binary tree-based partitioning is allowed, and information about the size/depth of the coding block for which binary tree-based partitioning is not allowed. [0134] The number of candidates representing the size of a coding block may be limited to a predetermined number, or the size of a coding block in a predetermined unit may have a fixed value. As an example, the size of the coding block in a sequence or in an image can be limited to 256x256, 128x128 or 32x32. Information indicating the size of the coding block in the sequence or in the image can be signaled through a sequence header or an image header. [0135] As a result of partitioning based on a quad tree and a binary tree, a coding unit can take a square or rectangular shape of arbitrary size. [0136] A coding block is encoded using at least one of a skip mode, intra-prediction, inter-prediction or a skip method. Once a coding block is determined, a prediction block can be determined through predictive partitioning of the coding block. Predictive partitioning of the coding block can be performed by means of a partition mode (Part_mode) indicating the partition type of the coding block. The size or shape of the prediction block can be determined according to the partition mode of the coding block. For example, the size of a prediction block determined according to the partition mode may be equal to or smaller than the size of the coding block. [0137] Figure 7 is a diagram illustrating the partition modes that can be applied to a coding block when the coding block is encoded by inter-prediction. [0138] When a coding block is encoded by inter-prediction, one of 8 partition modes can be applied to the coding block, as in the example shown in Figure 4. [0139] When a coding block is encoded by intra-prediction, the partition mode PART_2Nx2N or the partition mode PART_NxN can be applied to the coding block. [0140] PART_NxN can be applied when the coding block has the minimum size. Here, the minimum size of the coding block can be predefined in the encoder and the decoder. Or, information on the minimum size of the coding block can be signaled through a bit stream. For example, the minimum size of the coding block can be signaled through a slice header, so that the minimum size of the coding block can be defined per slice. [0141] In general, a prediction block can have a size from 64x64 down to 4x4. However, when a coding block is encoded by inter-prediction, the prediction block may be restricted from having a size of 4x4, in order to reduce the memory bandwidth when performing motion compensation. [0142] Figure 8 is a diagram illustrating types of predefined intra-prediction modes for a device for encoding/decoding a video according to an embodiment of the present invention. [0143] The device for encoding/decoding a video can perform intra-prediction using one of the predefined intra-prediction modes. The predefined intra-prediction modes for intra-prediction can include non-directional prediction modes (e.g., a planar mode, a DC mode) and 33 directional prediction modes. [0144] Alternatively, to improve the accuracy of the prediction, a larger number of directional prediction modes than the 33 directional prediction modes can be used. 
That is, M extended directional prediction modes can be defined by subdividing the angles of the directional prediction modes (M > 33), and a directional prediction mode having a predetermined angle can be derived using at least one of the 33 predefined directional prediction modes. [0145] A larger number of intra-prediction modes than the 35 intra-prediction modes shown in Figure 8 can be used. For example, a larger number of intra-prediction modes than the 35 intra-prediction modes can be used by subdividing the angles of the directional prediction modes, or by deriving a directional prediction mode having a predetermined angle using at least one of a predefined number of directional prediction modes. In this case, the use of a larger number of intra-prediction modes than the 35 intra-prediction modes can be referred to as the extended intra-prediction mode. [0146] Figure 9 shows an example of the extended intra-prediction modes, and the extended intra-prediction modes can include two non-directional prediction modes and 65 extended directional prediction modes. The same number of extended intra-prediction modes can be used for the luminance component and the chrominance component, or a different number of intra-prediction modes can be used for each component. For example, 67 extended intra-prediction modes can be used for the luminance component, and 35 intra-prediction modes can be used for the chrominance component. [0147] Alternatively, depending on the chrominance format, a different number of intra-prediction modes may be used to perform intra-prediction. For example, in the case of the 4:2:0 format, 67 intra-prediction modes can be used for the luminance component to perform intra-prediction and 35 intra-prediction modes can be used for the chrominance component. In the case of the 4:4:4 format, 67 intra-prediction modes can be used for both the luminance component and the chrominance component to perform intra-prediction. [0148] Alternatively, depending on the size and/or shape of the block, a different number of intra-prediction modes may be used to perform intra-prediction. That is, depending on the size and/or shape of the PU or CU, 35 intra-prediction modes or 67 intra-prediction modes can be used to perform intra-prediction. For example, when the CU or PU has a size smaller than 64x64 or is divided asymmetrically, 35 intra-prediction modes can be used to perform intra-prediction. When the size of the CU or PU is equal to or greater than 64x64, 67 intra-prediction modes can be used to perform intra-prediction. 65 directional intra-prediction modes can be allowed for Intra_2Nx2N, and only 35 directional intra-prediction modes can be allowed for Intra_NxN. [0149] The size of a block to which the extended intra-prediction mode is applied can be configured differently for each sequence, image or slice. For example, it may be set that the extended intra-prediction mode is applied to a block (e.g., a CU or PU) having a size greater than 64x64 in a first slice, while it is set that the extended intra-prediction mode is applied to a block having a size greater than 32x32 in a second slice. Information representing the size of a block to which the extended intra-prediction mode is applied can be signaled in units of a sequence, an image or a slice. 
For example, the information indicating the size of the block to which the extended intra-prediction mode is applied may be defined as 'log2_extended_intra_mode_size_minus4', obtained by taking the base-2 logarithm of the block size and then subtracting the integer 4. For example, a value of log2_extended_intra_mode_size_minus4 equal to 0 may indicate that the extended intra-prediction mode can be applied to blocks having a size equal to or larger than 16x16, and a value equal to 1 may indicate that the extended intra-prediction mode can be applied to blocks having a size equal to or larger than 32x32. [0150] As described above, the number of intra-prediction modes may be determined by considering at least one of a color component, a chrominance format, and the size or shape of a block. In addition, the number of intra-prediction mode candidates (e.g., the number of MPMs) used to determine the intra-prediction mode of a current block to be encoded/decoded may also be determined according to at least one of a color component, a color format, and the size or shape of a block. Hereinafter, with reference to the drawings, a method for determining the intra-prediction mode of a current block to be encoded/decoded and a method for performing intra-prediction using the determined intra-prediction mode will be described. [0151] Figure 10 is a flowchart briefly illustrating an intra-prediction method according to an embodiment of the present invention. [0152] Referring to Figure 10, an intra-prediction mode of the current block may be determined in step S1000. [0153] Specifically, the intra-prediction mode of the current block may be derived based on a candidate list and an index. Here, the candidate list contains multiple candidates, and the multiple candidates may be determined based on the intra-prediction modes of neighboring blocks adjacent to the current block. A neighboring block may include at least one of the blocks located at the top, the bottom, the left, the right and the corners of the current block. The index may specify one of the multiple candidates in the candidate list. The candidate specified by the index may be set as the intra-prediction mode of the current block. [0154] An intra-prediction mode used for intra-prediction in a neighboring block may be set as a candidate. In addition, an intra-prediction mode having a directionality similar to that of the intra-prediction mode of the neighboring block may be set as a candidate. Here, the intra-prediction mode having similar directionality may be determined by adding a predetermined constant value to, or subtracting it from, the intra-prediction mode of the neighboring block. The predetermined constant value may be an integer such as one, two or more. The candidate list may also include a default mode. The default mode may include at least one of a planar mode, a DC mode, a vertical mode and a horizontal mode. The default mode may be added adaptively, taking into account the maximum number of candidates that can be included in the candidate list of the current block. [0155] The maximum number of candidates that can be included in the candidate list may be three, four, five, six or more. The maximum number of candidates that can be included in the candidate list may be a value preset in the device for encoding/decoding a video, or may be determined variably depending on a characteristic of the current block.
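A minimal sketch of the candidate-list construction of paragraphs [0153] to [0155], assuming the 35-mode numbering of Figure 8 (planar 0, DC 1, horizontal 10, vertical 26); the ±1 offset, the wrap rule for the directional modes, the insertion order and the default-mode order are illustrative assumptions.

```python
def build_mpm_list(neighbor_modes, max_candidates=3, delta=1):
    """Build the intra-prediction mode candidate (MPM) list for a block."""
    PLANAR, DC, VERTICAL, HORIZONTAL = 0, 1, 26, 10
    candidates = []

    def push(mode):
        if mode not in candidates and len(candidates) < max_candidates:
            candidates.append(mode)

    for m in neighbor_modes:              # modes of adjacent blocks first
        push(m)
    for m in neighbor_modes:              # then modes of similar directionality
        if m > DC:                        # only for the 33 directional modes
            push(2 + (m - 2 - delta) % 33)   # wrap within modes 2..34
            push(2 + (m - 2 + delta) % 33)
    for m in (PLANAR, DC, VERTICAL, HORIZONTAL):
        push(m)                           # default modes fill the remainder
    return candidates
```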
The characteristic may mean the location/size/shape of the block, the number/type of intra-prediction modes the block can use, a color type, a color format, etc. Alternatively, information indicating the maximum number of candidates that can be included in the candidate list may be signaled separately, and the maximum number of candidates may be determined variably using that information. The information indicating the maximum number of candidates may be signaled at at least one of a sequence level, an image level, a segment level and a block level. [0156] When the extended intra-prediction modes and the 35 predefined intra-prediction modes are used selectively, the intra-prediction modes of the neighboring blocks may be transformed into indices corresponding to the extended intra-prediction modes, or into indices corresponding to the 35 intra-prediction modes, and candidates may be derived therefrom (see the mapping sketch below). For the transformation to an index, a predefined table may be used, or a scaling operation based on a predetermined value may be used. Here, the predefined table may define a mapping relationship between different groups of intra-prediction modes (for example, the extended intra-prediction modes and the 35 intra-prediction modes). [0157] For example, when the left neighboring block uses the 35 intra-prediction modes and the intra-prediction mode of the left neighboring block is 10 (a horizontal mode), it may be transformed into an index of 16 corresponding to the horizontal mode among the extended intra-prediction modes. [0158] Alternatively, when the upper neighboring block uses the extended intra-prediction modes and the intra-prediction mode of the upper neighboring block has an index of 50 (a vertical mode), it may be transformed into an index of 26 corresponding to the vertical mode among the 35 intra-prediction modes. [0159] Based on the method described above for determining the intra-prediction mode, the intra-prediction mode may be derived independently for the luminance component and the chrominance component, or the intra-prediction mode of the chrominance component may be derived depending on the intra-prediction mode of the luminance component. [0160] Specifically, the intra-prediction mode of the chrominance component may be determined as a function of the intra-prediction mode of the luminance component, as shown in the following Table 1. [0161] Table 1 [0166] In Table 1, intra_chroma_pred_mode means information signaled to specify the intra-prediction mode of the chrominance component, and IntraPredModeY indicates the intra-prediction mode of the luminance component. [0167] With reference to Figure 10, a reference sample for the intra-prediction of the current block may be derived in step S1010. [0168] Specifically, a reference sample for intra-prediction may be derived based on a neighboring sample of the current block. The neighboring sample may be a reconstructed sample of a neighboring block, and the reconstructed sample may be a sample reconstructed before an in-loop filter is applied, or a sample reconstructed after the in-loop filter is applied. [0169] A neighboring sample reconstructed before the current block may be used as a reference sample, and a neighboring sample filtered on the basis of a predetermined intra filter may also be used as a reference sample. Filtering neighboring samples using an intra filter may also be referred to as smoothing of reference samples.
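The index transformation of paragraphs [0156] to [0158] can be sketched with a lookup table. Only the two mappings quoted above (35-mode 10 ↔ extended 16, extended 50 ↔ 35-mode 26) plus the non-directional identity entries are filled in here; a real codec would use a complete predefined table or a scaling operation, and the fallback behaviour below is an assumption.

```python
TO_EXTENDED = {0: 0, 1: 1, 10: 16}   # e.g. 35-mode horizontal 10 -> extended 16
TO_BASIC    = {0: 0, 1: 1, 50: 26}   # e.g. extended vertical 50 -> 35-mode 26

def map_neighbor_mode(mode: int, neighbor_extended: bool, current_extended: bool) -> int:
    """Transform a neighboring block's intra mode into the index space of the
    current block before inserting it into the candidate list."""
    if neighbor_extended == current_extended:
        return mode                   # same mode group, nothing to transform
    table = TO_BASIC if neighbor_extended else TO_EXTENDED
    return table.get(mode, mode)      # in practice: fall back to a scale operation
```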
The intra filter may include at least one of a first intra filter applied to multiple neighboring samples located on the same horizontal line and a second intra filter applied to multiple neighboring samples located on the same vertical line. Depending on the positions of the neighboring samples, one of the first intra filter and the second intra filter may be selectively applied, or both intra filters may be applied. At this time, at least one filter coefficient of the first intra filter or the second intra filter may be (1, 2, 1), but is not limited thereto. [0170] The filtering may be performed adaptively as a function of at least one of the intra-prediction mode of the current block and the size of the transform block for the current block. For example, when the intra-prediction mode of the current block is the DC mode, the vertical mode or the horizontal mode, the filtering may not be performed. When the size of the transform block is NxM, the filtering may not be performed. Here, N and M may be the same value or different values, and may be 4, 8, 16 or more. For example, if the size of the transform block is 4x4, the filtering may not be performed. Alternatively, the filtering may be performed selectively based on the result of comparing a predefined threshold with the difference between the intra-prediction mode of the current block and the vertical mode (or the horizontal mode). For example, when the difference between the intra-prediction mode of the current block and the vertical mode is larger than the threshold, the filtering may be performed. The threshold may be defined for each transform block size, as shown in Table 2 (a possible decision rule is sketched below). [0171] [0172] [0173] [0174] [0175] The intra filter may be determined as one of multiple intra filter candidates predefined in the device for encoding/decoding a video. For this purpose, an index specifying the intra filter of the current block among the multiple intra filter candidates may be signaled. Alternatively, the intra filter may be determined as a function of at least one of the size/shape of the current block, the size/shape of the transform block, information on the filter strength, and variations of the neighboring samples. [0176] Referring to Figure 10, the intra-prediction may be performed using the intra-prediction mode of the current block and the reference sample in step S1020. [0177] That is, the prediction sample of the current block may be obtained using the intra-prediction mode determined in step S1000 and the reference sample derived in step S1010. However, in the case of intra-prediction, a boundary sample of a neighboring block is used and, therefore, the quality of the prediction image may decrease. Therefore, a correction process may be performed on the prediction sample generated through the prediction process described above, and it will be described in detail with reference to Figures 11 to 13. However, the correction process is not limited to being applied only to intra-prediction samples, and may be applied to inter-prediction samples or to reconstructed samples. [0178] Figure 11 is a diagram illustrating a method of correcting a prediction sample of a current block based on differential information of neighboring samples according to an embodiment of the present invention. [0179] The prediction sample of the current block may be corrected based on differential information of multiple neighboring samples of the current block.
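Returning to the adaptive reference-sample filtering of paragraph [0170]: one possible decision rule is sketched below, again assuming the 35-mode numbering of Figure 8 (DC 1, horizontal 10, vertical 26). The per-size thresholds of Table 2 are passed in by the caller, since the table itself is not reproduced in this text.

```python
def use_reference_smoothing(intra_mode: int, transform_size: int,
                            thresholds: dict) -> bool:
    """thresholds maps a transform block size to its Table 2 threshold."""
    DC, HORIZONTAL, VERTICAL = 1, 10, 26
    if intra_mode in (DC, HORIZONTAL, VERTICAL):
        return False                      # these modes skip the filtering
    if transform_size == 4:
        return False                      # e.g. 4x4 transform blocks
    diff = min(abs(intra_mode - HORIZONTAL), abs(intra_mode - VERTICAL))
    return diff > thresholds.get(transform_size, 0)
```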
The correction may be performed on all the prediction samples in the current block, or only on prediction samples in predetermined partial regions. The partial regions may be one row/column or multiple rows/columns, and these may be regions preset for correction in the device for encoding/decoding a video. For example, the correction may be performed on a row/column located at a boundary of the current block, or on a plurality of rows/columns starting from a boundary of the current block. Alternatively, the partial regions may be determined variably depending on at least one of the size/shape of the current block and the intra-prediction mode. [0180] The neighboring samples may belong to the neighboring blocks located at the top, at the left and at the top-left corner of the current block. The number of neighboring samples used for the correction may be two, three, four or more. The positions of the neighboring samples may be determined variably depending on the position of the prediction sample that is the correction target in the current block. Alternatively, some of the neighboring samples may have fixed positions regardless of the position of the prediction sample that is the correction target, and the remaining neighboring samples may have positions that vary depending on the position of the prediction sample that is the correction target. [0181] The differential information of the neighboring samples may mean a differential sample between neighboring samples, or may mean a value obtained by scaling the differential sample by a predetermined constant value (for example, one, two, three, etc.). Here, the predetermined constant value may be determined by considering the position of the prediction sample that is the correction target, the position of the column or row that includes the prediction sample that is the correction target, the position of the prediction sample within the column or row, etc. [0182] For example, when the intra-prediction mode of the current block is the vertical mode, the differential samples between the top-left neighboring sample p(-1, -1) and the neighboring samples p(-1, y) adjacent to the left boundary of the current block may be used to obtain the final prediction sample, as shown in Equation 1. [0183] [Equation 1] [0184] P'(0, y) = P(0, y) + ((p(-1, y) - p(-1, -1)) >> 1) for y = 0 ... N-1 For example, when the intra-prediction mode of the current block is the horizontal mode, the differential samples between the top-left neighboring sample p(-1, -1) and the neighboring samples p(x, -1) adjacent to the upper boundary of the current block may be used to obtain the final prediction sample, as shown in Equation 2. [0185] [Equation 2] [0186] P'(x, 0) = P(x, 0) + ((p(x, -1) - p(-1, -1)) >> 1) for x = 0 ... N-1 For example, when the intra-prediction mode of the current block is the vertical mode, the differential samples between the top-left neighboring sample p(-1, -1) and the neighboring samples p(-1, y) adjacent to the left boundary of the current block may be used to obtain the final prediction sample. Here, the differential sample may be added to the prediction sample, or the differential sample may first be scaled by a predetermined constant value and then added to the prediction sample. The constant value used for the scaling may be determined differently depending on the column and/or the row. For example, the prediction sample may be corrected as shown in Equation 3 and Equation 4. [0187] [Equation 3] [0188] P'(0, y) = P(0, y) + ((p(-1, y) - p(-1, -1)) >> 1) for y = 0 ... N-1
[0189] [Equation 4] [0190] P'(1, y) = P(1, y) + ((p(-1, y) - p(-1, -1)) >> 2) for y = 0 ... N-1 For example, when the intra-prediction mode of the current block is the horizontal mode, the differential samples between the top-left neighboring sample p(-1, -1) and the neighboring samples p(x, -1) adjacent to the upper boundary of the current block may be used to obtain the final prediction sample, as described for the vertical-mode case. For example, the prediction sample may be corrected as shown in Equation 5 and Equation 6. [0191] [Equation 5] [0192] P'(x, 0) = P(x, 0) + ((p(x, -1) - p(-1, -1)) >> 1) for x = 0 ... N-1 [0193] [Equation 6] [0194] P'(x, 1) = P(x, 1) + ((p(x, -1) - p(-1, -1)) >> 2) for x = 0 ... N-1 [0195] Figures 12 and 13 are diagrams illustrating a method for correcting a prediction sample based on a predetermined correction filter according to an embodiment of the present invention. [0196] The prediction sample may be corrected based on a neighboring sample of the prediction sample that is the correction target and a predetermined correction filter. Here, the neighboring sample may be specified by an angular line of the directional prediction mode of the current block, or may be at least one sample placed on the same angular line as the prediction sample that is the correction target. In addition, the neighboring sample may be a prediction sample in the current block, or may be a reconstructed sample in a neighboring block reconstructed before the current block. [0197] At least one of the number of taps, the strength and the filter coefficients of the correction filter may be determined based on at least one of: the position of the prediction sample that is the correction target; whether or not the prediction sample that is the correction target is placed at the boundary of the current block; the intra-prediction mode of the current block; the angle of the directional prediction mode; the prediction mode (inter or intra) of the neighboring block; and the size/shape of the current block. [0198] Referring to Figure 12, when the directional prediction mode has an index of 2 or 34, at least one predicted/reconstructed sample placed at the lower left of the prediction sample that is the correction target and the predetermined correction filter may be used to obtain the final prediction sample. Here, the predicted/reconstructed sample at the lower left may belong to a previous line of the line that includes the prediction sample that is the correction target. The predicted/reconstructed sample at the lower left may belong to the same block as the current sample, or to a neighboring block adjacent to the current block. [0199] The filtering of the prediction sample may be performed only on the line positioned at the block boundary, or on several lines. A correction filter in which at least one of the number of filter taps and a filter coefficient differs for each line may be used. For example, a (1/2, 1/2) filter may be used for the first left line closest to the block boundary, a (12/16, 4/16) filter for the second line, a (14/16, 2/16) filter for the third line, and a (15/16, 1/16) filter for the fourth line. [0200] Alternatively, when the directional prediction mode has an index of 3 to 6 or 30 to 33, the filtering may be performed at the block boundary as shown in Figure 13, and a 3-tap correction filter may be used to correct the prediction sample.
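The boundary corrections of Equations 1 to 6 can be written compactly: the first corrected column (or row) uses a one-bit right shift of the differential, the second a two-bit shift. A minimal sketch, assuming NxN blocks stored as pred[y][x] with integer samples:

```python
def correct_vertical(pred, p_left, p_corner, num_cols=1):
    """Equations 1, 3 and 4: in vertical mode, column x is corrected with
    (p(-1, y) - p(-1, -1)) >> (x + 1); p_left[y] = p(-1, y), p_corner = p(-1, -1)."""
    N = len(pred)
    for x in range(num_cols):
        for y in range(N):
            pred[y][x] += (p_left[y] - p_corner) >> (x + 1)
    return pred

def correct_horizontal(pred, p_top, p_corner, num_rows=1):
    """Equations 2, 5 and 6: in horizontal mode, row y is corrected with
    (p(x, -1) - p(-1, -1)) >> (y + 1); p_top[x] = p(x, -1)."""
    N = len(pred[0])
    for y in range(num_rows):
        for x in range(N):
            pred[y][x] += (p_top[x] - p_corner) >> (y + 1)
    return pred
```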
The filtering may be performed using a 3-tap correction filter that receives as inputs the prediction sample that is the correction target, the lower-left sample of that prediction sample, and the sample below the lower-left sample. The position of the neighboring sample used by the correction filter may be determined differently according to the directional prediction mode. The filter coefficients of the correction filter may be determined differently depending on the directional prediction mode. [0201] Different correction filters may be applied depending on whether the neighboring block is encoded in inter mode or in intra mode. When the neighboring block is encoded in intra mode, a filtering method that gives more weight to the prediction sample may be used, compared with the case where the neighboring block is encoded in inter mode. For example, in the case where the intra-prediction mode is 34, when the neighboring block is encoded in inter mode a (1/2, 1/2) filter may be used, and when the neighboring block is encoded in intra mode a (4/16, 12/16) filter may be used. [0202] The number of lines to be filtered in the current block may vary depending on the size/shape of the current block (for example, the coding block or the prediction block). For example, when the size of the current block is equal to or smaller than 32x32, the filtering may be performed on a single line at the block boundary; otherwise, the filtering may be performed on several lines, including the line at the block boundary. [0203] Figures 12 and 13 are based on the case where the 35 intra-prediction modes of Figure 8 are used, but may be applied in the same or a similar manner to the case where the extended intra-prediction modes are used. [0204] Figure 14 shows a range of reference samples for intra-prediction according to an embodiment to which the present invention is applied. [0205] The intra-prediction of a current block may be performed using a reference sample derived from a reconstructed sample included in a neighboring block. Here, a reconstructed sample means a sample whose encoding/decoding has been completed before the current block is encoded/decoded. For example, the intra-prediction for the current block may be performed based on at least one of the reference samples P(-1, -1), P(-1, y) (0 <= y <= 2N-1) and P(x, -1) (0 <= x <= 2N-1). At this time, the filtering of the reference samples is performed selectively as a function of at least one of the intra-prediction mode of the current block (e.g., the index, directionality, angle, etc., of the intra-prediction mode) or the size of a transform block related to the current block. [0206] The filtering of the reference samples may be performed using an intra filter predefined in an encoder and a decoder. For example, an intra filter with filter coefficients (1, 2, 1) or an intra filter with filter coefficients (2, 3, 6, 3, 2) may be used to obtain the final reference samples to be used for prediction. [0207] Alternatively, at least one of a plurality of intra filter candidates may be selected to perform the filtering of the reference samples. Here, the plurality of intra filter candidates may differ from each other in at least one of the filter strength, the filter coefficients or the number of taps (e.g., the number of filter coefficients, the filter length). A plurality of intra filter candidates may be defined at at least one of a sequence, an image, a segment or a block level.
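The line-dependent and neighbor-dependent filters of paragraphs [0199] and [0201] amount to a table of weight pairs. In the sketch below the pairs are written as (weight of the prediction sample, weight of the neighboring sample) in 1/16 units; which element of the quoted pairs applies to which sample, and the fallback for lines beyond the fourth, are assumptions.

```python
PER_LINE = {0: (8, 8), 1: (12, 4), 2: (14, 2), 3: (15, 1)}

def correction_weights(line_idx: int, neighbor_is_intra: bool):
    if neighbor_is_intra and line_idx == 0:
        return (12, 4)                    # more weight on the prediction sample
    return PER_LINE.get(line_idx, (16, 0))   # (16, 0): no correction applied

def filter_sample(pred: int, neighbor: int, weights) -> int:
    w_pred, w_nbr = weights
    return (w_pred * pred + w_nbr * neighbor + 8) >> 4   # rounding to 1/16 units
```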
That is, a sequence, an image, a segment or a block in which the current block is included may use the same plurality of intra filter candidates. [0208] Hereinafter, for convenience of explanation, it is assumed that the plurality of intra filter candidates includes a first intra filter and a second intra filter. It is also assumed that the first intra filter is a 3-tap filter (1, 2, 1) and the second intra filter is a 5-tap filter (2, 3, 6, 3, 2). [0209] When the reference samples are filtered by applying the first intra filter, the filtered reference samples may be derived as shown in Equation 7. [0210] [Equation 7] [0211]
P(-1, -1) = (P(-1, 0) + 2P(-1, -1) + P(0, -1) + 2) >> 2
P(-1, y) = (P(-1, y+1) + 2P(-1, y) + P(-1, y-1) + 2) >> 2
P(x, -1) = (P(x+1, -1) + 2P(x, -1) + P(x-1, -1) + 2) >> 2
[0212] When the reference samples are filtered by applying the second intra filter, the filtered reference samples may be derived as shown in the following Equation 8. [0213] [Equation 8] [0214]
P(-1, -1) = (2P(-2, 0) + 3P(-1, 0) + 6P(-1, -1) + 3P(0, -1) + 2P(0, -2) + 8) >> 4
P(-1, y) = (2P(-1, y+2) + 3P(-1, y+1) + 6P(-1, y) + 3P(-1, y-1) + 2P(-1, y-2) + 8) >> 4
P(x, -1) = (2P(x+2, -1) + 3P(x+1, -1) + 6P(x, -1) + 3P(x-1, -1) + 2P(x-2, -1) + 8) >> 4
In Equations 7 and 8 above, x may be an integer between 0 and 2N-2, and y may be an integer between 0 and 2N-2. [0215] Alternatively, one of the plurality of intra filter candidates may be determined depending on the position of a reference sample, and the filtering of that reference sample may be performed using the determined one. For example, the first intra filter may be applied to reference samples included in a first range, and the second intra filter may be applied to reference samples included in a second range. Here, the first range and the second range may be distinguished depending on whether they are adjacent to a boundary of the current block, whether they are located on the upper side or on the left side of the current block, or whether they are adjacent to a corner of the current block. For example, as shown in Figure 15, the filtering of the reference samples (P(-1, -1), P(-1, 0), P(-1, 1), ..., P(-1, N-1) and P(0, -1), P(1, -1), ...) that are adjacent to a boundary of the current block is performed by applying the first intra filter as shown in Equation 7, and the filtering of the other reference samples that are not adjacent to a boundary of the current block is performed by applying the second intra filter as shown in Equation 8. It is also possible to select one of the plurality of intra filter candidates according to the transform type used for the current block, and to perform the filtering of the reference samples using the selected one. Here, the transform type may mean (1) a transform scheme such as DCT, DST or KLT, (2) a transform mode indicator such as a 2D transform, a 1D transform or no transform, or (3) the number of transforms, such as a first transform and a second transform. Hereinafter, for convenience of description, it is assumed that the transform type means the transform scheme, such as DCT, DST and KLT. [0216] For example, if the current block is encoded using DCT, the filtering may be performed using the first intra filter, and if the current block is encoded using DST, the filtering may be performed using the second intra filter.
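A direct transcription of Equations 7 and 8, assuming the reference samples are held in a dict keyed by (x, y) with x, y ∈ {-1, ..., 2N-1}, plus the (-2, 0) and (0, -2) positions that the 5-tap corner rule reads. How the end positions that fall outside the reference range are handled is not specified in the text, so the 5-tap loops below simply leave them unfiltered.

```python
def filter_3tap(p, N):
    """Equation 7: the (1, 2, 1) first intra filter, rounding with (+2) >> 2."""
    q = dict(p)
    q[(-1, -1)] = (p[(-1, 0)] + 2 * p[(-1, -1)] + p[(0, -1)] + 2) >> 2
    for y in range(0, 2 * N - 1):                 # left reference column
        q[(-1, y)] = (p[(-1, y + 1)] + 2 * p[(-1, y)] + p[(-1, y - 1)] + 2) >> 2
    for x in range(0, 2 * N - 1):                 # top reference row
        q[(x, -1)] = (p[(x + 1, -1)] + 2 * p[(x, -1)] + p[(x - 1, -1)] + 2) >> 2
    return q

def filter_5tap(p, N):
    """Equation 8: the (2, 3, 6, 3, 2) second intra filter, rounding with (+8) >> 4."""
    q = dict(p)
    q[(-1, -1)] = (2 * p[(-2, 0)] + 3 * p[(-1, 0)] + 6 * p[(-1, -1)]
                   + 3 * p[(0, -1)] + 2 * p[(0, -2)] + 8) >> 4
    for y in range(1, 2 * N - 3):                 # y-2 .. y+2 stay in range
        q[(-1, y)] = (2 * p[(-1, y + 2)] + 3 * p[(-1, y + 1)] + 6 * p[(-1, y)]
                      + 3 * p[(-1, y - 1)] + 2 * p[(-1, y - 2)] + 8) >> 4
    for x in range(1, 2 * N - 3):
        q[(x, -1)] = (2 * p[(x + 2, -1)] + 3 * p[(x + 1, -1)] + 6 * p[(x, -1)]
                      + 3 * p[(x - 1, -1)] + 2 * p[(x - 2, -1)] + 8) >> 4
    return q
```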
Or, if the current block is encoded using DCT or DST, the filtering may be performed using the first intra filter, and if the current block is encoded using KLT, the filtering may be performed using the second intra filter. [0217] The filtering may also be performed using a filter selected based on both the transform type of the current block and the position of a reference sample. For example, if the current block is encoded using DCT, the filtering of the reference samples P(-1, -1), P(-1, 0), P(-1, 1), ..., P(-1, N-1) and P(0, -1), P(1, -1), ..., P(N-1, -1) may be performed using the first intra filter, and the filtering of the other reference samples may be performed using the second intra filter. If the current block is encoded using DST, the filtering of the reference samples P(-1, -1), P(-1, 0), P(-1, 1), ..., P(-1, N-1) and P(0, -1), P(1, -1), ..., P(N-1, -1) may be performed using the second intra filter, and the filtering of the other reference samples may be performed using the first intra filter. [0218] One of the plurality of intra filter candidates may be selected depending on whether the transform type of a neighboring block that includes a reference sample is the same as the transform type of the current block, and the filtering may be performed using the selected intra filter candidate. For example, when the current block and a neighboring block use the same transform type, the filtering is performed using the first intra filter, and when the transform types of the current block and of the neighboring block differ from each other, the second intra filter may be used to perform the filtering. [0219] It is also possible to select one of the plurality of intra filter candidates depending on the transform type of a neighboring block, and to perform the filtering of a reference sample using the selected one. That is, a specific filter may be selected taking into account the transform type of the block in which the reference sample is included. For example, as shown in Figure 16, if a block adjacent to the left/lower-left of the current block is a block encoded using DCT, and a block adjacent to the top/top-right of the current block is a block encoded using DST, the filtering of the reference samples adjacent to the left/lower-left of the current block is performed by applying the first intra filter, and the filtering of the reference samples adjacent to the top/top-right of the current block is performed by applying the second intra filter. [0220] In units of a predetermined region, a filter usable in the corresponding region may be defined. Here, the unit of the predetermined region may be any of a sequence, an image, a segment, a group of blocks (for example, a row of coding tree units) or a block (for example, a coding tree unit). Or, another region sharing one or more filters may be defined. A reference sample may be filtered using a filter assigned to the region in which the current block is included. [0221] For example, as shown in Figure 17, it is possible to perform the filtering of reference samples using different filters in CTU units. In this case, information indicating whether the same filter is used in a sequence or in an image, the filter type used for each CTU, and an index specifying the filter used in the corresponding CTU among the available intra filter candidates may be signaled through a sequence parameter set (SPS) or a picture parameter set (PPS). [0222] The intra filter described above may be applied in units of a coding unit.
For example, the filtering may be performed by applying the first intra filter or the second intra filter to reference samples around a coding unit. [0223] When a directional prediction mode or the DC mode is used, a deterioration in image quality may occur at a block boundary. On the other hand, the planar mode has the advantage that the deterioration of the image quality at the block boundary is relatively small compared with the previous prediction modes. [0224] Planar prediction may be performed by generating a first prediction image in the horizontal direction and a second prediction image in the vertical direction using reference samples, and then performing a weighted prediction of the first prediction image and the second prediction image. [0225] Here, the first prediction image may be generated based on reference samples that are adjacent to the current block and placed in the horizontal direction of a prediction sample. For example, the first prediction image may be generated based on a weighted sum of reference samples located in the horizontal direction of the prediction sample, and the weight applied to each of the reference samples may be determined as a function of the distance from the prediction target sample or of the size of the current block. The samples positioned in the horizontal direction may include a left reference sample located on the left side of the prediction target sample and a right reference sample located on the right side of the prediction target sample. At this time, the right reference sample may be derived from a top reference sample of the current block. For example, the right reference sample may be derived by copying the value of one of the top reference samples, or may be obtained as a weighted sum or an average value of top reference samples. Here, the top reference sample may be a reference sample located on the same vertical line as the right reference sample, and may be a reference sample adjacent to the top-right corner of the current block. Alternatively, the position of the top reference sample may be determined differently depending on the position of the prediction target sample. [0226] The second prediction image may be generated based on reference samples that are adjacent to the current block and located in the vertical direction of a prediction sample. For example, the second prediction image may be generated based on a weighted sum of reference samples located in the vertical direction of the prediction sample, and the weight applied to each of the reference samples may be determined as a function of the distance from the prediction target sample or of the size of the current block. The samples located in the vertical direction may include a top reference sample located on the upper side of the prediction target sample and a bottom reference sample located on the lower side of the prediction target sample. At this time, the bottom reference sample may be derived from a left reference sample of the current block. For example, the bottom reference sample may be derived by copying the value of one of the left reference samples, or may be derived as a weighted sum or an average value of left reference samples. Here, the left reference sample may be a reference sample located on the same horizontal line as the bottom reference sample, and may be a reference sample adjacent to the bottom-left corner of the current block.
Alternatively, the position of the left reference sample may be determined differently depending on the position of the prediction target sample. [0227] As another example, it is also possible to derive the right reference sample and the bottom reference sample using a plurality of reference samples. [0228] For example, the right reference sample or the bottom reference sample may be derived using a top reference sample and a left reference sample of the current block. For example, at least one of the right reference sample or the bottom reference sample may be determined as a weighted sum or an average of the top reference sample and the left reference sample of the current block. [0229] Alternatively, the weighted sum or the average of the top reference sample and the left reference sample of the current block may be calculated first, and then the right reference sample may be derived as a weighted sum or an average value of the calculated value and the top reference sample. If the right reference sample is derived as the weighted sum of the calculated value and the top reference sample, the weighting may be determined by considering the size of the current block, the shape of the current block, the position of the right reference sample, or the distance between the right reference sample and the top reference sample. [0230] In addition, after calculating the weighted sum or the average of the top reference sample and the left reference sample of the current block, the bottom reference sample may be derived as a weighted sum or an average value of the calculated value and the left reference sample. If the bottom reference sample is derived through the weighted sum of the calculated value and the left reference sample, the weighting may be determined by considering the size of the current block, the shape of the current block, the position of the bottom reference sample, or the distance between the bottom reference sample and the left reference sample. [0231] The positions of the multiple reference samples used to derive the right reference sample or the bottom reference sample may be fixed, or may vary depending on the position of the prediction target sample. For example, the top reference sample may have a fixed position, such as a reference sample adjacent to the top-right corner of the current block and located on the same vertical line as the right reference sample, and the left reference sample may have a fixed position, such as a reference sample adjacent to the bottom-left corner of the current block and located on the same horizontal line as the bottom reference sample. Alternatively, when the right reference sample is obtained, a top reference sample having a fixed location may be used, such as the reference sample adjacent to the top-right corner of the current block, while the left reference sample used may be the reference sample located on the same horizontal line as the prediction target sample. When the bottom reference sample is obtained, a left reference sample having a fixed location may be used, such as the reference sample adjacent to the bottom-left corner of the current block, while the top reference sample used may be the reference sample located on the same vertical line as the prediction target sample. [0232] Figure 18 is a diagram showing an example of deriving a right reference sample or a bottom reference sample using a plurality of reference samples.
It will be assumed that the current block is a block having a size of WxH. [0233] Referring to (a) of Figure 18, first, a bottom-right reference sample P(W, H) may be generated based on a weighted sum or an average value of a top reference sample P(W, -1) and a left reference sample P(-1, H) of the current block. Then, a right reference sample P(W, y) may be generated for a prediction target sample (x, y) based on the bottom-right reference sample P(W, H) and the top reference sample P(W, -1). For example, the right reference sample P(W, y) may be calculated as a weighted sum or an average value of the bottom-right reference sample P(W, H) and the top reference sample P(W, -1). In addition, a bottom reference sample P(x, H) may be generated for the prediction target sample (x, y) based on the bottom-right reference sample P(W, H) and the left reference sample P(-1, H). For example, the bottom reference sample P(x, H) may be calculated as a weighted sum or an average value of the bottom-right reference sample P(W, H) and the left reference sample P(-1, H). [0234] As shown in (b) of Figure 18, once the right reference sample and the bottom reference sample have been generated, a first prediction sample Ph(x, y) and a second prediction sample Pv(x, y) may be generated for the prediction target sample based on the generated reference samples. At this time, the first prediction sample Ph(x, y) may be generated based on a weighted sum of the left reference sample P(-1, y) and the right reference sample P(W, y), and the second prediction sample may be generated based on a weighted sum of the top reference sample P(x, -1) and the bottom reference sample P(x, H). [0235] The positions of the reference samples used to generate the first prediction image and the second prediction image may vary according to the size or shape of the current block. In other words, the positions of the top reference sample or of the left reference sample used to derive the right reference sample or the bottom reference sample may vary according to the size or shape of the current block. [0236] For example, if the current block is a square block of size NxN, the right reference sample may be derived from P(N, -1) and the bottom reference sample may be derived from P(-1, N). Alternatively, the right reference sample and the bottom reference sample may be derived based on at least one of a weighted sum, an average value, a minimum value or a maximum value of P(N, -1) and P(-1, N). On the other hand, if the current block is a non-square block, the positions of the reference samples used to derive the right reference sample and the bottom reference sample may be determined differently depending on the shape of the current block. [0237] Figures 19 and 20 are diagrams for explaining the determination of the right reference sample and the bottom reference sample for a non-square block, according to an embodiment of the present invention. [0238] As in the example shown in Figure 19, when the current block is a non-square block of size (N/2)xN, the right reference sample is derived based on the top reference sample P(N/2, -1), and the bottom reference sample is derived based on the left reference sample P(-1, N). [0239] Alternatively, the right reference sample or the bottom reference sample may be derived based on at least one of a weighted sum, an average value, a minimum value or a maximum value of the top reference sample P(N/2, -1) and the left reference sample P(-1, N).
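The Figure 18 derivation, together with the weighted combination of Equation 9 described further below, can be sketched end to end. Plain averages stand in for the unspecified weighted sums when deriving P(W, H), P(W, y) and P(x, H); the distance-based weights inside Ph and Pv and the quarter-unit final weights follow paragraphs [0225], [0226] and [0258]. All names and the exact rounding are illustrative assumptions.

```python
def planar_prediction(top, left, W, H):
    """top[x] = P(x, -1) for x in 0..W; left[y] = P(-1, y) for y in 0..H."""
    br = (top[W] + left[H] + 1) >> 1                      # P(W, H), Fig. 18(a)
    right  = [(br + top[W] + 1) >> 1 for _ in range(H)]   # P(W, y)
    bottom = [(br + left[H] + 1) >> 1 for _ in range(W)]  # P(x, H)
    w4 = 2 if W == H else (1 if H > W else 3)             # Eq. 9 weight, 1/4 units
    pred = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # First (horizontal) and second (vertical) prediction samples, Fig. 18(b),
            # weighted by the distance to the left/right and top/bottom references.
            ph = ((W - 1 - x) * left[y] + (x + 1) * right[y] + W // 2) // W
            pv = ((H - 1 - y) * top[x] + (y + 1) * bottom[x] + H // 2) // H
            pred[y][x] = (w4 * ph + (4 - w4) * pv + 2) >> 2
    return pred
```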
For example, the right reference sample may be derived as a weighted sum or an average of P(N/2, -1) and P(-1, N), or may be derived as a weighted sum or an average of the value thus calculated and the top reference sample. Alternatively, the bottom reference sample may be derived as a weighted sum or an average of P(N/2, -1) and P(-1, N), or may be derived as a weighted sum or an average of the value thus calculated and the left reference sample. [0240] On the other hand, as in the example shown in Figure 20, if the current block is a non-square block of size Nx(N/2), the right reference sample may be derived based on the top reference sample P(N, -1), and the bottom reference sample may be derived based on the left reference sample P(-1, N/2). [0241] Alternatively, it is also possible to derive the right reference sample or the bottom reference sample based on at least one of a weighted sum, an average value, a minimum value or a maximum value of the top reference sample P(N, -1) and the left reference sample P(-1, N/2). For example, the right reference sample may be derived as a weighted sum or an average of P(N, -1) and P(-1, N/2), or may be derived as a weighted sum or an average of the value thus calculated and the top reference sample. Alternatively, the bottom reference sample may be derived as a weighted sum or an average of P(N, -1) and P(-1, N/2), or may be derived as a weighted sum or an average of the value thus calculated and the left reference sample. [0242] Namely, the bottom reference sample may be derived based on at least one of the bottom-left reference sample of the current block located on the same horizontal line as the bottom reference sample or the top-right reference sample of the current block located on the same vertical line as the right reference sample, and the right reference sample may be derived based on at least one of the top-right reference sample of the current block located on the same vertical line as the right reference sample or the bottom-left reference sample of the current block located on the same horizontal line as the bottom reference sample. [0243] The first prediction image may be calculated based on a weighted prediction of reference samples located on the same horizontal line as the prediction target sample. In addition, the second prediction image may be calculated based on a weighted prediction of reference samples located on the same vertical line as the prediction target sample. [0244] Alternatively, it is also possible to generate the first prediction image or the second prediction image based on an average value, a minimum value or a maximum value of reference samples. [0245] The method of deriving a reference sample, or the method of deriving the first prediction image or the second prediction image, may be configured differently depending on whether the prediction target sample is included in a predetermined area of the current block, or on the size or shape of the current block. Specifically, depending on the position of the prediction target sample, the number or the positions of the reference samples used to derive the right or the bottom reference sample may be determined differently; also, depending on the position of the prediction target sample, the weights or the number of reference samples used to derive the first prediction image or the second prediction image may be configured differently.
[0246] For example, the right reference sample used to derive the first prediction image of the prediction target samples included in the predetermined region may be derived using only the top reference sample, while the right reference sample used to derive the first prediction image of the prediction target samples outside the predetermined region may be derived based on a weighted sum or an average of a top reference sample and a left reference sample. [0247] For example, as in the example shown in Figure 19, when the current block is a non-square block whose height is longer than its width, the right reference sample of the prediction target sample located at (x, y) and included in the predetermined region of the current block may be derived from P(N/2, -1). On the other hand, the right reference sample of the prediction target sample located at (x', y') and outside the predetermined region of the current block may be derived based on a weighted sum or an average value of P(N/2, -1) and P(-1, N). [0248] Alternatively, as in the example shown in Figure 20, when the current block is a non-square block whose width is greater than its height, the bottom reference sample of the prediction target sample located at (x, y) and included in the predetermined region of the current block may be derived based on P(-1, N/2). On the other hand, the bottom reference sample of the prediction target sample located at (x', y') and outside the predetermined region of the current block may be derived based on a weighted sum or an average value of P(N, -1) and P(-1, N/2). [0249] For example, the first prediction image or the second prediction image for the prediction target samples included in the predetermined region may be generated based on a weighted sum of reference samples. On the other hand, the first prediction image or the second prediction image for the prediction target samples outside the predetermined region may be generated from an average value, a minimum value or a maximum value of reference samples, or may be generated using only one reference sample located at a predetermined position. For example, as shown in the example of Figure 19, if the current block is a non-square block whose height is longer than its width, the first prediction image for the prediction target sample located at (x, y) and included in the predetermined region may be generated using only one of the right reference sample P(N/2, y) derived from P(N/2, -1) or the left reference sample located at P(-1, y). On the other hand, the first prediction image for the prediction target sample located at (x', y') and outside the predetermined region may be generated based on a weighted sum or an average of the right reference sample P(N/2, y') derived from P(N/2, -1) and a reference sample located at P(-1, y'). [0250] Alternatively, as in the example shown in Figure 20, if the current block is a non-square block whose width is greater than its height, the second prediction image for the prediction target sample located at (x, y) and included in the predetermined region of the current block may be generated using only one of the bottom reference sample P(x, N/2) derived from P(-1, N/2) or the top reference sample located at P(x, -1).
On the other hand, the second prediction image for the prediction target sample located at (x', y') and not included in the predetermined region may be generated based on a weighted sum or an average of the bottom reference sample P(x', N/2) derived from P(-1, N/2) and a reference sample located at P(x', -1). [0251] In the embodiment described above, the predetermined region, or the region outside the predetermined region, may be the remaining region excluding the samples located at a boundary of the current block. The boundary of the current block may include at least one of the left boundary, the right boundary, the upper boundary or the lower boundary. In addition, the number or the position of the boundaries included in the predetermined region, or in the region outside it, may be set differently according to the shape of the current block. [0252] In the planar mode, the final prediction image may be derived as a function of a weighted sum, an average value, a minimum value or a maximum value of the first prediction image and the second prediction image. [0253] For example, the following Equation 9 shows an example of the generation of the final prediction image P based on a weighted sum of the first prediction image Ph and the second prediction image Pv. [0254] [Equation 9] [0255] P(x, y) = w × Ph(x, y) + (1 - w) × Pv(x, y) [0256] In Equation 9, the prediction weight w may differ according to the shape or the size of the current block, or the position of the prediction target sample. [0257] For example, the prediction weight w may be derived by considering the width of the current block, the height of the current block, or the ratio between width and height. If the current block is a non-square block whose width is greater than its height, w may be set so that more weight is given to the first prediction image. On the other hand, if the current block is a non-square block whose height is greater than its width, w may be set so that more weight is given to the second prediction image. [0258] For example, when the current block has a square shape, the prediction weight w may have a value of 1/2. On the other hand, if the current block is a non-square block whose height is greater than its width (for example, (N/2)xN), the prediction weight w may be set to 1/4, and if the current block is a non-square block whose width is greater than its height (for example, Nx(N/2)), the prediction weight w may be set to 3/4. [0259] Figure 21 is a flowchart illustrating processes for obtaining a residual sample according to an embodiment to which the present invention is applied. [0260] First, a residual coefficient of a current block may be obtained in step S2110. A decoder may obtain the residual coefficient through a coefficient scanning procedure. For example, the decoder may perform a diagonal scan, a zig-zag scan, an up-right scan, a vertical scan or a horizontal scan, and may thereby obtain residual coefficients in the form of a two-dimensional block. [0261] An inverse quantization may be performed on the residual coefficient of the current block in step S2120. [0262] It may be determined whether the inverse transform is to be skipped on the dequantized residual coefficient of the current block in step S2130. Specifically, the decoder may determine whether the inverse transform is to be skipped in at least one of the horizontal or the vertical direction of the current block.
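As a side note to the coefficient scanning of paragraph [0260], the sketch below rebuilds the two-dimensional block from a 1-D coefficient list using one possible up-right diagonal order; the codec's actual scan tables are not specified here, so this is illustrative only.

```python
def up_right_diagonal_positions(W, H):
    """Yield (x, y) positions of a WxH block in up-right diagonal scan order."""
    for d in range(W + H - 1):                      # one diagonal at a time
        for y in range(min(d, H - 1), max(-1, d - W), -1):
            yield (d - y, y)

def coefficients_to_block(coeffs, W, H):
    """Map the scanned 1-D coefficient list back into a 2-D block (step S2110)."""
    block = [[0] * W for _ in range(H)]
    for value, (x, y) in zip(coeffs, up_right_diagonal_positions(W, H)):
        block[y][x] = value
    return block
```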
When it is determined that the inverse transform is applied in at least one of the horizontal or the vertical direction of the current block, a residual sample of the current block may be obtained by inverse-transforming the dequantized residual coefficient of the current block in step S2140. Here, the inverse transform may be performed using at least one of DCT, DST and KLT. [0263] When the inverse transform is skipped in both the horizontal and the vertical direction of the current block, the inverse transform is not performed in either direction. In this case, the residual sample of the current block may be obtained by scaling the dequantized residual coefficient by a predetermined value in step S2150. [0264] Skipping the inverse transform in the horizontal direction means that the inverse transform is not performed in the horizontal direction but is performed in the vertical direction. At this time, the scaling may be performed in the horizontal direction. [0265] Skipping the inverse transform in the vertical direction means that the inverse transform is not performed in the vertical direction but is performed in the horizontal direction. At this time, the scaling may be performed in the vertical direction. [0266] Whether or not the inverse transform skip technique can be used for the current block may be determined depending on the partition type of the current block. For example, if the current block is generated through binary-tree-based partitioning, the inverse transform skip scheme may be restricted for the current block. Therefore, when the current block is generated through binary-tree-based partitioning, the residual sample of the current block may be obtained by inverse-transforming the current block. Furthermore, when the current block is generated through binary-tree-based partitioning, the encoding/decoding of the information indicating whether the inverse transform is skipped (e.g., transform_skip_flag) may be omitted. [0267] Alternatively, when the current block is generated through binary-tree-based partitioning, it is possible to limit the inverse transform skip scheme to at least one of the horizontal or the vertical direction. Here, the direction in which the inverse transform skip scheme is limited may be determined based on information decoded from the bitstream, or may be determined adaptively based on at least one of the size of the current block, the shape of the current block, or the intra-prediction mode of the current block. [0268] For example, when the current block is a non-square block having a width greater than its height, the inverse transform skip scheme may be allowed only in the vertical direction and restricted in the horizontal direction. That is, when the current block is 2NxN, the inverse transform is performed in the horizontal direction of the current block, and the inverse transform may be performed selectively in the vertical direction. [0269] On the other hand, when the current block is a non-square block having a height greater than its width, the inverse transform skip scheme may be allowed only in the horizontal direction and restricted in the vertical direction. That is, when the current block is Nx2N, the inverse transform is performed in the vertical direction of the current block, and the inverse transform may be performed selectively in the horizontal direction.
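Steps S2130 to S2150 can be summarised as follows. Here inv_tf_1d stands in for whichever 1-D inverse transform (DCT, DST or KLT) is selected, and scale for the predetermined scaling value of step S2150; both are placeholders, not a real codec API, and the order of the scaling relative to the remaining transform pass is an assumption.

```python
def reconstruct_residual(coeffs, skip_hor, skip_ver, scale, inv_tf_1d):
    """coeffs: 2-D list of dequantized residual coefficients (rows of samples)."""
    if skip_hor and skip_ver:
        # S2150: no inverse transform at all, only scaling.
        return [[c * scale for c in row] for row in coeffs]
    rows = [inv_tf_1d(row) if not skip_hor else [c * scale for c in row]
            for row in coeffs]                 # horizontal pass (or scaling)
    cols = [list(col) for col in zip(*rows)]   # transpose to reach the columns
    cols = [inv_tf_1d(col) if not skip_ver else [c * scale for c in col]
            for col in cols]                   # vertical pass (or scaling)
    return [list(row) for row in zip(*cols)]   # S2140: back to row order
```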
[0270] In contrast to the previous example, when the current block is a non-square block having a width greater than its height, the inverse transform skip scheme may be allowed only in the horizontal direction, and when the current block is a non-square block having a height greater than its width, the inverse transform skip scheme may be allowed only in the vertical direction. [0271] Information indicating whether or not to skip the inverse transform in the horizontal direction, or information indicating whether or not to skip the inverse transform in the vertical direction, may be signaled through a bitstream. For example, the information indicating whether or not to skip the inverse transform in the horizontal direction is a 1-bit flag, 'hor_transform_skip_flag', and the information indicating whether or not to skip the inverse transform in the vertical direction is a 1-bit flag, 'ver_transform_skip_flag'. The encoder may encode at least one of 'hor_transform_skip_flag' or 'ver_transform_skip_flag' according to the shape of the current block. In addition, the decoder may determine whether or not the inverse transform is skipped in the horizontal direction or in the vertical direction using at least one of 'hor_transform_skip_flag' or 'ver_transform_skip_flag'. [0272] It may be configured that the inverse transform is omitted for either direction of the current block depending on the partition type of the current block. For example, if the current block is generated through binary-tree-based partitioning, the inverse transform in the horizontal or the vertical direction may be omitted. That is, if the current block is generated through binary-tree-based partitioning, it may be determined that the inverse transform for the current block is skipped in at least one of the horizontal or the vertical direction without encoding/decoding information (for example, transform_skip_flag, hor_transform_skip_flag, ver_transform_skip_flag) indicating whether or not the inverse transform of the current block is skipped.
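The signaling rules of paragraphs [0268], [0269] and [0271] suggest the following parsing sketch. read_flag is a placeholder for bitstream parsing, the square-block behaviour is an assumed interpretation, and the flag-free derivation for binary-tree blocks (paragraph [0272]) is one possible rule, not the one mandated by the text.

```python
def parse_transform_skip_flags(width, height, from_binary_tree, read_flag):
    """Return (skip_hor, skip_ver) for the inverse transform of the block."""
    if from_binary_tree:
        # [0272]: skips derived with no explicit flags; assumed rule: skip
        # the transform along the shorter side of the block.
        return (height > width), (width > height)
    if width > height:     # e.g. 2NxN: only the vertical skip may be signaled
        return False, read_flag("ver_transform_skip_flag")
    if height > width:     # e.g. Nx2N: only the horizontal skip may be signaled
        return read_flag("hor_transform_skip_flag"), False
    return read_flag("hor_transform_skip_flag"), read_flag("ver_transform_skip_flag")
```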
[0274] Industrial applicability [0275] The present invention can be applied to electronic devices that can encode / decode a video.
Claims (15) [1] 1. A method for decoding a video, the method comprising: deriving a reference sample for a current block; determining an intra-prediction mode of the current block; and obtaining a prediction sample for the current block using the reference sample and the intra-prediction mode, wherein, if the intra-prediction mode is the planar mode, the prediction sample is generated based on a first prediction sample generated using at least one of the reference samples located on the same horizontal line as a prediction target sample, and a second prediction sample generated using at least one of the reference samples located on the same vertical line as the prediction target sample. [2] 2. The method of claim 1, wherein the reference sample comprises a top reference sample and a left reference sample adjacent to the current block, wherein the first prediction sample is generated using at least one right reference sample derived on the basis of the left reference sample and the top reference sample, and wherein the second prediction sample is generated using at least one bottom reference sample derived on the basis of the top reference sample and the left reference sample. [3] 3. The method of claim 2, wherein a position of the top reference sample used to derive the right reference sample, or a position of the left reference sample used to derive the bottom reference sample, is determined adaptively depending on a size or a shape of the current block. [4] 4. The method of claim 2, wherein the first prediction sample is generated based on a weighted sum of the left reference sample and the right reference sample, and the second prediction sample is generated based on a weighted sum of the top reference sample and the bottom reference sample. [5] 5. The method of claim 2, wherein a number of reference samples used to derive the first prediction sample or the second prediction sample is determined differently depending on a position of the prediction target sample. [6] 6. The method of claim 1, wherein the prediction sample is generated based on a weighted sum of the first prediction sample and the second prediction sample. [7] 7. The method of claim 6, wherein weights for the first prediction sample and the second prediction sample are determined differently according to a shape of the current block. [8] 8. A method for encoding a video, the method comprising: deriving a reference sample for a current block; determining an intra-prediction mode of the current block; and obtaining a prediction sample for the current block using the reference sample and the intra-prediction mode, wherein, if the intra-prediction mode is the planar mode, the prediction sample is generated based on a first prediction sample generated using at least one of the reference samples located on the same horizontal line as a prediction target sample, and a second prediction sample generated using at least one of the reference samples located on the same vertical line as the prediction target sample.
[9] 9. The method of claim 8, wherein the reference sample comprises a top reference sample and a left reference sample adjacent to the current block, wherein the first prediction sample is generated using at least one right reference sample derived on the basis of the left reference sample and the top reference sample, and wherein the second prediction sample is generated using at least one bottom reference sample derived on the basis of the top reference sample and the left reference sample.
[10] 10. The method of claim 9, wherein a position of the top reference sample used to derive the right reference sample, or a position of the left reference sample used to derive the bottom reference sample, is determined adaptively depending on a size or shape of the current block.
[11] 11. The method of claim 9, wherein the first prediction sample is generated based on a weighted sum of the left reference sample and the right reference sample, and the second prediction sample is generated based on a weighted sum of the top reference sample and the bottom reference sample.
[12] 12. The method of claim 9, wherein a number of reference samples used to derive the first prediction sample or the second prediction sample is determined differently depending on a position of the prediction target sample.
[13] 13. The method of claim 8, wherein the prediction sample is generated based on a weighted sum of the first prediction sample and the second prediction sample.
[14] 14. The method of claim 13, wherein weights for the first prediction sample and the second prediction sample are determined differently according to a shape of the current block.
[15] 15. An apparatus for decoding a video, the apparatus comprising:
an intra-prediction unit for deriving a reference sample for a current block, determining an intra-prediction mode of the current block, and obtaining a prediction sample for the current block using the reference sample and the intra-prediction mode,
wherein, if the intra-prediction mode is the planar mode, the prediction sample is generated based on a first prediction sample generated using at least one of the reference samples located in the same horizontal line as a prediction target sample, and a second prediction sample generated using at least one of the reference samples located in the same vertical line as the prediction target sample.
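As a concrete illustration of the planar prediction recited in claims 1 to 7 (and mirrored in claims 8 to 14 and claim 15), the following is a minimal sketch. The choice of the top-right and bottom-left neighbors as the derived right and bottom reference samples, and the distance-based weights, are assumptions for illustration; the claims deliberately leave the exact positions and weights open (see claims 3, 5 and 7).

```python
import numpy as np

def planar_predict(top, left, width, height):
    """Hypothetical planar prediction for one block.

    top  -- reference samples above the block, top[x] for x in [0, width)
    left -- reference samples left of the block, left[y] for y in [0, height)
    """
    # Derived right and bottom reference samples (claims 2 and 9):
    # here taken from the farthest top and left neighbors.
    right = int(top[width - 1])
    bottom = int(left[height - 1])
    pred = np.empty((height, width), dtype=np.int32)
    for y in range(height):
        for x in range(width):
            # First prediction sample: weighted sum along the horizontal
            # line through (x, y), between left[y] and the right sample.
            hor = (width - 1 - x) * int(left[y]) + (x + 1) * right
            # Second prediction sample: weighted sum along the vertical
            # line through (x, y), between top[x] and the bottom sample.
            ver = (height - 1 - y) * int(top[x]) + (y + 1) * bottom
            # Final sample: weighted sum of both prediction samples, with
            # rounding. The weights (height for hor, width for ver) depend
            # on the block shape, one realization of claims 7 and 14.
            pred[y, x] = (hor * height + ver * width
                          + width * height) // (2 * width * height)
    return pred

# Example: a 4x4 block with smoothly varying neighbors.
top = np.array([100, 102, 104, 106])
left = np.array([100, 98, 96, 94])
print(planar_predict(top, left, 4, 4))
```

Note that for a non-square block the two components receive weights proportional to the block's height and width, so interpolation along the shorter dimension dominates; this is one way the shape-dependent weighting of claims 7 and 14 can surface in practice.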
Family patents:
Publication number | Publication date
EP3509307A4 | 2020-04-01
US20190200020A1 | 2019-06-27
EP3509307A1 | 2019-07-10
ES2711223R1 | 2020-07-01
KR20180025285A | 2018-03-08
US20210112251A1 | 2021-04-15
CN109691112A | 2019-04-26
US10841587B2 | 2020-11-17
US20210076041A1 | 2021-03-11
WO2018044089A1 | 2018-03-08
US20210105480A1 | 2021-04-08
Legal status:
2019-04-30 | BA2A | Patent application published | Ref document number: 2711223, Country: ES, Kind code: A2, Effective date: 2019-04-30
2020-07-01 | EC2A | Search report published | Ref document number: 2711223, Country: ES, Kind code: R1, Effective date: 2020-06-24
Priority:
Application number | Filing date | Patent title
KR20160112128 | 2016-08-31 |
PCT/KR2017/009527 (published as WO2018044089A1) | 2017-08-31 | Method and device for processing video signal