Patent Abstract:
Use of a current picture as a reference for video coding. An example method for encoding or decoding video data includes storing, by a video coder and in a reference picture buffer, a version of a current picture of the video data; including the current picture in a reference picture list (RPL) used to predict the current picture; and coding, by the video coder, based on the RPL, a block of video data in the current picture based on a predictor block of video data included in the version of the current picture stored in the reference picture buffer.
Publication number: BR112016023406A2
Application number: R112016023406-5
Filing date: 2015-03-20
Publication date: 2020-12-22
Inventors: Xiang Li; Chao PANG; Ying Chen; Ye-Kui Wang
Applicant: Qualcomm Incorporated
Primary IPC:
Patent Description:

[0001] This application claims the benefit of U.S. Provisional Patent Application No. 61/969,022, filed March 21, 2014, and U.S. Provisional Patent Application No. 62/000,437, filed May 19, 2014, the entire content of each of which is hereby incorporated by reference.
TECHNICAL FIELD
[0002] This disclosure relates to video coding.
BACKGROUND
[0003] Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, so-called "smart phones," video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video coding techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), High Efficiency Video Coding (HEVC), currently under development, and extensions of such standards. Video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video coding techniques.
[0005] Spatial or temporal prediction results in a predictive block for a block to be coded. Residual data represents pixel differences between the original block to be coded and the predictive block. An inter-coded block is encoded according to a motion vector that points to a block of reference samples forming the predictive block, and residual data indicating the difference between the coded block and the predictive block. An intra-coded block is encoded according to an intra-coding mode and the residual data. For further compression, the residual data may be transformed from the pixel domain to a transform domain, resulting in residual transform coefficients, which may then be quantized.
[0006] In general, this disclosure describes techniques for performing intra prediction for video coding. More particularly, this disclosure describes example techniques for using the current picture as a reference picture when coding one or more blocks of the current picture. For example, a current picture can be used as a reference picture when coding one or more blocks of the current picture using Intra Block Copy (Intra BC).
[0007] In one example, a method of encoding or decoding video data includes storing, by a video coder and in a reference picture buffer, a version of a current picture of the video data; inserting an indication of the current picture into a reference picture list (RPL) used for predicting blocks of the current picture; and coding, by the video coder, based on the RPL, a first block of video data in the current picture based on a predictor block of video data included in the version of the current picture stored in the reference picture buffer. In some examples, the predictor block may alternatively be referred to as a prediction block.
[0008] In another example, a device for encoding or decoding video data includes a reference picture buffer configured to store one or more pictures of the video data, and one or more processors. In this example, the one or more processors are configured to: store, in the reference picture buffer, a version of a current picture of the video data; insert an indication of the current picture into a reference picture list (RPL) used for predicting blocks of the current picture; and code, based on the RPL, a first block of video data in the current picture based on a predictor block of video data included in the version of the current picture stored in the reference picture buffer.
[0009] In another example, a device for encoding or decoding video data includes means for storing, in a reference picture buffer, a version of a current picture of the video data; means for inserting an indication of the current picture into a reference picture list (RPL) used for predicting blocks of the current picture; and means for coding, based on the RPL, a first block of video data in the current picture based on a predictor block of video data included in the version of the current picture stored in the reference picture buffer.
[0010] In another example, a computer-readable storage medium stores instructions that, when executed, cause one or more processors of a video coder to: store, in a reference picture buffer, a version of a current picture of the video data; insert an indication of the current picture into a reference picture list (RPL) used for predicting blocks of the current picture; and code, based on the RPL, a first block of video data in the current picture based on a predictor block of video data included in the version of the current picture stored in the reference picture buffer.
[0011] The details of one or more examples of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] FIG. 1 is a block diagram illustrating an example video encoding and decoding system that may utilize the techniques described in this disclosure.
[0013] FIG. 2 is a conceptual diagram illustrating an example sequence of video pictures, in accordance with one or more techniques of this disclosure.
[0014] FIG. 3 is a block diagram illustrating an example of a video encoder that may use the intra block copy techniques described in this disclosure.
[0015] FIG. 4 illustrates a representation of angular intra-prediction modes that may be used in accordance with one or more techniques of this disclosure.
[0016] FIG. 5 is a block diagram illustrating an example of a video decoder 30 that may implement the techniques described in this disclosure.
[0017] FIG. 6 is a conceptual diagram illustrating an example of intra block copy (Intra BC), in accordance with one or more techniques of this disclosure.
[0018] FIG. 7 is a flow diagram illustrating example operations of a video encoder to encode a block of video data of a picture based on a predictor block included in the same picture, in accordance with one or more techniques of this disclosure.
[0019] FIG. 8 is a flow diagram illustrating example operations of a video decoder to decode a block of video data of a picture based on a predictor block included in the same picture, in accordance with one or more techniques of this disclosure.
DETAILED DESCRIPTION
[0020] A video sequence is generally represented as a sequence of pictures. Typically, block-based coding techniques are used to encode each of the individual pictures. That is, each picture is divided into blocks, and each of the blocks is individually coded. Coding a block of video data generally involves forming predicted values for the pixels in the block and coding residual values. The predicted values are formed using pixel samples in one or more predictive blocks. The residual values represent the differences between the pixels of the original block and the predicted pixel values. Specifically, the original block of video data includes a matrix of pixel values, and the predictive block includes a matrix of predicted pixel values. The residual values represent the pixel-by-pixel differences between the pixel values of the original block and the predicted pixel values.
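The residual formation described in the paragraph above can be sketched as follows. This is an illustrative fragment, not the patent's own implementation; the function name, the 2-D-list block model, and the sample values are all invented for the example.

```python
# Illustrative sketch of residual formation: pixel-by-pixel differences
# between an original block and its predictive block.

def residual_block(original, predicted):
    """Return the matrix of pixel-by-pixel differences (residual values)."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]

original = [[100, 102], [98, 101]]    # original pixel values
predicted = [[99, 100], [100, 100]]   # predicted pixel values
print(residual_block(original, predicted))  # [[1, 2], [-2, 1]]
```

A decoder reverses the operation, adding the (inverse-transformed) residual values back onto the predicted pixel values to reconstruct the block.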
[0021] Prediction techniques for a block of video data are generally categorized as intra prediction and inter prediction. Intra prediction, or spatial prediction, does not involve prediction from any reference picture; instead, the block is predicted from pixel values of previously coded, neighboring blocks. Inter prediction, or temporal prediction, generally involves predicting the block from pixel values of one or more previously coded reference pictures (for example, frames or slices) selected from one or more reference picture lists (RPLs). A video coder may include one or more reference picture buffers configured to store the pictures included in the RPLs.
[0022] Many applications, such as remote desktop, remote gaming, wireless displays, and automotive infotainment, are becoming routine in daily life.
[0023] According to some Intra BC techniques, for prediction of a current block (to be coded) of video data, video coders may use previously coded video data that, within the same picture as the current block, is directly in line vertically or directly in line horizontally with the current block. In other words, if a picture of video data is imposed on a 2-D grid, each block of video data occupies a unique range of x-values and y-values. Accordingly, some video coders may predict a current block of video data based only on blocks of previously coded video data that share the same set of x-values (that is, that are vertically in line with the current block) or the same set of y-values (that is, that are horizontally in line with the current block).
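As a rough sketch of the 1-D constraint described above, the following fragment copies a predictor block from previously coded samples of the same picture, requiring the displacement to be either purely vertical or purely horizontal. The function name, the 2-D-list picture model, and the offsets are assumptions for illustration; signaling of the displacement in the bitstream is omitted.

```python
# Sketch of 1-D Intra BC: the predictor block must lie directly above
# (purely vertical offset) or directly to the left (purely horizontal
# offset) of the current block within the same picture.

def intra_bc_predict(picture, x, y, size, bvx, bvy):
    """Copy a size x size predictor block at offset (bvx, bvy)."""
    assert bvx == 0 or bvy == 0, "1-D Intra BC: offset must be axis-aligned"
    rx, ry = x + bvx, y + bvy  # top-left of the predictor block
    return [row[rx:rx + size] for row in picture[ry:ry + size]]

picture = [[4 * r + c for c in range(4)] for r in range(4)]
# Current 2x2 block at (2, 2); predictor two rows directly above it.
print(intra_bc_predict(picture, 2, 2, 2, 0, -2))  # [[2, 3], [6, 7]]
```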
[0024] Other Intra BC techniques are described in Pang et al., "Non-RCE3: Intra Motion Compensation with 2-D MVs," Document: JCTVC-N0256, JCT-VC of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Vienna, AT, 25 July to 2 August 2013 (hereinafter referred to as "JCTVC-N0256"). At the JCT-VC meeting in Vienna (July 2013), Intra BC was adopted.
[0025] In some examples, Intra BC may be an efficient coding tool, especially for screen content coding. For example, in some instances, coding blocks using Intra BC may result in a smaller bitstream than coding the blocks using inter or intra coding. As discussed above, Intra BC is a coding tool like inter coding, except that the pixel values for a picture are predicted from pixel values within the same picture.
[0026] In accordance with one or more techniques of this disclosure, as opposed to predicting a block of a current picture based on samples in the current picture using conventional intra-prediction techniques, a video coder may perform Intra BC to predict a block in a current picture based on samples in the current picture using techniques similar to conventional inter prediction.
[0027] This disclosure describes example techniques related to the use of a current picture as a reference picture when predicting portions of the current picture. To assist with understanding, the example techniques are described with respect to the Range Extensions (RExt) of the High Efficiency Video Coding (HEVC) video coding standard, including the possible support of high bit depths (more than 8 bits) and high chroma sampling formats, including 4:4:4 and 4:2:2. The techniques may also be applicable to screen content coding. It should be understood that the techniques are not limited to the range extensions or to screen content coding, and may be applicable generally to video coding techniques, including standards-based or non-standards-based video coding. In addition, the techniques described in this disclosure may become part of standards developed in the future. In other words, the techniques described in this disclosure may be used with previously developed video coding standards, video coding standards presently under development, and upcoming video coding standards.
[0028] FIG. 1 is a block diagram illustrating an example video encoding and decoding system 10 that may implement the techniques of this disclosure. As shown in FIG. 1, system 10 includes a source device 12 that provides encoded video data to be decoded at a later time by a destination device 14. In particular, source device 12 provides the video data to destination device 14 via a computer-readable medium 16. Source device 12 and destination device 14 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart phones," so-called "smart pads," televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, or the like. In some cases, source device 12 and destination device 14 may be equipped for wireless communication.
[0029] Destination device 14 may receive the encoded video data to be decoded via computer-readable medium 16. Computer-readable medium 16 may comprise any type of medium or device capable of moving the encoded video data from source device 12 to destination device 14. In one example, computer-readable medium 16 may comprise a communication medium to enable source device 12 to transmit encoded video data directly to destination device 14 in real time. The encoded video data may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to destination device 14. The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14.
[0030] In some examples, encoded data may be output from output interface 22 to a storage device. Similarly, encoded data may be accessed from the storage device by an input interface. The storage device may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data. In a further example, the storage device may correspond to a file server or another intermediate storage device that may store the encoded video generated by source device 12. Destination device 14 may access stored video data from the storage device via streaming or download.
[0032] The techniques of this disclosure are not necessarily limited to wireless applications or settings. The techniques may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions, such as dynamic adaptive streaming over HTTP (DASH), digital video that is encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications. In some examples, system 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
[0033] In the example of FIG. 1, source device 12 includes video source 18, video encoder 20, and output interface 22. Destination device 14 includes input interface 28, video decoder 30, and display device 31. In accordance with this disclosure, video encoder 20 of source device 12 may be configured to apply the techniques of this disclosure. In other examples, a source device and a destination device may include other components or arrangements. For example, source device 12 may receive video data from an external video source 18, such as an external camera. Likewise, destination device 14 may interface with an external display device, rather than including an integrated display device.
[0034] System 10 illustrated in FIG. 1 is merely one example. Techniques for improved intra block copy signaling in video coding may be performed by any video encoding and/or decoding device. Although the techniques of this disclosure are generally performed by a video encoding or decoding device, the techniques may also be performed by a video codec. In addition, the techniques of this disclosure may also be performed by a video preprocessor. Source device 12 and destination device 14 are merely examples of such coding devices, in which source device 12 generates coded video data for transmission to destination device 14. In some examples, devices 12, 14 may operate in a substantially symmetrical manner such that each of devices 12, 14 includes video encoding and decoding components.
[0035] Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed interface to receive video from a video content provider. As a further alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In some cases, if video source 18 is a video camera, source device 12 and destination device 14 may form so-called camera phones or video phones. As mentioned above, however, the techniques described in this disclosure may be applicable to video coding in general, and may be applied to wireless and/or wired applications. In each case, the captured, pre-captured, or computer-generated video may be encoded by video encoder 20.
[0036] Computer-readable medium 16 may include transient media, such as a wireless broadcast or wired network transmission, or storage media (that is, non-transitory storage media), such as a hard disk, flash drive, compact disc (CD), digital video disc (DVD), Blu-ray disc, or other computer-readable media. In some examples, a network server (not shown) may receive encoded video data from source device 12 and provide the encoded video data to destination device 14, for example, via network transmission. Similarly, a computing device of a medium production facility, such as a disc stamping facility, may receive encoded video data from source device 12 and produce a disc containing the encoded video data. Therefore, computer-readable medium 16 may be understood to include one or more computer-readable media of various forms, in various examples.
[0037] Input interface 28 of destination device 14 receives information from computer-readable medium 16 or storage device 32. The information of computer-readable medium 16 or storage device 32 may include syntax information defined by video encoder 20, which is also used by video decoder 30, that includes syntax elements describing characteristics and/or processing of blocks and other coded units, for example, GOPs. Display device 31 displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.
[0038] Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof.
[0039] Although not shown in FIG. 1, in some aspects, video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams.
[0040] This disclosure may generally refer to video encoder 20 "signaling" certain information to another device, such as video decoder 30. It should be understood, however, that video encoder 20 may signal information by associating certain syntax elements with various encoded portions of the video data.
[0041] Video encoder 20 and video decoder 30 may operate according to a video coding standard, such as the HEVC standard. Although the techniques of this disclosure are not limited to any particular coding standard, the techniques may be relevant to the HEVC standard, and particularly to extensions of the HEVC standard, such as the RExt extension. The HEVC standardization efforts were based on an evolving model of a video coding device referred to as the HEVC Test Model (HM). The HM presumes several additional capabilities of video coding devices relative to existing devices according to, for example, ITU-T H.264/AVC. For example, whereas H.264 provides nine intra-prediction encoding modes, the HM may provide as many as thirty-three intra-prediction encoding modes.
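For illustration only, the two simplest directional intra-prediction modes (pure vertical and pure horizontal) can be sketched as below; the thirty-three angular modes mentioned above generalize this idea by extrapolating the neighboring reconstructed samples along fractional-sample directions. The helper names and sample values are invented for this sketch.

```python
# Sketch of two directional intra-prediction modes: the predicted block is
# formed purely from previously reconstructed neighboring samples.

def intra_pred_vertical(above, size):
    """Each row of the predicted block repeats the reconstructed row above."""
    return [list(above[:size]) for _ in range(size)]

def intra_pred_horizontal(left, size):
    """Each column of the predicted block repeats the column to the left."""
    return [[left[r]] * size for r in range(size)]

print(intra_pred_vertical([10, 20, 30, 40], 2))  # [[10, 20], [10, 20]]
print(intra_pred_horizontal([5, 7], 2))          # [[5, 5], [7, 7]]
```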
[0042] In general, the working model of the HM describes that a video picture may be divided into a sequence of treeblocks or largest coding units (LCUs) that include both luma and chroma samples. Syntax data within a bitstream may define a size for the LCU, which is a largest coding unit in terms of the number of pixels. A slice includes a number of consecutive coding tree units (CTUs). Each of the CTUs may comprise a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples, and syntax structures used to code the samples of the coding tree blocks. In a monochrome picture or a picture that has three separate color planes, a CTU may comprise a single coding tree block and syntax structures used to code the samples of the coding tree block.
[0043] A video picture may be partitioned into one or more slices. Each treeblock may be split into coding units (CUs) according to a quadtree. In general, a quadtree data structure includes one node per CU, with a root node corresponding to the treeblock. If a CU is split into four sub-CUs, the node corresponding to the CU includes four leaf nodes, each of which corresponds to one of the sub-CUs. A CU may comprise a coding block of luma samples and two corresponding coding blocks of chroma samples of a picture that has a luma sample array, a Cb sample array, and a Cr sample array, and syntax structures used to code the samples of the coding blocks. In a monochrome picture or a picture that has three separate color planes, a CU may comprise a single coding block and syntax structures used to code the samples of the coding block.
[0044] Each node of the quadtree data structure may provide syntax data for the corresponding CU. For example, a node in the quadtree may include a split flag, indicating whether the CU corresponding to the node is split into sub-CUs. Syntax elements for a CU may be defined recursively, and may depend on whether the CU is split into sub-CUs. If a CU is not split further, it is referred to as a leaf-CU. In this disclosure, four sub-CUs of a leaf-CU will also be referred to as leaf-CUs even if there is no explicit splitting of the original leaf-CU. For example, if a CU at 16x16 size is not split further, the four 8x8 sub-CUs will also be referred to as leaf-CUs although the 16x16 CU was never split.
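The recursive split-flag structure described above can be sketched as follows. Parsing of actual split flags from a bitstream is elided here: the split decision is modeled as a caller-supplied predicate, and all names are illustrative rather than taken from the HM.

```python
# Sketch of recursive quadtree CU partitioning: a node either splits into
# four equal sub-CUs or becomes a leaf-CU, down to a minimum CU size.

def collect_leaf_cus(x, y, size, min_size, should_split, leaves):
    """Append (x, y, size) tuples for every leaf-CU of the quadtree."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                collect_leaf_cus(x + dx, y + dy, half, min_size,
                                 should_split, leaves)
    else:
        leaves.append((x, y, size))

leaves = []
# Split only the 16x16 root, yielding four 8x8 leaf-CUs.
collect_leaf_cus(0, 0, 16, 8, lambda x, y, s: s == 16, leaves)
print(leaves)  # [(0, 0, 8), (8, 0, 8), (0, 8, 8), (8, 8, 8)]
```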
[0045] A CU has a similar purpose to a macroblock of the H.264 standard, except that a CU does not have a size distinction. For example, a treeblock may be split into four child nodes (also referred to as sub-CUs), and each child node may in turn be a parent node and be split into another four child nodes. A final, unsplit child node, referred to as a leaf node of the quadtree, comprises a coding node, also referred to as a leaf-CU. Syntax data associated with a coded bitstream may define a maximum number of times a treeblock may be split, referred to as a maximum CU depth, and may also define a minimum size of the coding nodes. Accordingly, a bitstream may also define a smallest coding unit (SCU). This disclosure uses the term "block" to refer to any of a CU, PU, or TU, in the context of HEVC, or similar data structures in the context of other standards (for example, macroblocks and sub-blocks thereof in H.264/AVC).
[0046] A CU includes a coding node and prediction units (PUs) and transform units (TUs) associated with the coding node. A size of the CU corresponds to a size of the coding node, and the CU must be square in shape. The size of the CU may range from 8x8 pixels up to the size of the treeblock, with a maximum of 64x64 pixels or greater. Each CU may contain one or more PUs and one or more TUs.
[0047] In general, a PU represents a spatial area corresponding to all or a portion of the corresponding CU, and may include data for retrieving a reference sample for the PU. Moreover, a PU includes data related to prediction. For example, when the PU is intra-mode encoded, data for the PU may be included in a residual quadtree (RQT), which may include data describing an intra-prediction mode for a TU corresponding to the PU. As another example, when the PU is inter-mode encoded, the PU may include data defining one or more motion vectors for the PU. A prediction block may be a rectangular (that is, square or non-square) block of samples to which the same prediction is applied. A PU of a CU may comprise a prediction block of luma samples, two corresponding prediction blocks of chroma samples of a picture, and syntax structures used to predict the prediction block samples. In a monochrome picture or a picture that has three separate color planes, a PU may comprise a single prediction block and syntax structures used to predict the prediction block samples.
[0048] The TUs may include coefficients in the transform domain following application of a transform, for example, a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform, to residual video data.
[0050] Video encoder 20 may scan the transform coefficients, producing a one-dimensional vector from the two-dimensional matrix including the quantized transform coefficients.
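The 2-D-to-1-D serialization described above can be illustrated with a classic zigzag scan over a 4x4 coefficient matrix. Note that HEVC itself specifies diagonal, horizontal, and vertical scan orders rather than this exact pattern, so the fragment is purely illustrative and its names are invented.

```python
# Illustrative zigzag scan: serialize a 2-D matrix of quantized transform
# coefficients into a 1-D vector, visiting anti-diagonals in alternating
# directions (low-frequency coefficients come first).

def zigzag_scan(block):
    n = len(block)
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return [block[r][c] for r, c in order]

block = [[4 * r + c for c in range(4)] for r in range(4)]
print(zigzag_scan(block))
# [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
```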
[0051] After scanning the quantized transform coefficients to form a one-dimensional vector, video encoder 20 may entropy encode the one-dimensional vector, for example, according to context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding methodology. Video encoder 20 may also entropy encode syntax elements associated with the encoded video data for use by video decoder 30 in decoding the video data.
[0052] Video encoder 20 may further send syntax data, such as block-based syntax data, picture-based syntax data, and group of pictures (GOP)-based syntax data, to video decoder 30, for example, in a picture header, a block header, a slice header, or a GOP header. The GOP syntax data may describe a number of pictures in the respective GOP, and the picture syntax data may indicate the encoding/prediction mode used to encode the corresponding picture.
[0053] Video decoder 30, upon obtaining the encoded video data, may perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 20.
[0054] Video encoder 20 and video decoder 30 may perform intra- and inter-coding of video blocks within video slices. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame or picture. Inter-coding relies on temporal prediction or inter-view prediction to reduce or remove temporal redundancy in video within adjacent pictures of a video sequence, or to reduce or remove redundancy with video in other views. Intra-mode (I mode) may refer to any of several spatially based compression modes. Inter-modes, such as uni-directional prediction (P mode) or bi-prediction (B mode), may refer to any of several temporally based compression modes.
[0055] In some examples, such as when coding screen content, video encoder 20 and/or video decoder 30 may perform Intra BC.
[0056] In accordance with one or more techniques of this disclosure, as opposed to using Intra BC to predict a block of a current picture using conventional intra-prediction techniques, video encoder 20 and/or video decoder 30 may perform Intra BC using techniques similar to conventional inter prediction. For example, video encoder 20 and/or video decoder 30 may insert an indication of the current picture into a reference picture list (RPL) used to predict blocks of the current picture, store a version of the current picture in a reference picture buffer, and code a block of video data in the current picture based on a predictor block of video data included in the version of the current picture stored in the reference picture buffer.
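The RPL manipulation described above can be sketched as follows. The list-of-pictures model and the function name are assumptions for illustration, not the patent's own data structures: the point is simply that the current picture itself appears in the reference list, so a block of the current picture can be predicted from it exactly as it would be from a temporal reference.

```python
# Sketch of RPL construction for Intra BC: temporal reference pictures are
# listed as usual, and, when Intra BC is enabled, the current picture is
# appended so it can serve as its own reference.

def build_rpl(previous_pictures, current_picture, intra_bc_enabled):
    rpl = list(previous_pictures)      # temporal references
    if intra_bc_enabled:
        rpl.append(current_picture)    # current picture becomes a reference
    return rpl

# E.g., for picture 35A of FIG. 2, referencing pictures 34 and 36A:
print(build_rpl(["pic34", "pic36A"], "pic35A", True))
# ['pic34', 'pic36A', 'pic35A']
```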
[0057] FIG. 2 is a conceptual diagram illustrating an example video sequence 33 that includes pictures 34, 35A, 36A, 38A, 35B, 36B, 38B, and 35C, in display order. One or more of these pictures may include P-slices, B-slices, or I-slices. In some cases, video sequence 33 may be referred to as a group of pictures (GOP). Picture 39 is a first picture, in display order, of a sequence occurring after video sequence 33. FIG. 2 generally represents an example prediction structure of a video sequence and is intended only to illustrate the picture references used to encode different types of inter-predicted slices. An actual video sequence may contain more or fewer video pictures, including different slice types, arranged in a different display order.
[0058] For block-based video coding, each of the video pictures included in video sequence 33 may be partitioned into video blocks or coding units (CUs). Each CU of a video picture may include one or more prediction units (PUs). In some examples, the prediction methods available for predicting the PUs may depend on the type of slice in which the PUs are included.
[0059] The video blocks of a P-slice may be encoded using uni-directional predictive coding from a reference picture identified in one reference picture list. The video blocks of a B-slice may be encoded using bi-directional predictive coding from multiple reference pictures identified in multiple reference picture lists.
[0060] In the example of FIG. 2, first picture 34 is designated for intra-mode coding as an I-picture. In other examples, first picture 34 may be coded with inter-mode coding, for example, as a P-picture or a B-picture, with reference to a first picture of a preceding sequence. Video pictures 35A-35C (collectively, "video pictures 35") are designated for coding as B-pictures using bi-prediction with reference to a past picture and a future picture. As shown in the example of FIG. 2, picture 35A may be encoded as a B-picture with reference to first picture 34 and picture 36A, as indicated by the arrows from picture 34 and picture 36A to video picture 35A.
[0061] Video pictures 36A-36B (collectively, "video pictures 36") may be designated for coding as P-pictures or B-pictures using uni-directional prediction with reference to a past picture. As shown in the example of FIG. 2, picture 36A is encoded as a P-picture or B-picture with reference to first picture 34, as indicated by the arrow from picture 34 to video picture 36A. Picture 36B is similarly encoded as a P-picture or B-picture, with reference to picture 38A, as indicated by the arrow from picture 38A to video picture 36B.
[0062] Video pictures 38A-38B (collectively, "video pictures 38") may be designated for coding as P-pictures or B-pictures using uni-directional prediction with reference to the same past picture. As illustrated in the example of FIG. 2, picture 38A is encoded with two references to picture 36A, as indicated by the two arrows from picture 36A to video picture 38A. Picture 38B is similarly encoded with two references to picture 36B.
[0063] [0063] In some examples, each image can be assigned a unique value (that is, unique for a particular video sequence, for example, a sequence of images following an instantaneous decoder refresh (IDR) image in decoding order) that indicates the order in which the images are to be output. This unique value can be referred to as the picture order count (POC). In some examples, the order in which the images are to be output may be different from the order in which the images are encoded or decoded.
[0064] [0064] In accordance with one or more techniques of this invention, a video coder (for example, video encoder 20 or video decoder 30) can perform Intra BC by inserting a current image into a reference picture list (RPL) used to predict blocks in the current image. For example, in the example of FIG. 2, a video coder can insert an indication of image 35A, together with indications of image 34 and image 36A, into the RPLs used to predict blocks in image 35A. The video coder can then use image 35A as a reference image when coding blocks of image 35A.
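The mechanism above can be illustrated with a minimal sketch: the current picture is made available as one of the references in its own RPL, and a block is then predicted by copying an already-reconstructed region of the same picture. This is a toy model, not the normative HEVC process; the function names (`build_rpl`, `intra_bc_predict`) and the 2-D-list picture representation are hypothetical.

```python
# Toy sketch of Intra BC via the RPL (illustrative names; not normative HEVC).

def build_rpl(other_ref_pocs, current_poc):
    """Return an RPL that includes the current picture alongside other references."""
    return list(other_ref_pocs) + [current_poc]

def intra_bc_predict(reconstructed, x, y, bvx, bvy, size):
    """Copy a size x size predictor block at (x+bvx, y+bvy) from the
    partially reconstructed current picture."""
    return [row[x + bvx : x + bvx + size]
            for row in reconstructed[y + bvy : y + bvy + size]]

rpl = build_rpl([34, 36], current_poc=35)   # pictures 34 and 36A, plus 35A itself
assert 35 in rpl                            # the current picture is a valid reference

# 4x4 picture whose top half is already reconstructed; predict a 2x2 block at
# (0, 2) from the block two rows above it (block vector (0, -2)).
recon = [[1, 2, 3, 4], [5, 6, 7, 8], [0, 0, 0, 0], [0, 0, 0, 0]]
pred = intra_bc_predict(recon, x=0, y=2, bvx=0, bvy=-2, size=2)
```

As in the paragraph above, the current picture behaves like any other entry in the list; only the block vector happens to point within the same picture.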
[0065] [0065] FIG. 3 is a block diagram illustrating an example of a video encoder 20 that can use the intra-block copy techniques described in the present invention. Video encoder 20 will be described in the context of HEVC coding for purposes of illustration, but without limitation of the present invention as to other coding standards. In addition, video encoder 20 can be configured to implement the techniques in accordance with the range extensions of HEVC.
[0066] [0066] Video encoder 20 can perform intra- and inter-coding of video blocks within video slices. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video image. Inter-coding relies on temporal prediction or inter-view prediction to reduce or remove temporal redundancy of video within adjacent images of a video sequence.
[0067] [0067] In the example of FIG. 3, video encoder 20 can include video data memory 40, prediction processing unit 42, reference image memory 64, adder 50, transform processing unit 52, quantization processing unit 54, and entropy encoding unit 56. Prediction processing unit 42, in turn, includes motion estimation unit 44, motion compensation unit 46, and intra-prediction unit 48. For video block reconstruction, video encoder 20 also includes inverse quantization processing unit 58, inverse transform processing unit 60, and adder 62. A deblocking filter (not shown in FIG. 3) may also be included to filter block boundaries to remove blocking artifacts from reconstructed video. If desired, the deblocking filter would typically filter the output of adder 62. Additional loop filters (in-loop or post-loop) can also be used in addition to the deblocking filter.
[0068] [0068] Video data memory 40 can store video data to be encoded by the components of video encoder 20. The video data stored in video data memory 40 can be obtained, for example, from video source 18. Reference image memory 64 is an example of a decoded picture buffer (DPB) that stores reference video data for use in encoding video data by video encoder 20 (for example, in intra- or inter-coding modes, also referred to as intra- or inter-prediction modes). Video data memory 40 and reference image memory 64 can be formed by any of a variety of memory devices.
[0069] [0069] During the encoding process, video encoder 20 receives a video image or slice to be encoded. The image or slice can be divided into multiple video blocks. Motion estimation unit 44 and motion compensation unit 46 perform inter-predictive coding of the received video block relative to one or more blocks in one or more reference images to provide temporal compression or to provide inter-view compression. Intra-prediction unit 48 can alternatively perform intra-predictive coding of the received video block relative to one or more neighboring blocks in the same image or slice as the block to be encoded, to provide spatial compression. Video encoder 20 can perform multiple encoding passes (for example, to select an appropriate encoding mode for each block of video data).
[0070] [0070] Moreover, a partitioning unit (not shown) can partition blocks of video data into sub-blocks, based on the evaluation of previous partitioning schemes in previous encoding passes. For example, the partitioning unit can initially partition an image or slice into largest coding units (LCUs), and partition each of the LCUs into sub-CUs based on rate-distortion analysis.
[0071] [0071] Prediction processing unit 42 can select one of the coding modes, intra or inter, for example, based on error results, and provide the resulting intra- or inter-coded block to adder 50 to generate residual block data and to adder 62 to reconstruct the coded block for use as a reference frame. Prediction processing unit 42 also provides syntax elements, such as motion vectors, intra-mode indicators, partitioning information, and other such syntax information, to entropy encoding unit 56.
[0072] [0072] Motion estimation unit 44 and motion compensation unit 46 can be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation, performed by motion estimation unit 44, is the process of generating motion vectors, which estimate motion for the video blocks. A motion vector, for example, may indicate the displacement of a PU of a video block within a current video image relative to a predictive block within a reference image (or other coded unit), relative to the current block being coded within the current image (or other coded unit). A predictive block is a block that is found to closely match the block to be coded, in terms of pixel difference.
[0073] [0073] Motion estimation unit 44 calculates a motion vector for a PU of a video block in an inter-coded slice by comparing the position of the PU with the position of a predictive block of a reference image. The reference image can be selected from one or more reference picture lists (RPLs) that identify one or more reference images stored in reference image memory 64. Motion estimation unit 44 sends the calculated motion vector to entropy encoding unit 56 and to motion compensation unit 46. In some examples, motion estimation unit 44 can send an indication of the selected reference image to entropy encoding unit 56.
[0074] [0074] In some examples, motion estimation unit 44 may generate one or more RPLs. For example, motion estimation unit 44 can generate a first RPL (List 0) that can include images of video data that are before the current image in output order, and a second RPL (List 1) that can include images of video data that are after the current image in output order.
[0075] [0075] In some examples, motion estimation unit 44 may generate one or more RPLs based on one or more reference picture sets (RPSs). An RPS may include: 1) a set of short-term images that are available to predict the current image and are before the current image in output order, referred to as the short-term RPS before, or RefPicSetStCurrBefore; 2) a set of short-term images that are available to predict the current image and are after the current image in output order, referred to as the short-term RPS after, or RefPicSetStCurrAfter; 3) a set of short-term images that are not available to predict the current image but can be used to predict subsequent images in coding order, referred to as the short-term RPS not available, or RefPicSetStFoll; 4) a set of long-term images that are available to predict the current image, referred to as the long-term RPS available, or RefPicSetLtCurr; and/or 5) a set of long-term images that are not available to predict the current image, referred to as the long-term RPS not available, or RefPicSetLtFoll.
[0076] [0076] In some examples, motion estimation unit 44 may include images from one or more RPSs in one or more RPLs in a particular order. For example, to generate an RPL, motion estimation unit 44 may first include images from the short-term RPSs that are available to predict the current image (for example, RefPicSetStCurrBefore and/or RefPicSetStCurrAfter), followed by images from the long-term RPSs that are available to predict the current image (for example, RefPicSetLtCurr). In some examples, each entry in an RPL may have an index value, such that images included earlier in the RPL (for example, images from the short-term RPSs) have lower index values than images included later in the RPL (for example, images from the long-term RPSs).
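The ordering described above can be sketched as a simple concatenation, with short-term entries receiving the lower index values. The set names follow the paragraph; the function name is hypothetical, and the real HEVC list construction involves additional steps (list modification, truncation to the active size) that are omitted here.

```python
# Hedged sketch of RPL construction order: available short-term pictures
# first, then available long-term pictures (illustrative, not normative).

def build_rpl(st_curr_before, st_curr_after, lt_curr):
    rpl = []
    rpl.extend(st_curr_before)   # RefPicSetStCurrBefore
    rpl.extend(st_curr_after)    # RefPicSetStCurrAfter
    rpl.extend(lt_curr)          # RefPicSetLtCurr
    return rpl

# POC values are made up for illustration.
rpl = build_rpl(st_curr_before=[32, 31], st_curr_after=[36], lt_curr=[16])
# index:                          0   1                 2            3
```

Note that the long-term picture (POC 16) lands at the highest index, matching the statement that short-term entries have lower index values.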
[0077] [0077] As discussed above, motion estimation unit 44 can send an indication of the selected reference image to entropy encoding unit 56. In some examples, motion estimation unit 44 can send the indication by sending the index value of the selected reference image within the RPL.
[0078] [0078] In some examples, motion estimation unit 44 may output information to enable a video decoder to predict one or more current RPLs from one or more previous RPLs. For example, motion estimation unit 44 can output one or more syntax elements that allow a video decoder to modify one or more RPLs for a previous slice to generate one or more RPLs for a current slice.
[0079] [0079] In accordance with one or more of the techniques of the present invention, as opposed to being limited to using other images as reference images, video encoder 20 can use the current image as a reference image to predict blocks of video data included in the current image.
[0080] [0080] Prediction processing unit 42 can generate one or more RPLs for the current image. For example, prediction processing unit 42 can include the current image in an RPL for the current image. In some examples, prediction processing unit 42 can include the current image at a particular location within the RPL. As one example, prediction processing unit 42 can insert the current image into the RPL before images in a long-term RPS, that is, with an index value lower than the index values of images from a long-term RPS.
[0081] [0081] As another example, prediction processing unit 42 can insert the current image into the RPL after inserting images from a long-term RPS. For example, prediction processing unit 42 can insert the current image into the RPL with an index value greater than the index values of images from a long-term RPS. In some examples, prediction processing unit 42 can insert the current image into the RPL directly after inserting images from a long-term RPS.
[0082] [0082] As another example, prediction processing unit 42 can insert the current image into the RPL at a fixed position. For example, prediction processing unit 42 can insert the current image into the RPL with a fixed index value. In some examples, the fixed index value can be -1, or num_ref_idx_l1_active_minus1 + 1. In some of these examples, motion estimation unit 44 may not encode an indicator (flag) that indicates that the current block is coded using Intra BC (that is, intra_bc_flag).
[0083] [0083] In some examples, such as where motion vectors are predicted using temporal motion vector prediction (TMVP), motion estimation unit 44 may apply one or more restrictions such that the current image is not used as the collocated image for TMVP.
[0084] [0084] As discussed above, when encoding a block of video data of a current image of video data, motion estimation unit 44 may select a predictive block that most closely matches the current block. In accordance with one or more of the techniques of the present invention, in contrast to (or in addition to) searching blocks of other images, motion estimation unit 44 can select a block located in the current image for use as a predictive block for the current block of the current image. For example, motion estimation unit 44 can search the images included in one or more reference picture lists, including the current image. For each image, motion estimation unit 44 can compute search results that reflect how well a predictive block matches the current block, for example, using pixel-by-pixel sum of absolute differences (SAD), sum of squared differences (SSD), mean absolute difference (MAD), mean squared difference (MSD), or other similar metrics. Motion estimation unit 44 can then identify the block of an image with the best match for the current block, and indicate the position of the block and the image (which may be the current image) to prediction processing unit 42. In this way, motion estimation unit 44 can perform Intra BC, for example, when motion estimation unit 44 determines that a predictive block is included in the current image, that is, the same image as the current block being predicted.
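The block-matching step above can be sketched with the SAD metric over a small set of candidate positions. This is an illustrative sketch (positions, picture contents, and function names are made up); a real encoder would also weigh rate cost and restrict the candidate set as described in the next paragraph.

```python
# Sketch of block matching by pixel-by-pixel sum of absolute differences (SAD).

def sad(block_a, block_b):
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def best_match(picture, current, positions, size):
    """Search `positions` (top-left corners) in `picture` for the candidate
    block that most closely matches `current`; returns ((x, y), sad)."""
    best = None
    for (x, y) in positions:
        cand = [row[x:x + size] for row in picture[y:y + size]]
        score = sad(current, cand)
        if best is None or score < best[1]:
            best = ((x, y), score)
    return best

pic = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [1, 2, 0, 0],
       [3, 4, 0, 0]]
cur = [[9, 9], [9, 9]]
pos, score = best_match(pic, cur, positions=[(0, 0), (2, 0), (0, 2)], size=2)
```

Here the candidate at (2, 0) matches the current block exactly, so its SAD is 0 and it is selected.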
[0085] [0085] In some examples, motion estimation unit 44 can restrict the search for the predictive block within the current image. For example, when the current block is located in a current slice, motion estimation unit 44 can restrict the search for the predictive block to previously coded areas of the current slice (for example, areas above and/or to the left of the current block in the current slice). As such, in some examples, previously coded areas of previous slices of the current image may not be usable as predictive blocks when performing Intra BC. However, as discussed, in some examples, previously coded areas of previous slices of the current image can be used as predictive blocks when performing Intra BC. In some examples, motion estimation unit 44 may restrict the search for the predictive block using restrictions similar to those used for motion search within inter reference images (for example, a search range restriction).
[0086] [0086] In some examples, prediction processing unit 42 can cause entropy encoding unit 56 to encode one or more syntax elements to indicate whether the current image may be present in an RPL used to predict the current image.
[0087] [0087] As another example, prediction processing unit 42 can cause entropy encoding unit 56 to encode multiple syntax elements to indicate whether the current image may be present in an RPL used to predict the current image. For example, prediction processing unit 42 can cause entropy encoding unit 56 to encode a first syntax element that indicates whether images of the video data may be present in RPLs used to predict the respective images of the video data (that is, used to predict themselves).
[0088] [0088] In some examples, prediction processing unit 42 may not cause entropy encoding unit 56 to encode a syntax element that indicates whether a block is coded using intra-block copy (Intra BC). For example, prediction processing unit 42 may not cause entropy encoding unit 56 to encode intra_bc_flag in the coding unit syntax of blocks that are predicted using Intra BC in accordance with the techniques of this invention.
[0089] [0089] In some examples, in addition to enabling blocks of B-slices and P-slices to be coded using Intra BC, prediction processing unit 42 may construct one or more RPLs that include the current image in order to code blocks of an I-slice of the current image. In some examples, the one or more RPLs may include only the current image. In some examples, prediction processing unit 42 can cause entropy encoding unit 56 to encode a syntax element to indicate whether the current image can be used as a reference image for I-slices included in the current image. In some examples, prediction processing unit 42 can cause entropy encoding unit 56 to include the syntax element in a VPS referred to by the current image, an SPS referred to by the current image, a PPS referred to by the current image, or a slice header of a current slice. In some examples, prediction processing unit 42 may also use one or both of AMVP and merge mode. In some examples, prediction processing unit 42 may cause entropy encoding unit 56 not to signal the target reference index for AMVP for the I-slice, such that a decoder derives the target reference index as a fixed value, for example, 0.
[0090] [0090] As discussed above, the images stored by reference image memory 64 can be marked as short-term, marked as long-term, marked in another manner, and/or not marked. In some examples, such as when the current slice is an I-slice and Intra BC is enabled, prediction processing unit 42 can mark the current image as either short-term or long-term. In some examples, such as when the current slice is an I-slice and Intra BC is enabled, prediction processing unit 42 may not mark the current image as either long-term or short-term.
[0091] [0091] In some examples, prediction processing unit 42 can mark the current image as long-term before coding the current image and mark the current image as short-term after coding the current image. In some examples, such as when the current slice is an I-slice, prediction processing unit 42 can generate a merge candidate list with candidates that refer only to an Intra BC reference (marked as long-term) or with other candidates that refer to inter references (marked as short-term). In this way, prediction processing unit 42 can generate a candidate list containing both Intra BC candidates and normal (inter-prediction) candidates. In some examples, prediction processing unit 42 can bi-directionally predict a merge candidate from an Intra BC candidate and an inter candidate.
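The mixed candidate list above can be sketched as follows. The data layout and names are hypothetical (a real merge derivation tracks motion vectors, reference indices, and prediction directions per list); the sketch only shows how Intra BC candidates, whose reference is the long-term-marked current picture, can coexist with short-term inter candidates in one list.

```python
# Illustrative sketch of a merge candidate list mixing Intra BC candidates
# (reference = current picture, marked long-term) and inter candidates
# (short-term references). Not the normative merge derivation.

def make_candidate(mv, ref_poc, marking):
    return {"mv": mv, "ref_poc": ref_poc, "marking": marking}

current_poc = 35
candidates = [
    make_candidate(mv=(-4, 0), ref_poc=current_poc, marking="long-term"),   # Intra BC
    make_candidate(mv=(2, 1), ref_poc=34, marking="short-term"),            # inter
]

# Candidates referring to the current picture are exactly the Intra BC ones.
intra_bc = [c for c in candidates if c["ref_poc"] == current_poc]
```

The marking lets the coder tell the two kinds apart without a dedicated Intra BC flag.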
[0092] [0092] In some examples, such as when temporal motion vector prediction (TMVP) is enabled and Intra BC is enabled by treating the current frame as an additional reference image, prediction processing unit 42 can cause entropy encoding unit 56 to encode a syntax element that indicates whether the target merge reference image is the current image or a first image of an RPL (for example, either RefPicList0[0] or RefPicList1[0]). In some examples, prediction processing unit 42 can identify the reference image by deriving or signaling a target index for a long-term (Intra BC) category of references or for a short-term category of references, and applying a different target merge index based on the category of the reference image of the collocated block.
[0093] [0093] As discussed above, prediction processing unit 42 can determine a motion vector that represents a displacement between the current block of video data and the predictive block of video data, and output the determined motion vector to entropy encoding unit 56 for encoding.
[0094] [0094] In some examples, prediction processing unit 42 can determine the motion vector with different levels of precision. For example, prediction processing unit 42 can determine the motion vector with integer precision, default precision, or a finer motion precision (for example, 1/4-pixel ("pel") precision in HEVC). In some examples, prediction processing unit 42 can cause entropy encoding unit 56 to encode a syntax element that indicates the precision of the coded Intra BC motion vectors, for example, in an SPS or VPS referred to by the current image. In some examples, the precision of the Intra BC motion vectors can be adaptive at the image level, and prediction processing unit 42 can cause entropy encoding unit 56 to encode a syntax element that indicates the precision of the coded Intra BC motion vectors, for example, in a PPS or a slice header referred to by the current block.
[0095] [0095] In some examples, prediction processing unit 42 may perform one or more operations to compensate for the precision level of the Intra BC motion vectors. As one example, before storing blocks in reference image memory 64, prediction processing unit 42 may left-shift the x and y components of the motion vector of each block coded with Intra BC, such as by 2 when the finest precision is 1/4-pel, or apply another means of rounding, such as +/- 2 after the left shift. As another example, when coding a current slice with Intra BC motion vectors having integer precision, prediction processing unit 42 can process a collocated image such that the motion vector of each Intra BC coded block is right-shifted, such as by 2 when the finest precision is 1/4-pel. In some examples, such as when a current slice is coded with Intra BC motion vectors having the finest motion precision, prediction processing unit 42 may not apply the above right-shift process to the motion vectors.
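The shift operations above can be sketched as follows, assuming 1/4-pel is the finest precision so that the shift amount is 2. The function names are hypothetical; the point is only that a left shift converts integer-pel Intra BC vectors into the 1/4-pel storage units, and a right shift converts them back for a slice coded at integer precision.

```python
# Sketch of Intra BC motion vector precision compensation (shift by 2,
# assuming 1/4-pel is the finest precision). Illustrative, not normative.

def to_quarter_pel(mv):
    """Integer-pel vector -> 1/4-pel storage units (left shift by 2)."""
    return (mv[0] << 2, mv[1] << 2)

def to_integer_pel(mv):
    """1/4-pel storage units -> integer-pel vector (right shift by 2)."""
    return (mv[0] >> 2, mv[1] >> 2)

stored = to_quarter_pel((-3, 5))       # stored at 1/4-pel precision
restored = to_integer_pel(stored)      # read back by an integer-precision slice
```

For vectors that are exact multiples of the shift, the round trip is lossless, as shown here.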
[0096] [0096] In some examples, where the current image is marked as a long-term reference image, prediction processing unit 42 can still use normal long-term reference images to predict the current image. To avoid a motion vector referring to a normal long-term reference image and a motion vector referring to the current image predicting each other during merge or AMVP, prediction processing unit 42 can distinguish a normal long-term reference image from the current image during the merge or AMVP process.
[0097] [0097] Motion compensation, performed by motion compensation unit 46, may involve fetching or generating the predictive block based on the motion vector determined by motion estimation unit 44. Once again, motion estimation unit 44 and motion compensation unit 46 can be functionally integrated, in some examples. Upon receiving the motion vector for the PU of the current block, motion compensation unit 46 can locate the predictive block to which the motion vector points in one of the reference picture lists (RPLs). Adder 50 forms a residual video block by subtracting the pixel values of the predictive block from the pixel values of the current block being coded, forming pixel difference values, as discussed below. In general, motion estimation unit 44 performs motion estimation relative to the luminance components, and motion compensation unit 46 uses motion vectors calculated based on the luminance components for both the chrominance components and the luminance components. Prediction processing unit 42 can also generate syntax elements associated with the video blocks and the video slice for use by video decoder 30 in decoding the video blocks of the video slice.
[0098] [0098] Intra-prediction unit 48 can intra-predict a current block, as an alternative to the inter-prediction performed by motion estimation unit 44 and motion compensation unit 46, as described above. In particular, intra-prediction unit 48 can determine an intra-prediction mode to use to encode a current block.
[0099] [0099] For example, intra-prediction unit 48 can calculate rate-distortion values using a rate-distortion analysis for the various tested intra-prediction modes, and select the intra-prediction mode having the best rate-distortion characteristics among the tested modes. Rate-distortion analysis generally determines an amount of distortion (or error) between a coded block and an original, uncoded block that was coded to produce the coded block, as well as a bit rate (that is, a number of bits) used to produce the coded block. Intra-prediction unit 48 can calculate ratios from the distortions and rates for the various coded blocks to determine which intra-prediction mode exhibits the best rate-distortion value for the block.
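One common way to combine distortion and rate into a single selection criterion is a Lagrangian cost J = D + λ·R, sketched below. The numbers and the λ value are made up for illustration; this is one possible realization of the rate-distortion analysis described above, not the method mandated by the text.

```python
# Illustrative mode decision by Lagrangian rate-distortion cost J = D + lam*R.

def select_mode(results, lam):
    """results: {mode: (distortion, rate_in_bits)} -> mode minimizing D + lam*R."""
    return min(results, key=lambda m: results[m][0] + lam * results[m][1])

tested = {
    "planar":  (120.0, 30),   # (distortion, bits) -- made-up measurements
    "dc":      (150.0, 20),
    "angular": (100.0, 45),
}
best = select_mode(tested, lam=2.0)
```

With λ = 2, the costs are 180, 190, and 190, so the planar mode would be selected despite not having the lowest distortion.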
[0100] [0100] In some examples, the intra-prediction modes available for use by intra-prediction unit 48 may include a planar prediction mode, a DC prediction mode, and one or more angular prediction modes. In each case, intra-prediction unit 48 predicts a current block based on reconstructed blocks adjacent to the current block.
[0101] [0101] As one example, when using the DC prediction mode, intra-prediction unit 48 can predict samples of a current block with a constant value. In some examples, the constant value may represent an average of samples in the left neighboring block and samples in the above neighboring block. As another example, when using one or more angular prediction modes, intra-prediction unit 48 can predict samples of a current block based on samples of a neighboring block along a prediction direction. FIG. 4 illustrates examples of angular intra-prediction modes that can be used by intra-prediction unit 48. The arrows illustrated in FIG. 4 represent a prediction direction (that is, extending away from the current block).
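The DC mode described above can be sketched as follows: every sample of the predicted block receives one constant value, here the rounded average of the reconstructed neighboring samples above and to the left. The rounding choice and function name are illustrative.

```python
# Sketch of DC intra prediction: fill the block with the rounded mean of the
# above and left reconstructed neighbor samples (illustrative, not normative).

def dc_predict(above, left, size):
    neighbors = list(above) + list(left)
    dc = (sum(neighbors) + len(neighbors) // 2) // len(neighbors)  # rounded mean
    return [[dc] * size for _ in range(size)]

pred = dc_predict(above=[10, 12, 14, 16], left=[10, 10, 10, 10], size=4)
```

Here the eight neighbors sum to 92, giving a rounded mean of 12, so the whole 4x4 block is predicted as 12.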
[0102] [0102] In some examples, each of the plurality of intra-prediction modes may have a corresponding mode index, which can be signaled (that is, to a video decoder) by intra-prediction unit 48.
[0103] [0103] Video encoder 20 forms a residual video block by subtracting the prediction data from prediction processing unit 42 from the original video block being coded. Adder 50 represents the component or components that perform this subtraction operation.
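The subtraction performed by adder 50 can be sketched element-wise; the representation of blocks as lists of rows is an illustrative simplification.

```python
# Sketch of residual formation: element-wise difference between the original
# block and the prediction block (what adder 50 computes).

def residual(original, prediction):
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, prediction)]

res = residual([[10, 11], [12, 13]], [[9, 9], [12, 14]])
```

The residual is what the transform and quantization stages below operate on.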
[0104] [0104] Transform processing unit 52 applies a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising residual transform coefficient values. Transform processing unit 52 can perform other transforms that are conceptually similar to the DCT. Wavelet transforms, integer transforms, sub-band transforms, or other types of transforms could also be used. In any case, transform processing unit 52 applies the transform to the residual block, producing a block of residual transform coefficients. The transform can convert the residual information from a pixel value domain to a transform domain, such as a frequency domain.
[0105] [0105] Transform processing unit 52 can send the resulting transform coefficients to quantization processing unit 54. Quantization processing unit 54 quantizes the transform coefficients to further reduce the bit rate. The quantization process can reduce the bit depth associated with some or all of the coefficients. The degree of quantization can be modified by adjusting a quantization parameter. In some examples, quantization processing unit 54 can then perform a scan of the matrix including the quantized transform coefficients. Alternatively, entropy encoding unit 56 can perform the scan.
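Scalar quantization controlled by a quantization parameter (QP) can be sketched as follows. The step-size formula (doubling every 6 QP steps, as in HEVC) and the rounding offset are illustrative choices in floating point, not the exact integer arithmetic of a real codec.

```python
# Sketch of QP-controlled scalar quantization (illustrative, not normative).
import math

def quantize(coeffs, qp):
    qstep = 2.0 ** ((qp - 4) / 6.0)   # step size roughly doubles every 6 QP
    return [int(math.copysign(math.floor(abs(c) / qstep + 0.5), c))
            for c in coeffs]

levels = quantize([10.0, -7.0, 0.4], qp=4)   # qp=4 gives qstep == 1.0
```

Raising the QP coarsens the levels: for example, `quantize([10.0], qp=10)` uses a step of 2.0 and yields a level of 5, halving the magnitude that must be entropy coded.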
[0106] [0106] Following quantization, entropy encoding unit 56 entropy encodes the quantized transform coefficients. For example, entropy encoding unit 56 could implement context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding technique. In the case of context-based entropy coding, the context can be based on neighboring blocks. Following the entropy coding by entropy encoding unit 56, the encoded bitstream can be transmitted to another device (for example, video decoder 30) or archived for later retrieval or transmission.
[0107] [0107] Inverse quantization processing unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain, for example, for later use as a reference block.
[0108] [0108] Motion compensation unit 46 can also apply one or more interpolation filters to the reference block to calculate sub-integer pixel values for use in motion estimation. Adder 62 adds the reconstructed residual block to the motion-compensated prediction block produced by motion compensation unit 46 to produce a reconstructed video block for storage in reference image memory 64. The reconstructed video block can be used by motion estimation unit 44 and motion compensation unit 46 as a reference block to inter-code a block in a subsequent video image. In some examples, such as where the current image is used as a reference image to predict the current image, motion compensation unit 46 and/or adder 62 can update the version of the current image stored by reference image memory 64 at regular intervals during the coding of the current image. As one example, motion compensation unit 46 and/or adder 62 can update the version of the current image stored by reference image memory 64 after coding each block of the current image. For example, where the samples of the current block are stored in reference image memory 64 as initialized values, motion compensation unit 46 and/or adder 62 can update the samples of the current image stored by reference image memory 64 with the reconstructed samples for the current block.
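The block-by-block buffer update described above can be sketched as follows: after each block, the reconstruction (prediction plus residual) is written back into the stored copy of the current picture so that later Intra BC blocks can reference it. The representation and function name are illustrative.

```python
# Sketch of updating the stored current picture with reconstructed samples
# (prediction + residual) after coding each block. Illustrative only.

def reconstruct_and_update(ref_copy, x, y, prediction, residual):
    for j, (prow, rrow) in enumerate(zip(prediction, residual)):
        for i, (p, r) in enumerate(zip(prow, rrow)):
            ref_copy[y + j][x + i] = p + r   # reconstructed sample
    return ref_copy

pic = [[0, 0], [0, 0]]                       # samples held as initialized values
reconstruct_and_update(pic, 0, 0, [[5, 6], [7, 8]], [[1, -1], [0, 2]])
```

After the update, the stored copy holds the reconstructed block rather than the initialized values, mirroring the role of adder 62 and reference image memory 64.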
[0109] [0109] A filtering unit (not shown) can perform a variety of filtering processes. For example, the filtering unit can perform deblocking. That is, the filtering unit can receive a plurality of reconstructed video blocks forming a slice or a frame of reconstructed video and can filter block boundaries to remove blocking artifacts from the slice or frame.
[0111] [0111] While a number of different aspects and examples of the techniques are described in this description, the various aspects and examples of the techniques can be performed together or separately from one another. In other words, the techniques should not be limited strictly to the various aspects and examples described above.
[0114] [0114] FIG. 5 is a block diagram illustrating an example of video decoder 30 that can implement the techniques described in the present description. Once again, video decoder 30 will be described in the context of HEVC coding for purposes of illustration, but without limitation of the present description as to other coding standards. In addition, video decoder 30 can be configured to implement the techniques in accordance with the range extensions.
[0114] [0114] In the example of FIG. 5, video decoder 30 can include video data memory 69, entropy decoding unit 70, prediction processing unit 71, inverse quantization processing unit 76, inverse transform processing unit 78, adder 80, and reference image memory 82. Prediction processing unit 71 includes motion compensation unit 72 and intra-prediction unit 74. Video decoder 30 can, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 20 of FIG. 3.
[0115] [0115] Video data memory 69 can store video data, such as an encoded video bitstream, to be decoded by the components of video decoder 30. The video data stored in video data memory 69 can be obtained, for example, from storage device 34, or from a local video source, such as a camera, via wired or wireless network communication of video data.
[0116] [0116] Reference image memory 82 is an example of a decoded picture buffer (DPB) that stores reference video data for use in decoding video data by video decoder 30 (for example, in intra- or inter-coding modes).
[0117] [0117] During the decoding process, video decoder 30 receives an encoded video bitstream that represents video blocks of an encoded video slice and associated syntax elements from video encoder 20. Entropy decoding unit 70 of video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors or intra-prediction mode indicators, and other syntax elements. Entropy decoding unit 70 forwards the motion vectors and other syntax elements to prediction processing unit 71.
[0118] [0118] In some examples, when the video slice is coded as an intra-coded (I) slice, intra-prediction unit 74 can generate prediction data for a video block of the current video slice based on a signaled intra-prediction mode and data from previously decoded blocks of the current image. In some examples, when the video image is coded as an inter-coded (that is, B or P) slice, motion compensation unit 72 produces predictive blocks for a video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 70. The predictive blocks can be produced from one of the reference images within one of the reference picture lists (RPLs). Prediction processing unit 71 can construct the RPLs, for example, List 0 and List 1, using default construction techniques based on reference images stored in reference image memory 82.
[0119] Accordingly, in accordance with one or more techniques of this disclosure, rather than being limited to using other pictures as reference pictures, video decoder 30 may use the current picture itself as a reference picture to predict blocks of video data included in the current picture. For example, prediction processing unit 71 may store a version of the current picture in reference picture memory 82.

[0120] As discussed above, prediction processing unit 71 may generate one or more RPLs for the current picture. For example, prediction processing unit 71 may include the current picture in an RPL for the current picture. In some examples, prediction processing unit 71 may include the current picture at a particular location within the RPL. As one example, prediction processing unit 71 may insert the current picture into the RPL before pictures in a long-term reference picture set (RPS). For example, prediction processing unit 71 may insert the current picture into the RPL with an index value lower than the index values of pictures from a long-term RPS. In some examples, prediction processing unit 71 may insert the current picture into the RPL directly before pictures in a long-term RPS.

[0121] As another example, prediction processing unit 71 may insert the current picture into the RPL after pictures from a long-term RPS, i.e., with an index value greater than the index values of pictures from the long-term RPS.

[0122] As another example, prediction processing unit 71 may insert the current picture into the RPL at a fixed position. For example, prediction processing unit 71 may insert the current picture into the RPL with a fixed index value. In some examples, the fixed index value may be -1, or num_ref_idx_l1_active_minus1 + 1. In some of these examples, prediction processing unit 71 may not receive a syntax element indicating that the current block is coded using Intra BC (i.e., intra_bc_flag) from entropy decoding unit 70.

[0123] In some examples, such as where motion vectors are predicted using temporal motion vector prediction (TMVP), prediction processing unit 71 may apply one or more constraints such that the current picture is not used as the collocated picture. For example, prediction processing unit 71 may receive, from entropy decoding unit 70, a syntax element that specifies the reference index of the collocated picture used for TMVP (e.g., collocated_ref_idx) such that RefPicListX[collocated_ref_idx] is not the current picture, where X is equal to collocated_from_l0_flag.
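The three insertion options described above (before the long-term pictures, after them, or at a fixed index) can be sketched as follows. This is a simplified illustration rather than the normative HEVC process; the list contents, the `(name, is_long_term)` entry format, and the `position` argument are hypothetical.

```python
def insert_current_picture(rpl, current, position, fixed_index=None):
    """Insert `current` into a reference picture list (a plain Python list).

    position: 'before_long_term' -> index lower than any long-term entry,
              'after_long_term'  -> index greater than any long-term entry,
              'fixed'            -> the given fixed_index.
    Entries are (name, is_long_term) tuples in this sketch.
    """
    rpl = list(rpl)  # do not mutate the caller's list
    if position == 'fixed':
        rpl.insert(fixed_index, current)
    else:
        # First index holding a long-term picture (len(rpl) if none).
        first_lt = next((i for i, (_, lt) in enumerate(rpl) if lt), len(rpl))
        if position == 'before_long_term':
            rpl.insert(first_lt, current)
        else:  # 'after_long_term'
            rpl.append(current)
    return rpl

# Short-term pictures first, then long-term, as in RPL construction.
base = [('st0', False), ('st1', False), ('lt0', True)]
cur = ('curr', True)
print(insert_current_picture(base, cur, 'before_long_term'))
# [('st0', False), ('st1', False), ('curr', True), ('lt0', True)]
```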
[0124] As discussed above, video decoder 30 may decode a block of video data of a current picture of the video data based on a predictor block of video data included in the same picture.

[0125] In some examples, prediction processing unit 71 may receive, from entropy decoding unit 70, one or more syntax elements indicating whether the current picture may be present in an RPL used to predict the current picture. As one example, prediction processing unit 71 may receive a single syntax element that indicates whether pictures of the video data may be present in RPLs used to predict the pictures themselves. In some examples, prediction processing unit 71 may receive the single syntax element from a video parameter set (VPS) referred to by the current picture, a sequence parameter set (SPS) referred to by the current picture, or a picture parameter set (PPS) referred to by the current picture.

[0126] As another example, prediction processing unit 71 may receive, from entropy decoding unit 70, a plurality of syntax elements indicating whether the current picture may be present in an RPL used to predict the current picture. For example, prediction processing unit 71 may receive a first syntax element that indicates whether pictures of the video data may be present in RPLs used to predict the pictures themselves. In some examples, prediction processing unit 71 may receive the first syntax element from a VPS referred to by the current picture, an SPS referred to by the current picture, or a PPS referred to by the current picture. In some examples, such as where the first syntax element indicates that pictures of the video data may be present in RPLs used to predict themselves, prediction processing unit 71 may receive a second syntax element that indicates whether the current picture of the video data may be present in the RPL used to predict the current slice. In some examples, prediction processing unit 71 may receive the second syntax element from a slice header of the current slice.

[0127] In some examples, prediction processing unit 71 may not receive a syntax element indicating whether a block is coded using Intra Block Copy (Intra BC). For example, prediction processing unit 71 may not receive intra_bc_flag in the coding unit syntax of blocks that are predicted using Intra BC in accordance with the techniques of this disclosure.

[0128] In some examples, in addition to coding blocks of B slices and P slices using Intra BC, prediction processing unit 71 may construct one or more RPLs that include the current picture in order to decode blocks of an I slice of the current picture. In some examples, the one or more RPLs may include only the current picture. In some examples, prediction processing unit 71 may receive a syntax element that indicates whether the current picture may be used as a reference picture for I slices included in the current picture. In some examples, prediction processing unit 71 may decode the syntax element from a VPS referred to by the current picture, an SPS referred to by the current picture, a PPS referred to by the current picture, or a slice header of a current I slice. In some examples, prediction processing unit 71 may still use both AMVP and merge. In some examples, prediction processing unit 71 may not receive the target reference index for AMVP for the I slice and may derive the target reference index as a fixed value, e.g., 0. In some examples, prediction processing unit 71 may receive the target reference index value for AMVP for the I slice, but the value of the target reference index may be restricted to a fixed value, e.g., 0.
[0129] The pictures stored by reference picture memory 82 may be marked as used for short-term reference, used for long-term reference, or unused for reference. In some examples, such as when the current slice is an I slice and Intra BC is enabled, prediction processing unit 71 may mark the current picture as either long-term or short-term. In some examples, such as when the current slice is an I slice and Intra BC is enabled, prediction processing unit 71 may not mark the current picture as either long-term or short-term.

[0130] In some examples, prediction processing unit 71 may mark the current picture as a long-term reference picture before decoding the current picture and mark the current picture as a short-term reference picture after decoding the current picture.

[0131] In some examples, such as when temporal motion vector prediction (TMVP) is enabled and Intra BC is enabled by treating the current picture as an additional reference picture, prediction processing unit 71 may receive a syntax element that indicates whether the merge reference picture is the current picture or the first picture of an RPL (e.g., RefPicList0[0] or RefPicList1[0]). In some examples, prediction processing unit 71 may identify the reference picture by deriving or receiving a target index for a long-term (Intra BC) reference category and a short-term reference category, and apply a different merge target index based on the category of the reference picture.

[0133] In some examples, prediction processing unit 71 may determine the motion vector with different levels of precision. For example, prediction processing unit 71 may determine motion vectors with integer-pixel precision or with a fractional precision, such as quarter-pixel precision.

[0134] In some examples, prediction processing unit 71 may perform one or more operations to compensate for the precision level of Intra BC motion vectors. As one example, before storing blocks in reference picture memory 82, prediction processing unit 71 may left-shift the motion vector of each block coded with Intra BC, such as by 2 when the finest precision is quarter-pel, or apply other rounding, such as +/-2 after the left shift. As another example, when coding a current slice with Intra BC motion vectors having integer precision, prediction processing unit 71 may process a collocated picture such that the motion vector of each Intra BC coded block is right-shifted, such as by 2 when the finest precision is quarter-pel. In some examples, such as when a current slice is coded with Intra BC motion vectors having the finest motion precision, prediction processing unit 71 may use the motion vectors without such compensation.
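A minimal sketch of the precision compensation described above, assuming quarter-pel units for regular motion vectors: an integer-pel Intra BC vector is left-shifted by 2 before being stored alongside quarter-pel vectors, and right-shifted by 2 (back to integer units) when read by an integer-precision consumer. The function names are illustrative, not from any specification.

```python
def to_quarter_pel(mv_int):
    """Convert an integer-pel (mv_x, mv_y) to quarter-pel units (left shift by 2)."""
    return (mv_int[0] << 2, mv_int[1] << 2)

def to_integer_pel(mv_qpel):
    """Convert a quarter-pel (mv_x, mv_y) back to integer-pel units (right shift by 2).

    Python's >> floors toward negative infinity, which keeps negative
    vectors consistent with the left shift above.
    """
    return (mv_qpel[0] >> 2, mv_qpel[1] >> 2)

bv = (-3, 5)                  # integer-pel Intra BC block vector
stored = to_quarter_pel(bv)   # stored alongside quarter-pel MVs
print(stored)                 # (-12, 20)
assert to_integer_pel(stored) == bv
```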
[0135] In some examples, where the current picture is marked as a long-term reference picture, prediction processing unit 71 may still use normal long-term reference pictures to predict the current picture. To avoid a motion vector referring to a normal long-term reference picture and a motion vector referring to the current picture predicting one another during merge or AMVP, prediction processing unit 71 may distinguish a normal long-term reference picture from the current picture during the merge or AMVP process.
[0136] Motion compensation unit 72 determines prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and uses the prediction information to produce the predictive blocks for the current block being decoded. For example, motion compensation unit 72 uses some of the received syntax elements to determine a prediction mode (e.g., intra- or inter-prediction) used to code the video blocks of the video slice, an inter-prediction slice type (e.g., B or P slice), construction information for one or more of the reference picture lists for the slice, motion vectors for each inter-coded video block of the slice, the inter-prediction status for each inter-coded video block of the slice, and other information to decode the video blocks in the current video slice.

[0137] Motion compensation unit 72 may also perform interpolation based on interpolation filters. Motion compensation unit 72 may use interpolation filters as used by video encoder 20 during encoding of the video blocks to calculate interpolated values for sub-integer pixels of reference blocks.

[0138] Inverse quantization processing unit 76 inverse quantizes, i.e., de-quantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 70. The inverse quantization process may include use of a quantization parameter QPy calculated by video decoder 30 for each video block in the video slice to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied.

[0139] Inverse transform processing unit 78 applies an inverse transform, e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to produce residual blocks in the pixel domain. Video decoder 30 forms a decoded video block by summing the residual blocks from inverse transform processing unit 78 with the corresponding predictive blocks generated by motion compensation unit 72. Summer 80 represents the component or components that perform this summation operation.

[0140] Video decoder 30 may include a filtering unit that may, in some examples, be configured to perform one or more filtering operations, such as deblocking, on the reconstructed video data.

[0141] While a number of different aspects and examples of the techniques are described in this disclosure, the various aspects and examples of the techniques may be performed together or separately from one another. In other words, the techniques should not be strictly limited to the various aspects and examples described above, but may be used in combination or performed together and/or separately. In addition, while certain techniques may be attributed to certain units of video decoder 30, it should be understood that one or more other units of video decoder 30 may also be responsible for carrying out such techniques.

[0142] In this way, video decoder 30 may be configured to implement the example techniques described in this disclosure. For example, video decoder 30 may be configured to receive a bitstream that includes a syntax element indicating whether a picture referring to a PPS may be present in a reference picture list for the picture itself, e.g., for purposes of coding one or more blocks of the picture using the Intra BC mode. That is, video decoder 30 may decode a value for the syntax element indicating that a current picture may be included in a reference picture list for itself. Accordingly, when a block is coded using the Intra BC mode, video decoder 30 may determine a predictor block for that block from within the current picture itself.
[0143] FIG. 6 is a diagram illustrating an example of an Intra Block Copy process, in accordance with one or more techniques of this disclosure. According to one example intra-prediction process, video encoder 20 may select a predictor video block, e.g., from a set of previously coded and reconstructed blocks of video data. In the example of FIG. 6, reconstructed region 108 includes the set of previously coded and reconstructed video blocks. The blocks in reconstructed region 108 may represent blocks that have been decoded and reconstructed by video decoder 30 and stored in reconstructed region memory 92, or blocks that have been decoded and reconstructed in the reconstruction loop of video encoder 20 and stored in reconstructed region memory 64. Current block 102 represents a current block of video data to be coded. Predictor block 104 represents a reconstructed video block, in the same picture as current block 102, that is used for Intra BC prediction of current block 102.

[0144] In the example intra-prediction process, video encoder 20 may determine and encode motion vector 106, which indicates the position of predictor block 104 relative to current block 102, together with the residual signal. For example, as illustrated by FIG. 6, motion vector 106 may indicate the position of the upper-left corner of predictor block 104 relative to the upper-left corner of current block 102. As discussed above, motion vector 106 may also be referred to as an offset vector, displacement vector, or block vector (BV). Video decoder 30 may use the encoded information to decode the current block.
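The relationship between current block 102, predictor block 104, and motion (block) vector 106 can be sketched as a simple sample copy within one picture. The picture is modeled as a 2-D list and the vector gives the offset of the predictor's upper-left corner from the current block's upper-left corner; all names are illustrative.

```python
def intra_bc_predict(picture, cur_x, cur_y, bv_x, bv_y, width, height):
    """Copy a width x height predictor block whose upper-left corner lies at
    (cur_x + bv_x, cur_y + bv_y), i.e., displaced by the block vector."""
    ref_x, ref_y = cur_x + bv_x, cur_y + bv_y
    return [row[ref_x:ref_x + width]
            for row in picture[ref_y:ref_y + height]]

# 4x4 picture; the 2x2 block at (2, 2) is predicted from the block at (0, 0),
# i.e., block vector (-2, -2) points into the already-reconstructed region.
pic = [[10, 11, 12, 13],
       [20, 21, 22, 23],
       [30, 31, 32, 33],
       [40, 41, 42, 43]]
pred = intra_bc_predict(pic, cur_x=2, cur_y=2, bv_x=-2, bv_y=-2, width=2, height=2)
print(pred)  # [[10, 11], [20, 21]]
```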
[0145] As an illustrative example, before decoding a current picture, video decoder 30 may initialize the reconstructed samples of the current picture to 1 << (bitDepth - 1). Video decoder 30 may then store a version of the current picture in a reference picture buffer, such as reference picture memory 82, and mark the current picture as a long-term reference. Video decoder 30 may then include the current picture in a reference picture list (RPL) and assign a reference index (e.g., IdxCur in reference list ListX) to the current picture in the RPL.
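The initialization value 1 << (bitDepth - 1) is simply the mid-gray sample level for the given bit depth, e.g., 128 for 8-bit and 512 for 10-bit video. A one-function sketch:

```python
def init_picture(width, height, bit_depth):
    """Initialize all reconstructed samples to mid-gray, 1 << (bit_depth - 1)."""
    mid = 1 << (bit_depth - 1)
    return [[mid] * width for _ in range(height)]

print(init_picture(4, 2, 8)[0][0])   # 128
print(init_picture(2, 1, 10)[0][0])  # 512
```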
[0146] Video decoder 30 may decode, based on the RPL, a block of video data in the current picture based on a predictor block included in the version of the current picture stored in reference picture memory 82. In other words, when decoding a block of the current picture, video decoder 30 may predict the block from the reference picture with reference index IdxCur (in ListX), which is the current picture itself.

[0147] After decoding the entire picture, video decoder 30 may apply deblocking, SAO, and other operations, such as picture marking, in the same way as described in HEVC version 1. In some examples, video decoder 30 may keep the MV precision the same as for conventional reference pictures (e.g., quarter-pixel precision) when referring to a block in the current picture. In these examples, video decoder 30 may use the interpolation filter defined in HEVC version 1. In some examples, the video decoder may use, in addition to or in place of the interpolation filter defined in HEVC version 1, other interpolation filters, such as a bilinear interpolation filter. In some examples, when an MV references a block in the current picture, video decoder 30 may restrict the precision of the MV to integer-pixel precision. In some examples, video decoder 30 may perform one or more picture-management techniques using the current picture, as described above.

[0148] As discussed above, in some examples, video encoder 20 may encode one or more syntax elements to indicate whether a current picture may be present in an RPL used to predict the current picture.
[0149] As discussed above, video decoder 30 may construct one or more reference picture lists that may include the current picture. For example, in some examples, video decoder 30 may invoke the following process at the beginning of the decoding process for each inter-coded slice. In some examples, such as when decoding a P slice, video decoder 30 may construct a single reference picture list, RefPicList0. In some examples, such as when decoding a B slice, video decoder 30 may also construct a second, independent reference picture list, RefPicList1, in addition to RefPicList0.

[0150] At the beginning of the decoding process for each slice, video decoder 30 may derive the reference picture lists RefPicList0 and, for B slices, RefPicList1 as follows (where the italicized text represents additions relative to the current semantics of the standard):

[0151] Video decoder 30 may set the variable NumRpsCurrTempList0 equal to Max(num_ref_idx_l0_active_minus1 + 1, NumPocTotalCurr + NumAddRefPic) and construct the list RefPicListTemp0 as follows, where currPic is the current picture:

    rIdx = 0
    while( rIdx < NumRpsCurrTempList0 ) {
        for( i = 0; i < NumPocStCurrBefore && rIdx < NumRpsCurrTempList0; rIdx++, i++ )
            RefPicListTemp0[ rIdx ] = RefPicSetStCurrBefore[ i ]
        if( curr_pic_as_ref_enabled_flag )
            RefPicListTemp0[ rIdx++ ] = currPic
        for( i = 0; i < NumPocStCurrAfter && rIdx < NumRpsCurrTempList0; rIdx++, i++ )
            RefPicListTemp0[ rIdx ] = RefPicSetStCurrAfter[ i ]
        for( i = 0; i < NumPocLtCurr && rIdx < NumRpsCurrTempList0; rIdx++, i++ )
            RefPicListTemp0[ rIdx ] = RefPicSetLtCurr[ i ]
    }

[0152] Video decoder 30 may construct the list RefPicList0 as follows, where currPic is the current picture:

    for( rIdx = 0; rIdx <= num_ref_idx_l0_active_minus1; rIdx++ )
        RefPicList0[ rIdx ] = ref_pic_list_modification_flag_l0 ?
            RefPicListTemp0[ list_entry_l0[ rIdx ] ] : RefPicListTemp0[ rIdx ]

[0153] In some examples, such as when the slice is a B slice, video decoder 30 may set the variable NumRpsCurrTempList1 equal to Max(num_ref_idx_l1_active_minus1 + 1, NumPocTotalCurr + NumAddRefPic) and construct the list RefPicListTemp1 in an analogous manner.

[0154] In some examples, such as when the slice is a B slice, video decoder 30 may construct the list RefPicList1 as follows:

    for( rIdx = 0; rIdx <= num_ref_idx_l1_active_minus1; rIdx++ )
        RefPicList1[ rIdx ] = ref_pic_list_modification_flag_l1 ?
            RefPicListTemp1[ list_entry_l1[ rIdx ] ] : RefPicListTemp1[ rIdx ]
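The modified RefPicListTemp0 derivation above can be mirrored in ordinary code. This sketch follows the pseudocode's order: short-term-before pictures, then the current picture when curr_pic_as_ref_enabled_flag is set, then short-term-after and long-term pictures, repeating until the temporary list reaches NumRpsCurrTempList0. It is an illustration of the listed semantics, not the normative process.

```python
def build_ref_pic_list_temp0(st_before, st_after, lt_curr, curr_pic,
                             num_rps_curr_temp_list0, curr_pic_as_ref_enabled):
    """Mirror of the modified RefPicListTemp0 derivation shown above."""
    if not (st_before or st_after or lt_curr or curr_pic_as_ref_enabled):
        raise ValueError('no reference pictures available')
    temp = []
    while len(temp) < num_rps_curr_temp_list0:
        for p in st_before:                      # short-term, POC before current
            if len(temp) >= num_rps_curr_temp_list0:
                break
            temp.append(p)
        if curr_pic_as_ref_enabled and len(temp) < num_rps_curr_temp_list0:
            temp.append(curr_pic)                # the current picture itself
        for p in st_after:                       # short-term, POC after current
            if len(temp) >= num_rps_curr_temp_list0:
                break
            temp.append(p)
        for p in lt_curr:                        # long-term pictures
            if len(temp) >= num_rps_curr_temp_list0:
                break
            temp.append(p)
    return temp

# One picture of each kind plus the current picture: the current picture
# lands before the short-term-after and long-term entries.
print(build_ref_pic_list_temp0(['stB0'], ['stA0'], ['lt0'], 'curr', 4, True))
# ['stB0', 'curr', 'stA0', 'lt0']
```

When ref_pic_list_modification_flag_l0 is 0, the final RefPicList0 is simply the first num_ref_idx_l0_active_minus1 + 1 entries of this temporary list.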
[0155] FIG. 7 is a flow diagram illustrating example operations of a video encoder to encode a block of video data in a picture based on a predictor block included in the same picture, in accordance with one or more techniques of this disclosure. The techniques of FIG. 7 may be performed by one or more video encoders, such as video encoder 20 illustrated in FIGS. 1 and 3. For purposes of illustration, the techniques of FIG. 7 are described in the context of video encoder 20, although video encoders having configurations different from that of video encoder 20 may perform the techniques of FIG. 7.

[0156] In accordance with one or more techniques of this disclosure, video encoder 20 may store, in a reference picture buffer, a version of a current picture of the video data (702). For example, prediction processing unit 42 may store an initialized version of the current picture in reference picture memory 64.

[0157] Video encoder 20 may select a current block (704) and determine a prediction mode for the current block (706). Video encoder 20 may select the prediction mode from among a variety of different modes, which may be tested using, for example, BD-rate, and video encoder 20 may select the mode that provides the best BD-rate performance. In the example of FIG. 7, video encoder 20 determines to encode the current block using Intra BC.

[0158] Video encoder 20 may determine a predictor block for the current block (708). For example, video encoder 20 may determine the predictor block as the block in the current picture found to best match the current block in terms of pixel difference, which may be determined by sum of absolute differences (SAD), sum of squared differences (SSD), or other difference metrics.
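A brute-force version of this search can be sketched with SAD as the difference metric: every candidate position in the already-reconstructed region is scored and the lowest-SAD block wins. Real encoders restrict the search area and use faster strategies; the picture layout and candidate list here are purely illustrative.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized sample blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(pic, x, y, w, h):
    return [row[x:x + w] for row in pic[y:y + h]]

def best_match(pic, cur_x, cur_y, w, h, candidates):
    """Return the candidate (x, y) whose block best matches the current block."""
    cur = get_block(pic, cur_x, cur_y, w, h)
    return min(candidates, key=lambda xy: sad(cur, get_block(pic, *xy, w, h)))

pic = [[10, 11, 50, 51],
       [20, 21, 60, 61],
       [45, 46, 0, 0],
       [55, 56, 0, 0]]
# Current 2x2 block at (2, 0); candidates drawn from the reconstructed region.
print(best_match(pic, 2, 0, 2, 2, [(0, 0), (0, 2)]))  # (0, 2)
```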
[0159] Video encoder 20 may insert the current picture into a reference picture list (RPL) used to predict the current picture (710). In some examples, video encoder 20 may encode a syntax element that indicates whether pictures of the video data may be present in RPLs used to predict the pictures themselves.

[0160] Video encoder 20 may determine an index of the current picture in the RPL (712). For example, when the current picture is inserted into the RPL with a fixed index value, video encoder 20 may determine that the index of the current picture in the RPL is the fixed index value.

[0161] Video encoder 20 may calculate a residual block for the current block (714). For example, summer 50 may subtract samples of the determined predictor block from samples of the current block to calculate the residual block.

[0162] Video encoder 20 may transform and quantize the residual block (716). For example, transform processing unit 52 of video encoder 20 may apply a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising residual transform coefficient values. Quantization processing unit 54 of video encoder 20 may quantize the transform coefficients to further reduce bit rate.
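The quantization step can be illustrated with a toy uniform scalar quantizer: dividing each transform coefficient by a step size shrinks the values to be entropy coded, at the cost of reconstruction error. The fixed step size here is a simplification; HEVC derives the actual step from the quantization parameter QP.

```python
def quantize(coeffs, step):
    """Uniform scalar quantization: round each coefficient to the nearest step."""
    return [int(round(c / step)) for c in coeffs]

def dequantize(levels, step):
    """Inverse quantization: scale the levels back by the step size."""
    return [lvl * step for lvl in levels]

coeffs = [104, -37, 6, -2]          # example transform coefficients
levels = quantize(coeffs, step=10)  # [10, -4, 1, 0] -- small values to entropy code
recon = dequantize(levels, 10)      # [100, -40, 10, 0] -- lossy reconstruction
print(levels, recon)
```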
[0163] Video encoder 20 may entropy encode the prediction mode, the index, and the quantized transform coefficients (718). For example, entropy encoding unit 56 may entropy encode the prediction mode used to predict the current block, the index of the reference picture for the current block (which may be the index of the current picture in the RPL), and the quantized transform coefficients. In this way, video encoder 20 may perform Intra BC.

[0164] In some examples, video encoder 20 may perform the example operations in the order shown in FIG. 7. In some examples, video encoder 20 may perform the example operations in an order different from the order shown in FIG. 7. For example, in some examples, video encoder 20 may insert the current picture into the RPL used to predict the current picture (710) before selecting the current block of the current picture (704).

[0165] FIG. 8 is a flow diagram illustrating example operations of a video decoder to decode a block of video data in a picture based on a predictor block included in the same picture, in accordance with one or more techniques of this disclosure. The techniques of FIG. 8 may be performed by one or more video decoders, such as video decoder 30 illustrated in FIGS. 1 and 5. For purposes of illustration, the techniques of FIG. 8 are described in the context of video decoder 30, although video decoders having configurations different from that of video decoder 30 may perform the techniques of FIG. 8.

[0166] In accordance with one or more techniques of this disclosure, video decoder 30 may store, in a reference picture buffer, a version of a current picture of the video data (802). For example, prediction processing unit 71 may store an initialized version of the current picture in reference picture memory 82.

[0167] Video decoder 30 may select a current block of the current picture (804) and entropy decode a prediction mode, a reference picture index, and quantized transform coefficients for the current block (806). For example, entropy decoding unit 70 may entropy decode one or more syntax elements that indicate that the prediction mode for the current block is Intra BC and the index of a reference picture used to predict the current block (e.g., ref_idx_lX). In the example of FIG. 8, the reference picture index refers to the current picture.

[0168] Video decoder 30 may insert the current picture into a reference picture list (RPL) used to predict the current picture (808). In some examples, video decoder 30 may determine whether to insert the current picture into the RPL based on the presence or value of one or more syntax elements. For example, video decoder 30 may determine whether to insert the current picture into the RPL based on the value of a syntax element that indicates whether pictures of the video data may be present in RPLs used to predict the pictures themselves (e.g., curr_pic_as_ref_enabled_flag). As discussed above, in some examples, video decoder 30 may insert the current picture into the RPL with an index value lower than the index values of pictures in a long-term RPS, an index value greater than the index values of pictures in a long-term RPS, or a fixed index value. In some examples, video decoder 30 may construct the RPL used to predict the current picture such that it includes only the current picture. In some examples, video decoder 30 may construct the RPL used to predict the current picture such that it includes the current picture and one or more other pictures of the video data.

[0169] Video decoder 30 may determine a predictor block for the current block (810). For example, prediction processing unit 71 may determine the predictor block for the current block based on the reference picture index, which in this example refers to the index of the current picture in the RPL, and a motion vector that represents a displacement between the current block and the predictor block.

[0171] Video decoder 30 may reconstruct the current block (814). For example, summer 80 may add the residual block to the predictor block to reconstruct the current block. Video decoder 30 may update, after reconstructing the current block, the version of the current picture in the reference picture buffer with an updated version of the current picture that includes the reconstructed current block; for example, summer 80 may store the reconstructed samples of the current block in reference picture memory 82, e.g., to allow a later block to use one or more samples of the current block as part or all of a predictor block. In this way, video decoder 30 may perform Intra BC.

[0172] In some examples, video decoder 30 may perform the example operations in the order shown in FIG. 8. In some examples, video decoder 30 may perform the example operations in an order different from the order shown in FIG. 8.
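Step (814) and the buffer update can be sketched together: the residual is added to the predictor, the result is clipped to the valid sample range, and the reconstructed samples are written back into the stored version of the current picture so that later Intra BC blocks may reference them. The function name and the default bit depth are illustrative.

```python
def reconstruct_block(ref_pic, pred, residual, x, y, bit_depth=8):
    """Add residual to predictor, clip to [0, 2**bit_depth - 1], and write the
    reconstructed samples back into the current picture's buffer in place."""
    max_val = (1 << bit_depth) - 1
    for j, (pred_row, res_row) in enumerate(zip(pred, residual)):
        for i, (p, r) in enumerate(zip(pred_row, res_row)):
            ref_pic[y + j][x + i] = min(max(p + r, 0), max_val)
    return ref_pic

# Mid-gray initialized buffer; reconstruct a 1x2 block at (x=1, y=0).
buf = [[128] * 4 for _ in range(2)]
reconstruct_block(buf, pred=[[100, 200]], residual=[[-5, 80]], x=1, y=0)
print(buf[0])  # [128, 95, 255, 128] -- 200 + 80 clipped to 255
```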
[0173] The following enumerated examples may illustrate one or more aspects of this disclosure:

[0174] 1. A method of encoding or decoding video data, the method comprising: storing, by a video coder and in a reference picture buffer, a version of a current picture of the video data; inserting an indication of the current picture into a reference picture list (RPL) used during prediction of blocks of the current picture; and coding, by the video coder and based on the RPL, a first block of video data in the current picture based on a predictor block of video data included in the version of the current picture stored in the reference picture buffer.

[0175] 2. The method of example 1, further comprising: updating, by the video coder and after coding the first block, the version of the current picture in the reference picture buffer with an updated version of the current picture that includes the coded first block; and coding, by the video coder and based on the RPL, a second block of video data in the current picture based on a predictor block included in the updated version of the current picture stored in the reference picture buffer.

[0176] 3. The method of example 1, further comprising: coding, by the video coder, a syntax element that indicates whether pictures of the video data may be present in RPLs used to predict the pictures themselves.

[0178] 5. The method of example 4, wherein the syntax element is a first syntax element and the first block is included in a current slice of the current picture, the method further comprising: coding, by the video coder and based on the first syntax element indicating that pictures of the video data may be present in RPLs used to predict the pictures themselves, a second syntax element that indicates whether the current picture of the video data may be present in the RPL used to predict the current slice, wherein the second syntax element is included in a header of the current slice, and wherein the determination to insert the current picture of the video data into an RPL used to predict the current picture is further based on the second syntax element.

[0179] 6. The method of example 5, wherein coding the second syntax element comprises coding the second syntax element in a header of the current slice, before other syntax elements concerning modification of the RPL used to predict the current slice.

[0180] 7. The method of example 4, wherein the method does not include coding a syntax element that indicates whether the first block is coded using Intra Block Copy (Intra BC).

[0181] 8. The method of example 1, wherein the block is included in a current slice of the current picture, and wherein a syntax element indicating a collocated reference index for the current slice indicates a picture that is not the current picture.

[0182] 9. The method of example 1, wherein the block is included in a current slice of the current picture, and wherein the predictor block is included in the current slice.

[0183] 10. The method of example 1, wherein each entry in the RPL has an index value, and wherein inserting the current picture into the RPL used to predict the current picture comprises constructing the RPL used to predict the current picture, based on one or more reference picture sets (RPSs), by at least one of: inserting the current picture into the RPL with an index value lower than the index values of pictures in a long-term RPS; inserting the current picture into the RPL with an index value greater than the index values of pictures in a long-term RPS; or inserting the current picture into the RPL with a fixed index value.
[0184] [0184] 11. The method, from "j - according to claim 1, and that the block is included in a current slice of the current image, in which the slice a £ uai is a intra slice, and in which to insert the current image-m: 'a RPL used to predict the current image comprises constructing the RPL used to predict the current image in such a way that the RPL used to predict the image up to and including only the current image . 7'j_, t; _ "i '
[0185] [0185] 12. The method, according to claim 1, in which the block is included in a current slice of the current image, in which the current slice is an inter slice, and in which the current image in the Rpl used for predicting the current image comprises constructing the used RPL to predict the current image in such a way that the RPL used to predict the current image includes the current image and one or more other video data images.
[0186] [0186] 13. The third, according to claim 1, further comprising: mark, by the video encoder before encoding the bLocD of the current image, the current image of the video data as a reference image in the long term; and mark, by video coder and after coding the imageiTrjatua1 block, the current image of the video data as an image: of short term reference. P
[0187] 14. The method according to claim 1, in which encoding the block comprises encoding the block, the method further comprising encoding, in an encoded video bit stream, a representation of a vector that represents a displacement between the video data block and the video data predictor block.
[0188] 15. The method according to claim 1, in which encoding the block comprises decoding the block, the method further comprising determining, based on an encoded video bit stream, a vector that represents a displacement between the video data block and the video data predictor block.
[0189] 16. The method according to claim 1, in which storing the version of the current image in the reference image buffer comprises: storing,
[0190] 17. A device for encoding or decoding video data, the device comprising: a reference image buffer configured to store one or more images of the video data; and one or more processors configured to perform the method of any combination of examples 1 to 16.
[0191] 18. A device for encoding or decoding video data, the device comprising means for carrying out the method of any combination of examples 1 to 16.
[0192] 19. A computer-readable storage medium storing instructions that, when executed, cause one or more processors of a video encoder to perform the method of any combination of examples 1 to 16.
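The three RPL placement options of example 10 can be sketched as follows. This is a simplified illustration with hypothetical picture and RPS structures, not the HEVC reference software:

```python
def build_rpl(current_pic, short_term_rps, long_term_rps, mode):
    """Build a reference picture list (RPL) that also contains the
    current picture, per the placement options of example 10."""
    if mode == "before_long_term":
        # Current picture gets an index value less than every
        # long-term RPS entry.
        return short_term_rps + [current_pic] + long_term_rps
    if mode == "after_long_term":
        # Current picture gets an index value greater than every
        # long-term RPS entry.
        return short_term_rps + long_term_rps + [current_pic]
    if mode == "fixed_index":
        # Current picture always at a fixed index, here 0 (an
        # assumed choice for illustration).
        return [current_pic] + short_term_rps + long_term_rps
    raise ValueError(f"unknown mode: {mode}")
```

For an intra slice (example 11), the same helper would be called with empty RPS lists, so that the resulting RPL contains only the current picture.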
[0193] Certain aspects of the description have been described with respect to the HEVC standard for purposes of illustration. However, the techniques described in this description may be useful for other video encoding processes, including other standard or proprietary video encoding protocols not yet developed, such as the H.266 video coding standard currently under development.
[0194] A video coder, as described in the present description, can refer to a video encoder or a video decoder. Likewise, a video coding unit can refer to a video encoder or a video decoder. In the same way, video coding can refer to video encoding or video decoding, as applicable.
[0195] It should be recognized that, depending on the example, certain acts or events of any of the techniques described in this document can be performed in a different sequence, added, merged, or left out entirely (for example, not all of the described acts or events are necessary for the practice of the techniques). In addition, in certain examples, acts or events can be performed simultaneously, for example, through multi-threaded processing, interrupt processing, or multiple processors, instead of sequentially.
[0196] In one or more examples, the functions described can be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. The computer-readable medium may comprise a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium, including any medium that facilitates the transfer of a computer program from one place to another, for example, according to a communication protocol.
[0197] In this way, the computer-readable medium can generally correspond to (1) a tangible computer-readable storage medium that is non-transitory or (2) a communication medium such as a signal or carrier wave. A data storage medium can be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this description.
[0198] By way of example, and not limitation, such a computer-readable storage medium can comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwaves, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwaves are included in the definition of medium.
[0199] It should be understood, however, that computer-readable storage media and data storage media do not comprise connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory tangible storage media. Disk and disc, as used here, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks normally reproduce data magnetically, while discs reproduce data optically with lasers.
[0200] Instructions can be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuits. Accordingly, the term "processor," as used herein, can refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described here can be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. In addition, the techniques can be fully implemented in one or more circuits or logic elements.
[0201] The techniques of this description can be implemented in a wide variety of devices or equipment, including a wireless handset, an integrated circuit (IC), or a set of ICs (for example, a chip set). Various components, modules, or units are described in this description to emphasize the functional aspects of devices configured to perform the described techniques, but they do not necessarily require realization by different hardware units.
On the contrary, as described above, the various units can be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, together with suitable software and/or firmware.
[0202] Various examples of the description have been presented. These and other examples are included within the scope of the following claims.
Claims:
Claims (14)
[1]
1. Method of encoding or decoding video data, the method characterized by comprising: encoding, by a video encoder, a syntax element that indicates whether images of the video data can be present in reference image lists (RPLs) used to temporarily predict the images themselves; store, by the video encoder and in a reference image buffer, a version of a current image of the video data; insert, based on the syntax element indicating that images of the video data may be present in RPLs used to temporarily predict the images themselves, an indication of the current image in an RPL used during temporal prediction of blocks of the current image; and encode, by the video encoder and based on the RPL, a first block of video data in the current image based on a predictive block of video data included in the version of the current image stored in the reference image buffer.
[2]
2. Method, according to claim 1, characterized in that it further comprises: updating, by the video encoder and after encoding the first block, the version of the current image in the reference image buffer with an updated version of the current image that includes the first coded block; and encode, by the video encoder, based on the RPL, a second block of video data in the current image based on a predictor block included in the updated version of the current image stored in the reference image buffer.
[3]
3. Method, according to claim 1, characterized by the fact that encoding the syntax element comprises encoding the syntax element in a set of video parameters (VPS) referred to by the current image, a set of sequence parameters (SPS ) referred to by the current image, or a set of image parameters (PPS) referred to by the current image.
[4]
4. Method, according to claim 3, characterized by the fact that the method does not include encoding a syntax element that indicates whether the first block is encoded using Intra Block Copy (Intra BC).
[5]
5. Method, according to claim 1, characterized by the fact that the block is included in a current slice of the current image, and in which a syntax element indicating a co-located reference index for the current slice indicates an image which is not the current image.
[6]
6. Method, according to claim 1, characterized by the fact that the block is included in a current slice of the current image, and in which the predictor block is included in the current slice.
[7]
7. Method, according to claim 1, characterized by the fact that each entry in the RPL has an index value, in which inserting the current image in the RPL used to temporarily predict the current image comprises building the RPL used to temporarily predict the current image based on one or more reference image sets (RPSs) of at least: insert the current image into the RPL with an index value less than the image index values in a long-term RPS;
insert the current image into the RPL with an index value greater than the image index values in a long-term RPS; or insert the current image into the RPL with a fixed index value.
[8]
8. Method, according to claim 1, characterized by the fact that the block is included in a current slice of the current image, in which the current slice is an intra slice, and in which to insert the current image in the RPL used to predict temporarily the current image comprises building the RPL used to temporarily predict the current image so that the RPL used to temporarily predict the current image includes only the current image.
[9]
9. Method, according to claim 1, characterized by the fact that the block is included in a current slice of the current image, in which the current slice is an inter-slice, and in which to insert the current image in the RPL used to predict temporarily the current image comprises constructing the RPL used to temporarily predict the current image so that the RPL used to temporarily predict the current image includes the current image and one or more other images of video data.
[10]
10. Method according to claim 1, characterized in that it further comprises: marking, by the video encoder and before encoding the current image block, the current image of the video data as a long-term reference image; and mark, by the video encoder and after encoding the current image block, the current image of the video data as a short-term reference image.
[11]
11. Method according to claim 1, characterized by the fact that encoding the block comprises encoding the block, the method further comprising encoding, in a stream of encoded video bits, a representation of a vector representing a displacement between the video data block and the video data predictor block.
[12]
12. Method according to claim 1, characterized by the fact that encoding the block comprises decoding the block, the method further comprising determining, based on an encoded video bit stream, a vector representing a displacement between the block video data and the video data predictor block.
[13]
13. Device for encoding or decoding video data, the device characterized by comprising: means for encoding a syntax element that indicates whether images of the video data may be present in reference image lists (RPLs) used to predict the images themselves same; means for storing, in a reference image buffer, a version of a current image of the video data; means for inserting, based on the syntax element indicating which images of the video data may be present in RPLs used to predict the images themselves, an indication of the current image in an RPL used during block prediction of the current image; and means for encoding based on the RPL, a first block of video data in the current image based on a predictor block of video data included in the version of the current image stored in the reference image buffer.
[14]
14. Computer-readable storage medium, characterized by the fact that it stores instructions that, when executed, make one or more processors of a video encoder perform the method according to any one of claims 1 to 12.
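As a rough sketch of the prediction in claims 1, 11, and 12: the predictor block is fetched from the already-reconstructed region of the current picture at a position given by a displacement (block) vector. The array layout and function name below are hypothetical illustrations, not part of the claimed method:

```python
def predict_block(reconstructed, x, y, bw, bh, vector):
    """Return the bw-by-bh predictor block for the block at (x, y),
    displaced by the block vector (vx, vy) within the same picture.
    `reconstructed` is a list of rows of already-decoded samples."""
    vx, vy = vector
    px, py = x + vx, y + vy
    # The predictor must lie inside the already-decoded region.
    if px < 0 or py < 0:
        raise ValueError("predictor outside the decoded region")
    return [row[px:px + bw] for row in reconstructed[py:py + bh]]
```

On the decoder side (claim 12), the vector is determined from the encoded bit stream; on the encoder side (claim 11), it is chosen by a search over the decoded region and its representation is then encoded into the bit stream.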
[FIG. 1 (sheet 1/8): block diagram of an example system 10 comprising a source device 12 (with a video source 18, a video encoder 20, and an output interface) coupled via a link 16 to a destination device 14 (with an input interface, a video decoder 30, and a display device 31). The text of the remaining drawing sheets is not recoverable from the scan.]
Similar technologies:
Publication number | Publication date | Title
BR112016023406A2|2020-12-22|USE A CURRENT IMAGE AS A REFERENCE FOR VIDEO ENCODING
US10448010B2|2019-10-15|Motion vector prediction for affine motion models in video coding
KR102051718B1|2019-12-03|Intra block copy prediction restrictions for parallel processing
US20160337662A1|2016-11-17|Storage and signaling resolutions of motion vectors
JP6378433B2|2018-08-22|AMVP and merge candidate list derivation for intra BC and inter prediction integration
ES2767103T3|2020-06-16|Determination of quantization parameter values | and delta QP values for palette-encoded blocks in video encoding
US10687077B2|2020-06-16|Motion information propagation in video coding
US20190230376A1|2019-07-25|Advanced motion vector prediction speedups for video coding
JP2018521539A|2018-08-02|Search range determination for intercoding within a specific picture of video data
JP2018507616A|2018-03-15|Flexible segmentation of prediction units
KR20210091174A|2021-07-21|Simplification of history-based motion vector prediction
KR102187729B1|2020-12-07|Inter-view predicted motion vector for 3d video
BR112016008358A2|2021-08-03|Combined bi-predictive fusion candidates for 3d video encoding
KR101722890B1|2017-04-05|More accurate advanced residual prediction | for texture coding
KR20150103122A|2015-09-09|Temporal motion vector prediction for video coding extensions
BR112020006232A2|2020-10-13|Affine prediction motion information encoding for video encoding
US10542280B2|2020-01-21|Encoding optimization with illumination compensation and integer motion vector restriction
BR112019019423A2|2020-04-14|intrapredict mode propagation
EP3522531A1|2019-08-07|Method for processing picture based on intra-prediction mode and apparatus for same
BR112021009606A2|2021-08-10|motion vector refinement on the decoder side
KR20150139953A|2015-12-14|Backward view synthesis prediction
TW202021354A|2020-06-01|Motion vector predictor list generation
BR112020011099A2|2020-11-17|intra-prediction with distant neighboring pixels
US10958932B2|2021-03-23|Inter-prediction coding of video data using generated motion vector predictor list including non-adjacent blocks
BR112021009732A2|2021-08-17|spatiotemporal motion vector prediction patterns for video encoding
Patent family:
Publication number | Publication date
CN106105215B|2020-04-21|
US20190349580A1|2019-11-14|
US10432928B2|2019-10-01|
EP3120555A1|2017-01-25|
EP3661208A1|2020-06-03|
MX2016011590A|2016-12-20|
WO2015143395A1|2015-09-24|
CN111818343A|2020-10-23|
DK3661208T3|2022-01-24|
MX360488B|2018-10-24|
JP6640102B2|2020-02-05|
US10863171B2|2020-12-08|
JP2017513332A|2017-05-25|
EP3661208B1|2021-12-22|
US20150271487A1|2015-09-24|
CN106105215A|2016-11-09|
KR20160135306A|2016-11-25|
Cited references:
Publication number | Filing date | Publication date | Applicant | Title

JP4166781B2|2005-12-09|2008-10-15|松下電器産業株式会社|Motion vector detection apparatus and motion vector detection method|
US8320456B2|2007-01-17|2012-11-27|Lg Electronics Inc.|Method and apparatus for processing a video signal|
CN101682784A|2007-04-19|2010-03-24|汤姆逊许可证公司|Adaptive reference picture data generation for intra prediction|
US8548041B2|2008-09-25|2013-10-01|Mediatek Inc.|Adaptive filter|
US8363721B2|2009-03-26|2013-01-29|Cisco Technology, Inc.|Reference picture prediction for video coding|
WO2011128272A2|2010-04-13|2011-10-20|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Hybrid video decoder, hybrid video encoder, data stream|
US8603799B2|2010-07-30|2013-12-10|Bioworks, Inc.|Growth enhancement and control of bacterial and fungal plant diseases with Streptomyces scopuliridis |
CN105847830B|2010-11-23|2019-07-12|Lg电子株式会社|Prediction technique between being executed by encoding apparatus and decoding apparatus|
US9008181B2|2011-01-24|2015-04-14|Qualcomm Incorporated|Single reference picture list utilization for interprediction video coding|
US9288500B2|2011-05-12|2016-03-15|Texas Instruments Incorporated|Luma-based chroma intra-prediction for video coding|
KR20120138712A|2011-06-15|2012-12-26|광운대학교 산학협력단|Method and apparatus for scalable encoding and decoding|
US10298939B2|2011-06-22|2019-05-21|Qualcomm Incorporated|Quantization in video coding|
US8396867B2|2011-07-13|2013-03-12|Nimblecat, Inc.|Identifying and ranking networked biographies and referral paths corresponding to selected qualifications|
US9237356B2|2011-09-23|2016-01-12|Qualcomm Incorporated|Reference picture list construction for video coding|
KR20130037161A|2011-10-05|2013-04-15|한국전자통신연구원|Method and apparatus of improved inter-layer motion prediction for scalable video coding|
US8693793B2|2012-01-19|2014-04-08|Sharp Laboratories Of America, Inc.|Reducing reference picture set signal overhead on an electronic device|
US20130188717A1|2012-01-20|2013-07-25|Qualcomm Incorporated|Motion prediction in svc using partition mode without split flag|
PT2822276T|2012-02-29|2018-12-18|Lg Electronics Inc|Inter-layer prediction method and apparatus using same|
US9503702B2|2012-04-13|2016-11-22|Qualcomm Incorporated|View synthesis mode for three-dimensional video coding|
PL2840789T3|2012-04-15|2018-11-30|Samsung Electronics Co., Ltd.|Parameter update method for entropy decoding of conversion coefficient level, and entropy decoding device of conversion coefficient level using same|
US9532046B2|2012-04-16|2016-12-27|Qualcomm Incorporated|Reference picture set prediction for video coding|
DK2830313T3|2012-04-16|2019-10-28|Samsung Electronics Co Ltd|METHOD AND APPARATUS FOR DETERMINING A REFERENCE IMAGE SET.|
US9491461B2|2012-09-27|2016-11-08|Qualcomm Incorporated|Scalable extensions to HEVC and temporal motion vector prediction|
US9392268B2|2012-09-28|2016-07-12|Qualcomm Incorporated|Using base layer motion information|
KR20220011223A|2012-09-28|2022-01-27|엘지전자 주식회사|Video decoding method and apparatus using the same|
EP2966868B1|2012-10-09|2018-07-18|HFI Innovation Inc.|Method for motion information prediction and inheritance in video coding|
JP6166380B2|2012-12-14|2017-07-19|エルジー エレクトロニクス インコーポレイティド|Video encoding method, video decoding method, and apparatus using the same|
US9674542B2|2013-01-02|2017-06-06|Qualcomm Incorporated|Motion vector prediction for video coding|
US20160065982A1|2013-04-05|2016-03-03|Samsung Electronics Co., Ltd.|Method for determining whether or not present image is reference image, and apparatus therefor|
US20140301456A1|2013-04-08|2014-10-09|Qualcomm Incorporated|Inter-layer picture signaling and related processes|
US10015515B2|2013-06-21|2018-07-03|Qualcomm Incorporated|Intra prediction from a predictive block|
US9565454B2|2013-06-24|2017-02-07|Microsoft Technology Licensing, Llc|Picture referencing control for video decoding using a graphics processor|
US20150016533A1|2013-07-12|2015-01-15|Qualcomm Incorporated|Intra motion compensation extensions|
US10313682B2|2013-08-26|2019-06-04|Qualcomm Incorporated|Determining regions when performing intra block copying|
US20150063454A1|2013-08-27|2015-03-05|Qualcomm Incorporated|Residual prediction for intra block copying|
US20150071357A1|2013-09-12|2015-03-12|Qualcomm Incorporated|Partial intra block copying for video coding|
US20150098504A1|2013-10-09|2015-04-09|Qualcomm Incorporated|Block vector coding for intra block copying|
US10531116B2|2014-01-09|2020-01-07|Qualcomm Incorporated|Adaptive motion vector resolution signaling for video coding|
US20150264383A1|2014-03-14|2015-09-17|Mitsubishi Electric Research Laboratories, Inc.|Block Copy Modes for Image and Video Coding|
US10432928B2|2014-03-21|2019-10-01|Qualcomm Incorporated|Using a current picture as a reference for video coding|
US10477232B2|2014-03-21|2019-11-12|Qualcomm Incorporated|Search region determination for intra block copy in video coding|
US9924191B2|2014-06-26|2018-03-20|Qualcomm Incorporated|Filters for advanced residual prediction in video coding|
CA2965720C|2014-11-20|2020-04-14|Hfi Innovation Inc.|Method of motion vector and block vector resolution control|
WO2017041692A1|2015-09-08|2017-03-16|Mediatek Inc.|Method and system of decoded picture buffer for intra block copy mode|WO2015054813A1|2013-10-14|2015-04-23|Microsoft Technology Licensing, Llc|Encoder-side options for intra block copy prediction mode for video and image coding|
CN105765974B|2013-10-14|2019-07-02|微软技术许可有限责任公司|Feature for the intra block of video and image coding and decoding duplication prediction mode|
US10432928B2|2014-03-21|2019-10-01|Qualcomm Incorporated|Using a current picture as a reference for video coding|
AU2014202921B2|2014-05-29|2017-02-02|Canon Kabushiki Kaisha|Method, apparatus and system for de-blocking a block of video samples|
WO2015192353A1|2014-06-19|2015-12-23|Microsoft Technology Licensing, Llc|Unified intra block copy and inter prediction modes|
US10412387B2|2014-08-22|2019-09-10|Qualcomm Incorporated|Unified intra-block copy and inter-prediction|
WO2016050219A1|2014-09-30|2016-04-07|Mediatek Inc.|Method of adaptive motion vetor resolution for video coding|
CN105874795B|2014-09-30|2019-11-29|微软技术许可有限责任公司|When wavefront parallel processing is activated to the rule of intra-picture prediction mode|
US9918105B2|2014-10-07|2018-03-13|Qualcomm Incorporated|Intra BC and inter unification|
US9854237B2|2014-10-14|2017-12-26|Qualcomm Incorporated|AMVP and merge candidate list derivation for intra BC and inter prediction unification|
CN111818340A|2015-05-29|2020-10-23|寰发股份有限公司|Method for managing decoding image buffer and decoding video bit stream|
JP6722701B2|2015-06-08|2020-07-15|ヴィド スケール インコーポレイテッド|Intra block copy mode for screen content encoding|
EP3310054A4|2015-06-11|2019-02-27|Intellectual Discovery Co., Ltd.|Method for encoding and decoding image using adaptive deblocking filtering, and apparatus therefor|
WO2017041692A1|2015-09-08|2017-03-16|Mediatek Inc.|Method and system of decoded picture buffer for intra block copy mode|
US10097836B2|2015-09-28|2018-10-09|Samsung Electronics Co., Ltd.|Method and device to mark a reference picture for video coding|
US10666936B2|2015-12-17|2020-05-26|Samsung Electronics Co., Ltd.|Video decoding method and video decoding apparatus using merge candidate list|
CA3025490A1|2016-05-28|2017-12-07|Mediatek Inc.|Method and apparatus of current picture referencing for video coding using affine motion compensation|
US20180199062A1|2017-01-11|2018-07-12|Qualcomm Incorporated|Intra prediction techniques for video coding|
EP3577899A4|2017-01-31|2020-06-17|Sharp Kabushiki Kaisha|Systems and methods for scaling transform coefficient level values|
CN109089119B|2017-06-13|2021-08-13|浙江大学|Method and equipment for predicting motion vector|
US11012715B2|2018-02-08|2021-05-18|Qualcomm Incorporated|Intra block copy for video coding|
US10798376B2|2018-07-17|2020-10-06|Tencent America LLC|Method and apparatus for video coding|
US11140418B2|2018-07-17|2021-10-05|Qualcomm Incorporated|Block-based adaptive loop filter design and signaling|
WO2020086317A1|2018-10-23|2020-04-30|Tencent America Llc.|Method and apparatus for video coding|
WO2020103943A1|2018-11-22|2020-05-28|Beijing Bytedance Network Technology Co., Ltd.|Using collocated blocks in sub-block temporal motion vector prediction mode|
CN111372085B|2018-12-25|2021-07-09|厦门星宸科技有限公司|Image decoding device and method|
CN109874011B|2018-12-28|2020-06-09|杭州海康威视数字技术股份有限公司|Encoding method, decoding method and device|
CN113383543A|2019-02-02|2021-09-10|北京字节跳动网络技术有限公司|Prediction using extra buffer samples for intra block copy in video coding|
KR20210121014A|2019-02-02|2021-10-07|베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드|Buffer initialization for intra block copying in video coding|
US20200252621A1|2019-02-06|2020-08-06|Tencent America LLC|Method and apparatus for neighboring block availability in video coding|
WO2020180100A1|2019-03-04|2020-09-10|엘지전자 주식회사|Intra block coding-based video or image coding|
WO2021027862A1|2019-08-13|2021-02-18|Beijing Bytedance Network Technology Co., Ltd.|Motion precision in sub-block based inter prediction|
WO2021118263A1|2019-12-12|2021-06-17|엘지전자 주식회사|Method and device for signaling image information|
WO2021054869A2|2020-01-23|2021-03-25|Huawei Technologies Co., Ltd.|Reference picture management methods for video coding|
Legal status:
2021-01-05| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2021-01-12| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-01-12| B15K| Others concerning applications: alteration of classification|Free format text: A CLASSIFICACAO ANTERIOR ERA: H04N 19/503 Ipc: H04N 19/176 (2014.01), H04N 19/503 (2014.01), H04N |
2021-12-14| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application number | Filing date | Title
US201461969022P| true| 2014-03-21|2014-03-21|
US61/969,022|2014-03-21|
US201462000437P| true| 2014-05-19|2014-05-19|
US62/000,437|2014-05-19|
US14/663,155|2015-03-19|
US14/663,155|US10432928B2|2014-03-21|2015-03-19|Using a current picture as a reference for video coding|
PCT/US2015/021866|WO2015143395A1|2014-03-21|2015-03-20|Using a current picture as a reference for video coding|