Patent abstract:
Patent summary: "Adaptive content partitioning for next generation video prediction and coding". The present invention relates to techniques for content adaptive partitioning for prediction and coding. A method for video coding partitioning comprises receiving a video frame; segmenting the video frame into a plurality of tiles, coding units, or superfragments; determining a chosen partitioning technique for at least one tile, coding unit, or superfragment, the chosen partitioning technique comprising a structured partitioning technique comprising at least one of a binary-tree partitioning technique, a k-d tree partitioning technique, or a codebook representation of a k-d tree partitioning technique; partitioning the at least one tile, coding unit, or superfragment into a plurality of prediction partitions using the chosen partitioning technique; and coding partitioning indicators or codewords associated with the plurality of prediction partitions into a bitstream.
Publication number: BR112015015575A2
Application number: R112015015575
Filing date: 2013-12-24
Publication date: 2020-02-04
Inventors: Puri Atul; N Gokhale Neelesh
Applicant: Intel Corp;
IPC main class:
Patent description:

Descriptive Report of the Invention Patent for CONTENT ADAPTIVE PARTITIONING FOR PREDICTION AND CODING FOR NEXT GENERATION VIDEO.
RELATED APPLICATIONS [001] This application claims the benefit of US Provisional Application No. 61/758,314, filed on January 30, 2013 and entitled NEXT GENERATION VIDEO CODING, the contents of which are hereby fully incorporated.
BACKGROUND [002] A video encoder compresses video information so that more information can be sent over a given bandwidth. The compressed signal may then be transmitted to a receiver having a decoder that decodes or decompresses the signal prior to display.
[003] High Efficiency Video Coding (HEVC) is the latest video compression standard, developed by the Joint Collaborative Team on Video Coding (JCT-VC) formed by the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG). HEVC was developed in response to the previous standard, H.264/AVC (Advanced Video Coding), not providing sufficient compression for evolving higher-resolution video applications. Similar to previous video coding standards, HEVC includes basic functional modules such as intra/inter prediction coding, transform, quantization, in-loop filtering, and entropy coding.
[004] The ongoing HEVC standard may attempt to improve on limitations of the H.264/AVC standard such as limited choices of allowed prediction partitions and coding partitions, limited allowed multiple references and prediction generation, limited transform block sizes and actual transforms, limited mechanisms for reducing coding artifacts, and inefficient entropy encoding techniques. However, the ongoing HEVC standard may use iterative approaches to solve such problems.
BRIEF DESCRIPTION OF THE DRAWINGS [005] The material described in this document is illustrated by way of example and not by way of limitation in the accompanying Figures. For simplicity and clarity of illustration, elements illustrated in the Figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Furthermore, where considered appropriate, reference labels have been repeated among the Figures to indicate corresponding or analogous elements. In the Figures:
[006] FIGURE 1 is an illustrative diagram of an exemplary next generation video encoder;
[007] FIGURE 2 is an illustrative diagram of an exemplary next generation video decoder;
[008] FIGURE 3 illustrates an exemplary video frame having exemplary tiles, coding units, or superfragments for partitioning;
[009] FIGURE 4 illustrates exemplary superfragments of a row of tiles in an exemplary video frame;
[0010] FIGURE 5 illustrates an exemplary segmentation of the region layer of a video frame;
[0011] FIGURES 6 (A) and 6 (B) illustrate an exemplary video frame segmented into region layers and partitioned according to tiles into superfragments;
[0012] FIGURE 7 is a flowchart showing a subset of an exemplary encoding process;
[0013] FIGURE 8 illustrates an exemplary partitioning of a portion of a frame using a binary-tree partitioning technique;
[0014] FIGURE 9 illustrates an exemplary partitioning of a frame portion using a k-d tree partitioning technique;
[0015] FIGURE 10 illustrates an exemplary bitstream;
[0016] FIGURE 11 is a flowchart showing an exemplary decoding process;
[0017] FIGURES 12 (A) and 12 (B) are illustrative diagrams of exemplary encoder subsystems;
[0018] FIGURE 13 is an illustrative diagram of a decoder subsystem;
[0019] FIGURES 14 (A) and 14 (B) together provide a detailed illustration of a combined exemplary video encoder and decoder system and process;
[0020] FIGURE 15 is an illustrative diagram of an exemplary video encoding system;
[0021] FIGURE 16 is an illustrative diagram of an exemplary system;
[0022] FIGURE 17 illustrates an exemplary device;
[0023] FIGURES 18 (A), 18 (B) and 18 (C) illustrate exemplary prediction partitions and coding partitions for a video frame, all arranged in accordance with at least some implementations of the present description.
Detailed Description [0024] One or more modalities or implementations will now be described with reference to the included Figures. Although specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. Persons skilled in the relevant art will recognize that other configurations and arrangements may be used without departing from the spirit and scope of the description. It will be apparent to those skilled in the relevant art that the techniques and/or arrangements described in this document may also be employed in a variety of other systems and applications beyond what is described in this document.
[0025] Although the following description sets forth various implementations that may be manifested in architectures such as system-on-a-chip (SoC) architectures, for example, the implementation of the techniques and/or arrangements described in this document is not restricted to particular architectures and/or computing systems and may be implemented by any architecture and/or computing system for similar purposes. For example, various architectures employing, for example, multiple integrated circuit (IC) chips and/or packages, and/or various computing devices and/or consumer electronics (CE) devices, such as set-top boxes, smart phones, etc., may implement the techniques and/or arrangements described in this document. Furthermore, while the following description may set forth numerous specific details, such as logical implementations, types and interrelationships of system components, choices of logical partitioning/integration, etc., the claimed subject matter may be practiced without such specific details. In other instances, some material, such as, for example, full control structures and complete software instruction sequences, may not be shown in detail in order not to obscure the material described in this document.
[0026] The material described in this document may be implemented in hardware, firmware, software, or any combination thereof. The material described in this document may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (for example, a computing device). For example, a machine-readable medium may include read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical, or other forms of propagated signals (for example, carrier waves, infrared signals, digital signals, etc.); and others.
[0027] References in the specification to an implementation, an exemplary implementation, etc., indicate that the implementation described may include a particular feature, structure, or characteristic, but every implementation may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same implementation. Furthermore, when a particular feature, structure, or characteristic is described in connection with an implementation, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other implementations, whether or not explicitly described in this document.
[0028] Systems, apparatus, articles, and methods are described below related to content adaptive partitioning for prediction and coding for next generation video coding.
[0029] Next generation video (NGV) systems, apparatus, articles, and methods are described below. NGV video coding may incorporate significant content-based adaptivity into the video coding process to achieve higher compression. As discussed above, the H.264/AVC standard may have a variety of limitations, and ongoing attempts to improve on the standard, such as, for example, the HEVC standard, may use iterative approaches to address such limitations. In this document, an NGV system including an encoder and a decoder will be described.
[0030] Furthermore, as discussed, the H.264/AVC standard may include limited choices of prediction partitions and coding partitions. In particular, as discussed in this document, a video frame may be received for coding. In some examples, the video frame may be segmented into tiles, coding units, or superfragments (for example, tiles, coding units, or superfragments may be described as frame portions in this document). For example, a tile or coding unit may be a square or rectangular portion of the video frame. The video frame may be fully divided into a plurality of tiles or coding units, for example. In other examples, the video frame may be segmented into superfragments. For example, a video frame may be segmented into two or more region layers. In some examples, the region layers may represent a foreground, a background, and a middleground of a scene, or the like. In such examples, the video frame may also be divided into tiles. A superfragment may include an individual region-layer portion of a tile. For example, if a tile includes only one region layer, the superfragment may be the entire tile. If a tile includes two region layers, the tile may be divided into two superfragments: a superfragment including the tile portion having the first region layer and a second superfragment including the tile portion having the second region layer, and so on. Superfragments may be of any shape and may be either contiguous or non-contiguous.
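As a concrete illustration of the superfragment construction just described, the sketch below forms superfragments as the per-tile pieces of a pixel-level region-layer map. This is an assumption-level sketch: the function name, the dictionary layout, and the 2x2 tile size are illustrative, not from the patent.

```python
# Hypothetical sketch: a superfragment is the set of pixels belonging to one
# region layer inside one tile; a single-layer tile yields one whole-tile
# superfragment, a multi-layer tile splits into several superfragments.

def superfragments(region_map, tile_size):
    """region_map: 2-D list of region-layer ids per pixel.
    Returns {(tile_row, tile_col, layer_id): [(y, x), ...]}."""
    frags = {}
    for y, row in enumerate(region_map):
        for x, layer in enumerate(row):
            key = (y // tile_size, x // tile_size, layer)
            frags.setdefault(key, []).append((y, x))
    return frags

# A 4x4 frame with two region layers (0 = background, 1 = foreground),
# divided into four 2x2 tiles.
region_map = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]
frags = superfragments(region_map, tile_size=2)
# The top-left tile holds a single region layer, so it forms one whole-tile
# superfragment; the bottom-left tile spans both layers, so it splits in two.
```

Note that the bottom-left superfragment of layer 0 covers three pixels that are not a rectangle, matching the statement that superfragments may be of any shape.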
[0031] In any case, a chosen technique may be determined for prediction partitioning of a tile, coding unit, or superfragment of a video frame. In some examples, the chosen technique may be chosen based on a picture type of the video frame. In other examples, the chosen technique may be chosen based on a characteristic of the tile, coding unit, or superfragment being partitioned. In some examples, the chosen technique may be chosen from a binary-tree partitioning or a k-d tree partitioning. In some NGV implementations, three picture types may be used (although subtypes may also be used): I-pictures (for example, intra compensation only), P-pictures (for example, predictive), or F/B-pictures (for example, functional/bidirectional). As discussed, in some examples, the chosen technique may be based on the picture type of the video frame. For example, if the picture type is an I-picture, the chosen technique may be k-d tree partitioning, and if the picture type is a P-picture or an F-picture, the chosen technique may be binary-tree partitioning. Based on the chosen prediction partitioning technique, the frame portion may be partitioned into any number of prediction partitions.
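The picture-type rule just described can be sketched as a small selection function. The function name and string labels are illustrative assumptions; the mapping itself (I-pictures to k-d tree, P- and F/B-pictures to binary tree) follows the paragraph above.

```python
# Hypothetical sketch of the described selection rule, not the patent's
# normative logic: the partitioning technique follows the picture type.

def choose_partitioning_technique(picture_type):
    if picture_type == "I":           # intra compensation only
        return "kd-tree"
    if picture_type in ("P", "F/B"):  # predictive / functional-bidirectional
        return "binary-tree"
    raise ValueError("unknown picture type: %r" % picture_type)
```

In a fuller sketch the characteristic-based choice mentioned above (partitioning by a property of the tile or superfragment itself) would feed into the same function as a second argument.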
[0032] Several potential or candidate prediction partitionings into prediction partitions may be performed. The prediction partitions of the candidate partitionings may be indexed and transmitted to an encoding controller, which may determine which prediction partitions (for example, which prediction partitioning) to use in various encoding implementations (for example, a variety of partitionings having different prediction partitions may be evaluated using a rate distortion optimization or the like to determine a selected prediction partitioning). Furthermore, the prediction partitions may be used for inter-prediction (for example, motion compensation) or intra-prediction. The data associated with the prediction partitions (for example, the shape and location of the partition in the video frame, or the like) and the inter-prediction or intra-prediction data may be encoded into a bitstream for transmission to a decoder, as discussed in this document, via indicators or codewords or the like.
[0033] Furthermore, via a decoding loop implemented in an encoder, a predicted partition (for example, predicted pixel data associated with a prediction partition) may be generated. The predicted partition and the actual partition (for example, original pixel data) may be differenced to determine a prediction error data partition (for example, an error or residual signal). It may be determined, via a threshold or the like, whether the prediction error data partition needs to be encoded (for example, transform encoded and quantized). In some examples, the prediction error data partition associated with a prediction partition may be coded directly. For example, frames that are I-pictures may be coded without additional sub-partitioning such as coding partitioning. In other examples, the prediction partition may be further partitioned into coding partitions or chunks before coding. For example, frames that are P-pictures or F/B-pictures may be further partitioned before coding (for example, coding partitioned). In some examples, coding partitioning may be performed via binary-tree partitioning. As with prediction partitioning, several potential or candidate coding partitionings may be performed. The coding partitions of the candidate coding partitionings may be indexed and transmitted to an encoding controller, which may determine which coding partitions to use in encoding or the like (for example, a variety of partitionings having different coding partitions may be evaluated, along with various types of transforms in some examples, via a rate distortion optimization or the like to determine a selected coding partitioning). The data associated with the coding partitions (for example, the shape and location of the coding partition, via indicators or codewords or the like) and the associated prediction error data may be transform encoded, quantized, and encoded into a bitstream for transmission to a decoder, as discussed in this document.
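A minimal sketch of binary-tree partitioning along the lines described above: a block is recursively halved along its longer side while its residual energy exceeds a threshold. The energy measure, the stopping rule, and all names are illustrative assumptions, not the patent's normative algorithm.

```python
# Hypothetical sketch: recursive binary-tree coding partitioning of a
# residual block. Each leaf rectangle stands for one coding partition.

def binary_tree_partition(residual, y, x, h, w, threshold, min_size=2):
    """Recursively halve the h-by-w block at (y, x) while its residual
    energy exceeds `threshold`; returns leaf rectangles (y, x, h, w)."""
    energy = sum(residual[y + i][x + j] ** 2 for i in range(h) for j in range(w))
    if energy <= threshold or max(h, w) <= min_size:
        return [(y, x, h, w)]          # leaf: one coding partition
    if h >= w:                         # binary split along the longer side
        return (binary_tree_partition(residual, y, x, h // 2, w, threshold, min_size)
                + binary_tree_partition(residual, y + h // 2, x, h - h // 2, w, threshold, min_size))
    return (binary_tree_partition(residual, y, x, h, w // 2, threshold, min_size)
            + binary_tree_partition(residual, y, x + w // 2, h, w - w // 2, threshold, min_size))

flat = [[0] * 4 for _ in range(4)]
parts_flat = binary_tree_partition(flat, 0, 0, 4, 4, threshold=10)

busy = [[0] * 4 for _ in range(4)]
busy[0][0] = 100                       # energy concentrated in one corner
parts_busy = binary_tree_partition(busy, 0, 0, 4, 4, threshold=10)
# A flat residual stays a single partition; the busy residual splits more
# finely around the high-energy corner while the quiet areas stay coarse.
```

In an encoder the candidate partitionings produced this way would then be scored (for example, via rate distortion optimization) rather than accepted directly.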
[0034] In some examples, a decoder may receive and decode the bitstream to determine inter-prediction or intra-prediction data associated with a prediction partition, data defining the prediction partition (for example, indicators or codewords, as discussed), data associated with an individual prediction error data partition (for example, quantized coefficients), or the like. Inter-prediction or intra-prediction may be performed for the prediction partition as discussed in this document, and further processing may be performed to generate video frames for presentation. Furthermore, the data defining the prediction error data partition (for example, coding partitions of prediction error data partitions) may be inverse quantized and inverse transformed to generate decoded coding partitions, which may be combined to generate decoded prediction error data partitions. Decoded prediction error data partitions may be added to predicted (decoded) partitions to generate a reconstructed partition, which may be assembled with other reconstructed partition(s) to generate a tile, coding unit, or superfragment. Optional deblock filtering and/or quality restoration filtering may be applied to the tile, coding unit, or superfragment, which may be assembled with other tile(s), coding unit(s), or superfragment(s) to generate a decoded video frame. The decoded video frame may be used for decoding other frames and/or transmitted for presentation via a display device.
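The reconstruction step just described, adding decoded prediction error data to the predicted (decoded) partition, can be sketched as follows. The nested-list pixel layout and the 8-bit clipping range are illustrative assumptions.

```python
# Hypothetical sketch of partition reconstruction: predicted samples plus
# decoded residual samples, clipped back to the valid 8-bit range.

def reconstruct_partition(predicted, residual, lo=0, hi=255):
    return [[min(hi, max(lo, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(predicted, residual)]

predicted = [[120, 130], [140, 250]]
residual  = [[  5,  -8], [ -3,  20]]
recon = reconstruct_partition(predicted, residual)
# -> [[125, 122], [137, 255]]  (the last sample is clipped at 255)
```

The same addition appears in the encoder's local decoding loop (at adder 115, discussed below), which is what keeps encoder and decoder reconstructions in step.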
[0035] As used in this document, the term coder may refer to an encoder and/or a decoder. Similarly, as used in this document, the term coding may refer to performing video encoding via an encoder and/or performing video decoding via a decoder. For example, a video encoder and a video decoder may both be examples of coders capable of coding video data. Furthermore, as used in this document, the term codec may refer to any process, program, or set of operations, such as, for example, any combination of software, firmware, and/or hardware, that may implement an encoder and/or a decoder. Furthermore, as used in this document, the phrase video data may refer to any type of data associated with video coding, such as, for example, video frames, image data, encoded bitstream data, or the like.
[0036] FIGURE 1 is an illustrative diagram of an exemplary next generation video encoder 100, arranged in accordance with at least some implementations of the present description. As shown, encoder 100 may receive input video 101. Input video 101 may include any suitable input video for encoding, such as, for example, input frames of a video sequence. As shown, input video 101 may be received via a content pre-analyzer module 102. Content pre-analyzer module 102 may be configured to perform an analysis of the content of the video frames of input video 101 to determine various types of parameters for improving video coding efficiency and speed performance. For example, content pre-analyzer module 102 may determine horizontal and vertical gradient information (for example, Rs, Cs), variance, spatial complexity per picture, temporal complexity per picture, scene change detection, motion range estimation, gain detection, prediction distance estimation, number-of-objects estimation, region boundary detection, spatial complexity map computation, focus estimation, film grain estimation, or the like. The parameters generated via content pre-analyzer module 102 may be used by encoder 100 (for example, via encoder controller 103) and/or quantized and communicated to a decoder. As shown, video frames and/or other data may be transmitted from content pre-analyzer module 102 to adaptive picture organizer module 104, which may determine the picture type (for example, I-, P-, or F/B-picture) of each video frame and reorder the video frames as needed. In some examples, adaptive picture organizer module 104 may include a frame portion generator configured to generate frame portions. In some examples, content pre-analyzer module 102 and adaptive picture organizer module 104 may together be considered a pre-analyzer subsystem of encoder 100.
[0037] As shown, video frames and/or other data may be transmitted from adaptive picture organizer module 104 to prediction partitions generator module 105. In some examples, prediction partitions generator module 105 may divide a frame or picture into tiles, coding units, or superfragments, or the like. In some examples, an additional module (for example, between modules 104 and 105) may be provided for dividing a frame or picture into tiles, coding units, or superfragments. Prediction partitions generator module 105 may divide each tile, coding unit, or superfragment into potential (for example, candidate) prediction partitionings or partitions. In some examples, the potential prediction partitionings may be determined using a partitioning technique such as, for example, a k-d tree partitioning technique, a binary-tree partitioning technique, or the like, which may be determined based on the picture type (for example, I-, P-, or F/B-picture) of the individual video frame, a characteristic of the frame portion being partitioned, or the like. In some examples, the determined potential prediction partitionings may be partitions for prediction (for example, inter-prediction or intra-prediction) and may be described as prediction partitions or prediction blocks or the like.
[0038] In some examples, a selected prediction partitioning (for example, prediction partitions) may be determined from the potential prediction partitionings. For example, the selected prediction partitioning may be based on determining, for each potential prediction partitioning, predictions using characteristics- and motion-based multi-reference prediction or intra-prediction, and determining prediction parameters. For each potential prediction partitioning, a potential prediction error may be determined by differencing the original pixels with the prediction pixels, and the selected prediction partitioning may be the potential prediction partitioning with the minimum prediction error. In other examples, the selected prediction partitioning may be determined based on a rate distortion optimization including a weighted score based on the number of bits for coding the partitioning and a prediction error associated with the prediction partitioning.
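The rate distortion optimization just described can be sketched as scoring each candidate partitioning with a weighted cost J = D + lambda * R (prediction error plus a weighted bit count) and keeping the minimum. The candidate tuples and the lambda value below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of rate-distortion-based selection among candidate
# partitionings: J = D + lam * R, lowest cost wins.

def select_partitioning(candidates, lam):
    """candidates: iterable of (name, distortion, bits). Returns the name
    minimizing the rate-distortion cost J = D + lam * R."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

candidates = [
    ("coarse (few partitions)", 900.0, 40),   # high error, cheap to signal
    ("fine (many partitions)",  200.0, 300),  # low error, expensive to signal
    ("medium",                  400.0, 120),
]
best = select_partitioning(candidates, lam=2.0)
# coarse: 900 + 80 = 980; fine: 200 + 600 = 800; medium: 400 + 240 = 640
# -> "medium" wins at this lambda
```

Lowering lambda shifts the balance toward distortion, so the fine partitioning would win instead; this is the trade-off the weighted score encodes.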
[0039] As shown, the original pixels of the selected prediction partitioning (for example, prediction partitions of a current frame) may be differenced with predicted partitions (for example, a prediction of the pixel data of the prediction partition of the current frame based on a reference frame or frames and other predictive data such as inter-prediction or intra-prediction data) at differencer 106. The determination of the predicted partitions will be described further below and may include a decoding loop as shown in FIGURE 1. Any residuals or residual data (for example, partition prediction error data) from the differencing may be transmitted to coding partitions generator module 107. In some examples, such as for intra-prediction of prediction partitions in any picture type (I-pictures, F/B-pictures, or P-pictures), coding partitions generator module 107 may be bypassed via switches 107a and 107b. In such examples, only a single level of partitioning may be performed. Such partitioning may be described as prediction partitioning (as discussed) or coding partitioning or both. In various examples, such partitioning may be performed via prediction partitions generator module 105 (as discussed) or, as discussed further in this document, such partitioning may be performed via a k-d tree intra-prediction/coding partitioner module or a binary-tree intra-prediction/coding partitioner module implemented via coding partitions generator module 107.
[0040] In some examples, partition prediction error data, if any, may not be significant enough to warrant encoding. In other examples, where it may be desirable to encode the partition prediction error data and the partition prediction error data is associated with inter-prediction or the like, coding partitions generator module 107 may determine coding partitions of the prediction partitions. In some examples, coding partitions generator module 107 may not be needed, as the partition may be encoded without coding partitioning (for example, as shown via the bypass path available through switches 107a and 107b). With or without coding partitioning, the partition prediction error data (which may subsequently be described as coding partitions in either event) may be transmitted to adaptive transform module 108 in the event the residuals or residual data require encoding. In some examples, prediction partitions generator module 105 and coding partitions generator module 107 may together be considered a partitioner subsystem of encoder 100. In various examples, coding partitions generator module 107 may operate on partition prediction error data, original pixel data, residual data, or wavelet data.
[0041] Coding partitions generator module 107 may generate potential coding partitionings (for example, coding partitions), for example, of partition prediction error data using k-d tree and/or binary-tree partitioning techniques or the like. In some examples, the potential coding partitions may be transformed using adaptive or fixed transforms with various block sizes via adaptive transform module 108, and a selected coding partitioning and selected transforms (for example, adaptive or fixed) may be determined based on a rate distortion optimization or on another basis. In some examples, the selected coding partitioning and/or the selected transform(s) may be determined based on a predetermined selection method based on coding partition size or the like.
[0042] For example, adaptive transform module 108 may include a first portion or component for performing a parametric transform to allow locally optimal transform coding of small to medium sized blocks, and a second portion or component for performing globally stable, low-overhead transform coding using a fixed transform, such as a discrete cosine transform (DCT) or a picture-based transform from a variety of transforms including parametric transforms, or some other configuration as discussed further in this document. In some examples, for locally optimal transform coding, a Parametric Haar Transform (PHT) may be performed, as discussed further in this document. In some examples, transforms may be performed on 2D blocks of rectangular sizes between about 4 x 4 pixels and 64 x 64 pixels, with actual sizes depending on a number of factors, such as whether the transformed data is luma or chroma, or inter or intra, or whether the determined transform used is PHT or DCT or the like.
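As an illustration of fixed-transform coding with one of the transforms named above, the sketch below applies a textbook orthonormal 2-D DCT-II to a small residual block. It is a generic DCT, not the patent's particular transform design, and the 4 x 4 size is just the smallest block size mentioned.

```python
# Hypothetical sketch: orthonormal 2-D DCT-II on an n-by-n block, the kind of
# fixed transform the paragraph above contrasts with the parametric PHT.
import math

def dct2(block):
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[y][x]
                    * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * x + 1) * v * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

flat = [[8] * 4 for _ in range(4)]
coeffs = dct2(flat)
# A flat block compacts into the single DC coefficient: coeffs[0][0] == 32.0,
# with every other coefficient (numerically) zero.
```

This energy compaction is why transform coding precedes quantization: most coefficients of smooth residual blocks quantize to zero.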
[0043] As shown, the resultant transform coefficients may be transmitted to adaptive quantization module 109. Adaptive quantization module 109 may quantize the resultant transform coefficients. Furthermore, any data associated with a parametric transform, as needed, may be transmitted either to adaptive quantization module 109 (if quantization is desired) or to adaptive entropy encoder module 110. Also as shown in FIGURE 1, the quantized coefficients may be scanned and transmitted to adaptive entropy encoder module 110. Adaptive entropy encoder module 110 may entropy encode the quantized coefficients and include them in output bitstream 111. In some examples, adaptive transform module 108 and adaptive quantization module 109 may together be considered a transform encoder subsystem of encoder 100.
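The quantize/de-quantize pair implied by adaptive quantization module 109 (and its inverse in the local decoding loop below) can be sketched with a simple uniform scalar quantizer. The quantizer form and the step size are illustrative assumptions; this is where the coding loss enters.

```python
# Hypothetical sketch: uniform scalar quantization of transform coefficients
# and the matching inverse (de-scaling). Rounding makes the round trip lossy.

def quantize(coeffs, qstep):
    return [[int(round(c / qstep)) for c in row] for row in coeffs]

def dequantize(levels, qstep):
    return [[l * qstep for l in row] for row in levels]

coeffs = [[32.0, 1.2], [-7.9, 0.4]]
levels  = quantize(coeffs, qstep=4)    # -> [[8, 0], [-2, 0]]
dequant = dequantize(levels, qstep=4)  # -> [[32, 0], [-8, 0]]  (lossy)
```

Note how the small coefficients quantize to zero, which is exactly what makes the subsequent scan and entropy coding effective.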
[0044] As also shown in FIGURE 1, encoder 100 includes a local decoding loop. The local decoding loop may begin at adaptive inverse quantization module 112. Adaptive inverse quantization module 112 may be configured to perform the opposite operation(s) of adaptive quantization module 109, such that an inverse scan may be performed and quantized coefficients may be de-scaled to determine transform coefficients. Such an adaptive quantization operation may be lossy, for example. As shown, the transform coefficients may be transmitted to an adaptive inverse transform module 113. Adaptive inverse transform module 113 may perform the inverse of the transform as performed by adaptive transform module 108, for example, to generate residuals or residual values or partition prediction error data (or original data or wavelet data, as discussed) associated with coding partitions (or prediction partitions if coding partitions are not used or only one level of partitioning is employed; such partitions may be considered coding or prediction partitions). In some examples, adaptive inverse quantization module 112 and adaptive inverse transform module 113 may together be considered a transform decoder subsystem of encoder 100.
[0045] As shown, the partition prediction error data (or the like) may be transmitted to optional coding partitions assembler 114. Coding partitions assembler 114 may assemble coding partitions into decoded prediction partitions as needed (as shown, in some examples, coding partitions assembler 114 may be skipped via switches 114a and 114b, such that decoded prediction partitions may have been generated at adaptive inverse transform module 113) to generate prediction partitions of prediction error data or decoded residual prediction partitions or the like.
[0046] As shown, the decoded residual prediction partitions may be added to predicted partitions (for example, prediction pixel data) at adder 115 to generate reconstructed prediction partitions. The reconstructed prediction partitions may be transmitted to prediction partitions assembler 116. Prediction partitions assembler 116 may assemble the reconstructed prediction partitions to generate reconstructed tiles, coding units, or superfragments. In some examples, coding partitions assembler module 114 and prediction partitions assembler module 116 may together be considered an un-partitioner subsystem of encoder 100.
[0047] The reconstructed tiles, coding units, or superfragments may be transmitted to blockiness analyzer and deblock filtering module 117. Blockiness analyzer and deblock filtering module 117 may deblock and dither the reconstructed tiles, coding units, or superfragments (or prediction partitions of tiles, coding units, or superfragments). The generated deblock and dither filter parameters may be used for the current filter operation and/or coded into bitstream 111 for use by a decoder, for example. The output of blockiness analyzer and deblock filtering module 117 may be transmitted to a quality analyzer and quality restoration filtering module 118. Quality analyzer and quality restoration filtering module 118 may determine QR filtering parameters and use the determined parameters for filtering. The QR filtering parameters may also be coded into bitstream 111 for use by the decoder. As shown, the output of quality analyzer and quality restoration filtering module 118 may be transmitted to decoded picture buffer 119. In some examples, the output of quality analyzer and quality restoration filtering module 118 may be a final reconstructed frame that may be used for prediction for coding other frames (for example, the final reconstructed frame may be a reference frame or the like). In some examples, blockiness analyzer and deblock filtering module 117 and quality analyzer and quality restoration filtering module 118 may together be considered a filtering subsystem of encoder 100.
[0048] In encoder 100, prediction operations may include inter-prediction and/or intra-prediction. As shown in FIGURE 1, inter-prediction may be performed by one or more modules including morph analyzer and morphed picture generation module 120, synthesizing analyzer and synthesized picture generation module 121, and characteristics and motion filtering predictor module 123. Morph analyzer and morphed picture generation module 120 may analyze a current picture to determine parameters for changes in gain, changes in dominant motion, changes in registration, and changes in blur with respect to a reference frame or frames with which it is to be coded. The determined morphing parameters may be quantized/de-quantized and used (for example, by morph analyzer and morphed picture generation module 120) to generate morphed reference frames that may be used by motion estimator module 122 for computing motion vectors for efficient motion (and characteristics) compensated prediction of a current frame. Synthesizing analyzer and synthesized picture generation module 121 may generate super resolution (SR) pictures and projected interpolation (PI) pictures or the like for determining motion vectors for efficient motion compensated prediction in these frames.
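One of the morphing operations just described, gain-change compensation, can be sketched as a global linear remap of a reference frame before motion estimation, so that a brightness change between frames does not inflate the motion-compensated residual. The gain/offset values below (and how an encoder would estimate them) are illustrative assumptions.

```python
# Hypothetical sketch: gain-change morphing of a reference frame,
# new_pixel = gain * pixel + offset, clipped to the 8-bit range.

def gain_compensate(reference, gain, offset, lo=0, hi=255):
    return [[min(hi, max(lo, int(round(gain * p + offset)))) for p in row]
            for row in reference]

reference = [[100, 120], [140, 160]]
morphed = gain_compensate(reference, gain=1.1, offset=-5)
# -> [[105, 127], [149, 171]]
```

Motion estimation against the morphed frame then only has to account for actual motion, which is the point of separating gain changes from displacement.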
[0049] Motion estimator module 122 may generate motion vector data based on the morphed reference frame(s) and/or the super resolution (SR) and projected interpolation (PI) pictures along with the current frame. In some examples, motion estimator module 122 may be considered an inter-prediction module. For example, the motion vector data may be used for inter-prediction. If inter-prediction is applied, characteristics and motion filtering predictor module 123 may apply motion compensation as part of the local decode loop as discussed.
[0050] Intra-prediction may be performed by intra-directional prediction analyzer and prediction generation module 124. Intra-directional prediction analyzer and prediction generation module 124 may be configured to perform spatial directional prediction and may use decoded neighboring partitions. In some examples, both the determination of direction and the generation of prediction may be performed by intra-directional prediction analyzer and prediction generation module 124. In some examples, intra-directional prediction analyzer and prediction generation module 124 may be considered an intra-prediction module.
[0051] As shown in FIGURE 1, prediction modes and reference types analyzer module 125 may allow for the selection of prediction modes from among skip, auto, inter, split, multi, and intra, for each prediction partition of a tile (or coding unit or super-fragment), all of which may apply to P-pictures and F/B-pictures. In addition to prediction modes, it also allows for the selection of reference types that can differ depending on the inter or multi mode, as well as for P-pictures and F/B-pictures. The prediction signal at the output of prediction modes and reference types analyzer module 125 may be filtered by prediction analyzer and prediction fusion filtering module 126. Prediction analyzer and prediction fusion filtering module 126 may determine the parameters (for example, filtering coefficients, frequency, overhead) to use for filtering and may perform the filtering. In some examples, filtering the prediction signal may fuse different types of signals representing different modes (for example, intra, inter, multi, split, skip, and auto). In some examples, intra-prediction signals may be different from all other types of inter-prediction signal(s), such that proper filtering may greatly enhance coding efficiency. In some examples, the filtering parameters may be encoded in bitstream 111 for use by a decoder. The filtered prediction signal may provide the second input (for example, prediction partition(s)) to differencer 106, as discussed above, which may determine the prediction difference signal (for example, the partition prediction error) for the coding discussed earlier. Further, the same filtered prediction signal may provide the second input to adder 115, also as discussed above. As discussed, output bitstream 111 may provide an efficiently encoded bitstream for use by a decoder for the display of video.
[0052] FIGURE 2 is an illustrative diagram of an example next generation video decoder 200, arranged in accordance with at least some implementations of the present disclosure. As shown, decoder 200 may receive an input bitstream 201. In some examples, input bitstream 201 may be encoded via encoder 100 and/or via the encoding techniques discussed herein. As shown, input bitstream 201 may be received by an adaptive entropy decoder module 202. Adaptive entropy decoder module 202 may decode the various types of encoded data (for example, overhead, motion vectors, transform coefficients, etc.). In some examples, adaptive entropy decoder 202 may use a variable-length decoding technique. In some examples, adaptive entropy decoder 202 may perform the inverse operation(s) of adaptive entropy encoder module 110 discussed above.
[0053] The decoded data may be transmitted to adaptive inverse quantize module 203. Adaptive inverse quantize module 203 may be configured to inverse scan and de-scale quantized coefficients to determine transform coefficients. Such an adaptive quantize operation may be lossy, for example. In some examples, adaptive inverse quantize module 203 may be configured to perform the opposite operation of adaptive quantizer module 109 (for example, substantially the same operations as adaptive inverse quantize module 112). As shown, the transform coefficients (and, in some examples, transform data for use in a parametric transform) may be transmitted to an adaptive inverse transform module 204. Adaptive inverse transform module 204 may perform an inverse transform on the transform coefficients to generate residuals or residual values or partition prediction error data (or original data or wavelet data) associated with coding partitions. In some examples, adaptive inverse transform module 204 may be configured to perform the opposite operation of adaptive transform module 108 (for example, substantially the same operations as adaptive inverse transform module 113). In some examples, adaptive inverse transform module 204 may perform an inverse transform based on other previously decoded data, such as, for example, decoded neighboring partitions. In some examples, adaptive inverse quantize module 203 and adaptive inverse transform module 204 may together be considered a transform decoder subsystem of decoder 200.
[0054] As shown, the residuals or residual values or partition prediction error data may be transmitted to coding partitions assembler 205. Coding partitions assembler 205 may assemble coding partitions into decoded prediction partitions as needed (as shown, in some examples, coding partitions assembler 205 may be skipped via switches 205a and 205b, such that decoded prediction partitions may have been generated at adaptive inverse transform module 204). The decoded prediction partitions of prediction error data (for example, prediction partition residuals) may be added to predicted partitions (for example, prediction pixel data) at adder 206 to generate reconstructed prediction partitions. The reconstructed prediction partitions may be transmitted to prediction partitions assembler 207. Prediction partitions assembler 207 may assemble the reconstructed prediction partitions to generate reconstructed tiles, coding units, or super-fragments. In some examples, coding partitions assembler module 205 and prediction partitions assembler module 207 may together be considered an un-partitioner subsystem of decoder 200.
[0055] The reconstructed tiles, coding units, or super-fragments may be transmitted to deblock filtering module 208. Deblock filtering module 208 may deblock and dither the reconstructed tiles, coding units, or super-fragments (or the prediction partitions of the tiles, coding units, or super-fragments). The generated deblock and dither filter parameters may be determined from input bitstream 201, for example. The output of deblock filtering module 208 may be transmitted to a quality restoration filtering module 209. Quality restoration filtering module 209 may apply quality filtering based on QR parameters, which may be determined from input bitstream 201, for example. As shown in FIGURE 2, the output of quality restoration filtering module 209 may be transmitted to decoded picture buffer 210. In some examples, the output of quality restoration filtering module 209 may be a final reconstructed frame that may be used for prediction when coding other frames (for example, the final reconstructed frame may be a reference frame or the like). In some examples, deblock filtering module 208 and quality restoration filtering module 209 may together be considered a filtering subsystem of decoder 200.
[0056] As discussed, compensation due to prediction operations may include inter-prediction and/or intra-prediction compensation. As shown, inter-prediction compensation may be performed by one or more modules including morphed picture generation module 211, synthesized picture generation module 212, and characteristics and motion compensated filtering predictor module 213. Morphed picture generation module 211 may use de-quantized morphing parameters (for example, determined from input bitstream 201) to generate morphed reference frames. Synthesized picture generation module 212 may generate super resolution (SR) pictures and projected interpolation (PI) pictures or the like based on parameters determined from input bitstream 201. If inter-prediction is applied, characteristics and motion compensated filtering predictor module 213 may apply motion compensation based on the received frames and motion vector data or the like in input bitstream 201.
[0057] Intra-prediction compensation may be performed by intra-directional prediction generation module 214. Intra-directional prediction generation module 214 may be configured to perform spatial directional prediction and may use decoded neighboring partitions according to intra-prediction data in input bitstream 201.
[0058] As shown in FIGURE 2, prediction modes selector module 215 may determine a prediction mode selection from among skip, auto, inter, multi, and intra, for each prediction partition of a tile or the like, all of which may apply to P-pictures and F/B-pictures, based on mode selection data in input bitstream 201. In addition to prediction modes, it also allows for the selection of reference types that can differ depending on the inter or multi mode, as well as for P-pictures and F/B-pictures. The prediction signal at the output of prediction modes selector module 215 may be filtered by prediction fusion filtering module 216. Prediction fusion filtering module 216 may perform filtering based on parameters (for example, filtering coefficients, frequency, overhead) determined via input bitstream 201. In some examples, filtering the prediction signal may fuse different types of signals representing different modes (for example, intra, inter, multi, skip, and auto). In some examples, intra-prediction signals may be different from all other types of inter-prediction signal(s), such that proper filtering may greatly enhance coding efficiency. The filtered prediction signal may provide the second input (for example, prediction partition(s)) to adder 206, as discussed above.
[0059] As discussed, the output of quality restoration filtering module 209 may be a final reconstructed frame. Final reconstructed frames may be transmitted to an adaptive picture re-organizer 217, which may re-order or re-organize frames as needed based on ordering parameters in input bitstream 201. Re-ordered frames may be transmitted to content post-restorer module 218. Content post-restorer module 218 may be an optional module configured to perform further improvement of the perceptual quality of the decoded video. The improvement processing may be performed in response to quality improvement parameters in input bitstream 201, or it may be performed as a standalone operation. In some examples, content post-restorer module 218 may apply parameters to improve quality, such as, for example, an estimation of film grain noise or residual blockiness reduction (for example, even after the deblocking operations discussed with respect to deblock filtering module 208). As shown, decoder 200 may provide display video 219, which may be configured for display via a display device (not shown).
[0060] As discussed, in some examples, encoder 100 and/or decoder 200 may implement techniques related to content adaptive partitioning for prediction and coding for next generation video coding. In some examples, content adaptive partitioning for prediction may be performed by prediction partitions generator module 105 of encoder 100. In some examples, content adaptive partitioning for coding may be performed by coding partitions generator module 107 of encoder 100. In some examples, content adaptive partitioning for prediction for inter-prediction or the like may be performed by prediction partitions generator module 105, and content adaptive partitioning for coding for inter-prediction or the like may be performed by coding partitions generator module 107 of encoder 100. In some examples, content adaptive partitioning for prediction/coding (for example, only one layer of partitioning) for intra-prediction may be performed by prediction partitions generator module 105 or coding partitions generator module 107 of encoder 100. Further, in some examples, based on the encoded prediction partitioning and/or coding partitioning, coding partitions assembler module 114 of encoder 100 and/or coding partitions assembler module 205 of decoder 200 may assemble coding partitions to form prediction partitions. Also, in some examples, prediction partitions assembler 116 of encoder 100 and/or prediction partitions assembler 207 of decoder 200 may assemble reconstructed prediction partitions to form tiles or super-fragments, which may be assembled to generate frames or pictures. As discussed, the various prediction partitions, coding partitions, tiles, coding units, or super-fragments may be used for inter-prediction, intra-prediction, other enhancement of coding efficiency, or image or video enhancement as discussed herein.
[0061] While FIGURES 1 and 2 illustrate particular encoder and decoder modules, various other coding modules or components not depicted may also be utilized in accordance with the present disclosure. Further, the present disclosure is not limited to the particular components illustrated in FIGURES 1 and 2 and/or to the manner in which the various components are arranged. Various components of the systems described herein may be implemented in software, firmware, and/or hardware and/or any combination thereof. For example, various components of encoder 100 and/or decoder 200 may be provided, at least in part, by hardware of a computing System-on-a-Chip (SoC), such as may be found in a computing system such as, for example, a mobile phone.
[0062] Further, it may be recognized that encoder 100 may be associated with and/or provided by a content provider system including, for example, a video content server system, and that output bitstream 111 may be transmitted or conveyed to decoders, such as, for example, decoder 200, via various communications components and/or systems such as transceivers, antennae, network systems, and the like not depicted in FIGURES 1 and 2. It may also be recognized that decoder 200 may be associated with a client system, such as a computing device (for example, a desktop computer, laptop computer, tablet computer, convertible laptop, mobile phone, or the like), that is remote from encoder 100 and that receives input bitstream 201 via various communications components and/or systems such as transceivers, antennae, network systems, and the like not depicted in FIGURES 1 and 2. Therefore, in various implementations, encoder 100 and decoder 200 may be implemented either together or independently of one another.
[0063] FIGURE 3 illustrates an example video frame 310 having example tiles, coding units, or super-fragments for partitioning, arranged in accordance with at least some implementations of the present disclosure. Video frame 310 may include any video image, frame, picture, or data or the like for coding. In the illustrated example, video frame 310 includes a video frame from the "Flower" test sequence for illustrative purposes. As discussed, video frame 310 may be segmented into frame portions (for example, tiles, coding units, or super-fragments). The frame portions may then be partitioned as will be discussed further below. Video frame 310 may be divided into frame portions using any suitable technique or techniques. In some examples, video frame 310 may be divided into tiles 320-1 through 320-30 (in FIGURE 3, not all tiles are labeled for clarity of presentation) via tile boundaries 330, such that tiles 320-1 through 320-30 or similar coding units may be the frame portions for partitioning. Video frame 310 may include any number of tiles 320, and tiles 320 may be any size. In some examples, tiles 320 may be 64 x 64 pixels. Further, tiles 320 may have any shape. In various examples, tiles 320 may be square or rectangular. Also, as shown, tiles 320 may be of different shapes and sizes within frame 310. For example, tile 320-3 may be square and 64 x 64 pixels, tile 320-12 may be rectangular and 32 x 64 pixels, tile 320-30 may be square and 32 x 32 pixels, and so on.
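As a concrete illustration of the tiling step just described, the following sketch divides a frame of arbitrary dimensions into tiles of at most a given size. Edge tiles shrink to fit, which is one plausible way a frame can end up with mixed tile sizes such as the 32 x 64 and 32 x 32 tiles described above; the function name and the shrink-to-fit rule are illustrative assumptions, not the codec's normative behavior.

```python
def tile_frame(frame_w, frame_h, tile_size=64):
    """Divide a frame into tiles of at most tile_size x tile_size pixels.

    Returns (x, y, width, height) tuples. Tiles on the right and bottom
    edges shrink to fit the frame dimensions."""
    tiles = []
    for y in range(0, frame_h, tile_size):
        for x in range(0, frame_w, tile_size):
            tiles.append((x, y,
                          min(tile_size, frame_w - x),   # clip width at frame edge
                          min(tile_size, frame_h - y)))  # clip height at frame edge
    return tiles
```

For example, a 160 x 96 frame yields full 64 x 64 tiles plus 32 x 64 tiles on the right edge, 64 x 32 tiles on the bottom edge, and a 32 x 32 tile in the corner.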
[0064] In other examples, the frame portions for partitioning may be super-fragments. For example, to determine super-fragments, video frame 310 may be segmented into one or more region layers. The segmentation may be performed at any precision (for example, pixel resolution) and quantized to any resolution based on bit cost constraints. For example, the segmentation may be performed at 4-pel, 8-pel, or 16-pel accuracy (for example, an accuracy of 4 pixels, 8 pixels, or 16 pixels) or the like.
Referring now to FIGURE 5, the segmentation of a video frame into region layers is illustrated.
[0065] FIGURE 5 illustrates example region layer segmentation of video frame 310, arranged in accordance with at least some implementations of the present disclosure. As shown in FIGURE 5, video frame 310 may be segmented into one or more region layers. In the example illustrated in FIGURE 5, video frame 310 is segmented into two region layers: region layer 510 and region layer 520. In FIGURE 5, region layer 510 includes the video frame segments shown without a marking and region layer 520 includes the video frame segments marked with a dot. For example, region layer 510 may represent the background portions of video frame 310 and region layer 520 may represent the foreground portions of video frame 310. In some examples, region layers may represent a foreground, a background, and a middle-ground (or multiple middle-grounds) of a scene or the like. In some examples, video frame 310 may include a single region layer. In some examples, video frame 310 may include 3, 4, 5, or more region layers. In some examples, the segmentation of video frame 310 may be performed by prediction partitions generator module 105 (see FIGURE 1). In some examples, the segmentation of video frame 310 may be performed by another module (for example, a tiles, coding units, or super-fragments generator module) inserted between adaptive picture organizer 104 and prediction partitions generator 105. In some examples, the segmentation of video frame 310 may be performed by adaptive picture organizer module 104 (or a tiles, coding units, or super-fragments generator module of adaptive picture organizer module 104, for example). The segmentation may be performed using any suitable technique or techniques. In some examples, the segmentation may include a symbol-run coding technique.
[0066] Further, the region boundaries (for example, the boundaries between region layer 510 and region layer 520 or the like) may be coded for use by encoder 100 and/or decoder 200. The region boundary coding may be performed using any suitable technique or techniques. In some examples, the region boundary coding may include a symbol-run coding technique. In some examples, the region boundary coding may include generating a codebook that approximates the region boundaries on a tile grid. For example, the tile grid (which may or may not correspond to tiles 320 of FIGURE 3) may be an equally spaced tile grid of 32 x 32 pixels or 64 x 64 pixels or the like.
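One plausible reading of the symbol-run coding mentioned above is simple run-length coding of the per-block region symbols along a raster scan of the grid; the sketch below illustrates that reading only, and the actual entropy coding used by the codec is not specified in this passage.

```python
def symbol_run_encode(symbols):
    """Run-length code a raster-scan sequence of region-layer symbols
    into (symbol, run_length) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([s, 1])         # start a new run
    return [tuple(r) for r in runs]

def symbol_run_decode(runs):
    """Invert symbol_run_encode, reproducing the raster-scan sequence."""
    out = []
    for s, n in runs:
        out.extend([s] * n)
    return out
```

A scan such as `[0, 0, 0, 1, 1, 0]` becomes `[(0, 3), (1, 2), (0, 1)]`, which a decoder can expand back into the original symbol sequence.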
[0067] Returning to FIGURE 3, region layer 510 and region layer 520 are shown such that region layer 520 is shown overlaying the imagery of video frame 310 and region layer 510 is shown not overlaying the imagery of video frame 310. Further, as discussed, video frame 310 may be divided into tiles 320-1 through 320-30. In some examples, the frame portions for partitioning may include super-fragments that include an individual region layer portion of a tile, as illustrated in FIGURE 4.

[0068] FIGURE 4 illustrates example super-fragments 401 through 411 of a row of tiles 320-13 through 320-18 of example video frame 310, arranged in accordance with at least some implementations of the present disclosure. As shown, super-fragments 401 through 411 may include the portions of a tile within a region layer. For example, super-fragment 401 may include the portion of tile 320-13 in region layer 510, super-fragment 402 may include the portion of tile 320-13 in region layer 520, super-fragment 403 may include the portion of tile 320-14 in region layer 510, and so on. As shown, a super-fragment may have substantially any shape and size (limited only by the precision of the described segmentation operation). For example, super-fragments 403, 410, and 411 may be the entireties of tiles 320-14, 320-17, and 320-18, respectively, such that those super-fragments have the same shape as their respective tiles. Further, super-fragments 401 and 404 through 409 illustrate a variety of possible shapes, although many others are possible. Also, super-fragment 404 illustrates that a super-fragment need not be contiguous.
[0069] FIGURES 6(A) and 6(B) illustrate an example video frame 610 segmented into region layers 620, 630, 640 and partitioned according to tiles 650a through 650d into super-fragments 661 through 669, arranged in accordance with at least some implementations of the present disclosure. As shown in FIGURE 6(A) and as discussed above, video frame 610 (a portion of a video frame is illustrated for clarity of presentation) may be segmented into region layers 620, 630, 640 using any suitable technique, such as, for example, a symbol-run coding performed by prediction partitions generator module 105 or adaptive picture organizer module 104. In some examples, region layers 620, 630, 640 may represent a foreground, a background, and a middle-ground of a scene or the like. As shown in FIGURE 6(B), region layers 620, 630, 640 may be overlaid or combined or the like with tiles 650a through 650d, which may be defined with respect to video frame 610 as described above (for example, video frame 610 may be divided into tiles 650a through 650d), to define super-fragments 661 through 669 such that each of super-fragments 661 through 669 may include the portions of a tile in or within a region layer.
[0070] For example, super-fragment 661 may include the portion of tile 650a in or within region layer 620, super-fragment 662 may include the portion of tile 650a in or within region layer 630, super-fragment 663 may include the portion of tile 650b in or within region layer 620 (for example, all of tile 650b), super-fragment 664 may include the portion of tile 650c in or within region layer 630, super-fragment 665 may include the portion of tile 650c in or within region layer 620, super-fragment 666 may include the portion of tile 650c in or within region layer 640, super-fragment 667 may include the portion of tile 650d in or within region layer 630, super-fragment 668 may include the portion of tile 650d in or within region layer 620, and super-fragment 669 may include the portion of tile 650d in or within region layer 640. Note that in FIGURE 6(B), the super-fragment boundaries are defined both by solid lines representing region layer boundaries and by dotted lines representing tile boundaries.
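The definition above, in which each super-fragment is the piece of one tile lying within one region layer, can be sketched as follows. The per-pixel label-map input and the coordinate-set output are illustrative representation choices, not taken from the codec itself.

```python
def superfragments(region_labels, tile_size):
    """Form super-fragments as the per-tile pieces of each region layer.

    region_labels: list of rows, each row a list of per-pixel region-layer
    ids (as produced by the segmentation step). Every non-empty
    (tile, region layer) pair becomes one super-fragment, keyed by
    ((tile_row, tile_col), layer_id) and holding its set of pixel
    coordinates. A super-fragment formed this way need not be contiguous."""
    h, w = len(region_labels), len(region_labels[0])
    frags = {}
    for y in range(h):
        for x in range(w):
            tile = (y // tile_size, x // tile_size)  # which tile the pixel is in
            layer = region_labels[y][x]              # which region layer it is in
            frags.setdefault((tile, layer), set()).add((y, x))
    return frags
```

A tile entirely inside one region layer yields a single super-fragment covering the whole tile (as with tile 650b above), while a tile crossed by a region boundary yields one super-fragment per region layer it touches.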
[0071] As discussed, frame portions may be defined by dividing a video frame into tiles or coding units or by defining super-fragments. In some examples, using tiles may offer the advantage of simplicity, while super-fragments may be more complex but may offer the advantage of enhanced inter- or intra-prediction or enhanced image processing. In either case, the frame portions may be partitioned as discussed herein.
[0072] As discussed further below, the segmentation of video frame 310 into tiles, coding units, or super-fragments may be performed by prediction partitions generator module 105 (or a tiles, coding units, or super-fragments generator module of prediction partitions generator module 105, for example), by another module (for example, a tiles, coding units, or super-fragments generator module) inserted between adaptive picture organizer 104 and prediction partitions generator 105, or by adaptive picture organizer module 104 (or a tiles, coding units, or super-fragments generator module of adaptive picture organizer module 104, for example).
[0073] FIGURE 7 is a flow diagram illustrating a subset of an example encoding process 700, arranged in accordance with at least some implementations of the present disclosure. Process 700 may include one or more operations, functions, or actions as illustrated by one or more of operations 702, 704, 706, and/or 708. Process 700 may form at least part of a next generation video encoding process. By way of non-limiting example, process 700 may form at least part of a next generation video encoding process as undertaken by encoder system 100 of FIGURE 1.
[0074] Process 700 may begin at operation 702, "Receive a Video Frame", where a video frame may be received. The video frame may be any video frame as discussed herein.

[0075] Process 700 may continue at operation 704, "Segment the Video Frame into Tiles, Coding Units, or Super-fragments", where the video frame may be segmented into tiles, coding units, or super-fragments using any technique or techniques as discussed herein.
[0076] Process 700 may continue at operation 706, "Determine a Chosen Prediction Partitioning Technique (e.g., k-d Tree or Bi-tree Partitioning) for a Tile, Coding Unit, or Super-fragment", where the chosen partitioning technique may be determined for a frame portion (for example, a tile, coding unit, or super-fragment). In some examples, the chosen partitioning technique may be based at least in part on a picture type of the video frame. As discussed, in some examples, the partitioning technique for the frame portion may be a structured partitioning technique chosen from among a binary-tree (bi-tree) partitioning technique or a k-d tree partitioning technique. For example, a structured partitioning technique may provide partitions that result in organized data that can be coded efficiently via a bitstream. For example, bi-tree partitioning or k-d tree partitioning may provide a bit coding that includes a 0 for no cut and a 1 for a cut, followed by a 0 or 1 indicating a horizontal or vertical cut, and that repeats the pattern for further cuts until a termination (for example, a no-cut). Such structured partitionings may also be coded efficiently using codebook techniques as discussed herein.
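The cut bit pattern just described amounts to a pre-order serialization of the cut tree: emit a 1 plus an orientation bit at each cut, and a 0 at each termination. The dictionary-based tree representation below is an illustrative assumption, not the codec's normative syntax.

```python
def encode_partition(node, bits):
    """Pre-order serialization of a cut tree into the described bit pattern:
    0 = no cut (terminate this partition), 1 = cut, followed by an
    orientation bit (0 = horizontal, 1 = vertical), then the two resulting
    halves in turn."""
    if node is None:                            # leaf: this partition is not cut
        bits.append(0)
        return
    bits.append(1)                              # a cut occurs here
    bits.append(0 if node["cut"] == "h" else 1) # orientation of the cut
    encode_partition(node["left"], bits)        # first resulting half
    encode_partition(node["right"], bits)       # second resulting half
```

For instance, a vertical cut whose left half is then cut horizontally serializes as `1 1` (vertical cut), `1 0` (horizontal cut), and three terminating zeros for the uncut leaves.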
[0077] As discussed, in some examples, the chosen partitioning technique may be determined based on a picture type of the video frame. For example, as discussed, a video frame may be an I-picture, a P-picture, or an F/B-picture. In some examples, for video frames that are I-pictures, the chosen partitioning technique may be the k-d tree partitioning technique. Further, in such examples, the frame portion may comprise a tile of size 64 x 64 pixels or a super-fragment determined based on a tile size of 64 x 64 pixels. In some examples, for video frames that are P-pictures, the chosen partitioning technique may be the bi-tree partitioning technique. Further, in such examples, the frame portion may comprise a tile of size 32 x 32 pixels or a super-fragment determined based on a tile size of 32 x 32 pixels for lower resolution P-pictures, or the frame portion may comprise a tile of size 64 x 64 pixels or a super-fragment determined based on a tile size of 64 x 64 pixels for higher resolution P-pictures. In some examples, for video frames that are F/B-pictures, the chosen partitioning technique may be the bi-tree partitioning technique. Further, in such examples, the frame portion may comprise a tile of size 64 x 64 pixels or a super-fragment determined based on a tile size of 64 x 64 pixels.
[0078] In other examples, the chosen partitioning technique may be based at least in part on a characteristic of the tile, coding unit, or super-fragment (for example, of the frame portion). For example, the characteristic may include an expected amount of intra-blocks in the at least one frame portion. The expected amount of intra-blocks may be determined by prediction partitions generator 105 or encode controller 103, for example. In some examples, the chosen partitioning technique may be the k-d tree partitioning technique when the expected amount of intra-blocks is greater than a threshold, and the chosen partitioning technique may be the bi-tree partitioning technique when the expected amount of intra-blocks is less than the threshold.
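Collecting the selection rules from the two paragraphs above into a single sketch: the function signature, the resolution flag, and the numeric threshold value are illustrative assumptions, since the text gives the heuristics but not a concrete threshold.

```python
def choose_partitioning(picture_type, expected_intra_blocks=0.0,
                        threshold=0.5, high_resolution=False):
    """Pick a structured partitioning technique and base tile size.

    Follows the examples in the text: I-pictures (or portions with many
    expected intra-blocks) use k-d tree partitioning on 64x64 tiles;
    P-pictures use bi-tree partitioning on 32x32 tiles at lower resolution
    or 64x64 at higher resolution; F/B-pictures use bi-tree partitioning
    on 64x64 tiles. The 0.5 threshold is an arbitrary placeholder."""
    if picture_type == "I" or expected_intra_blocks > threshold:
        return ("kd-tree", 64)
    if picture_type == "P":
        return ("bi-tree", 64 if high_resolution else 32)
    if picture_type in ("F", "B", "F/B"):
        return ("bi-tree", 64)
    raise ValueError(f"unknown picture type: {picture_type!r}")
```

Usage mirrors the text: `choose_partitioning("P")` yields bi-tree partitioning on 32 x 32 tiles, while a P-picture portion with a high expected amount of intra-blocks falls back to k-d tree partitioning.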
[0079] Process 700 may continue at operation 708, "Partition the Tile, Coding Unit, or Super-fragment into a Plurality of Potential Prediction Partitions Using the Chosen Prediction Partitioning Technique", where the frame portion may be partitioned into a plurality of partitions using the chosen partitioning technique. For example, the frame portion (for example, a tile, coding unit, or super-fragment) may be partitioned using the chosen partitioning technique (for example, chosen from among k-d tree partitioning or bi-tree partitioning). For example, the frame portion may be partitioned by prediction partitions generator module 105 of encoder 100. For example, the frame portion may be partitioned into a plurality of potential prediction partitions, which may be evaluated via a rate distortion optimization or the like to determine selected prediction partitions, which may be used for coding as discussed herein. Further, the selected prediction partitions (for example, a selected partitioning) may be represented, as discussed herein, by partitioning indicators or codewords, which may be coded in a bitstream, such as bitstream 111 or bitstream 1000 or the like.
[0080] As discussed above, depending, for example, on a picture type of a frame or on a characteristic of a frame portion, the frame portion may be partitioned using a chosen partitioning technique that is chosen from among bi-tree partitioning or k-d tree partitioning. Further, as discussed above, the frame portion may be a tile, coding unit, or super-fragment or the like, and the frame portion may have substantially any shape (for example, particularly when super-fragments are employed). For the sake of clarity of presentation, bi-tree partitioning and k-d tree partitioning will be described with respect to a square tile, coding unit, or super-fragment. However, the described techniques may be applied to any shape of tile, coding unit, or super-fragment. In some examples, the described partitioning may partition down to a smallest allowable size, such as, for example, 4 x 4 pixels or the like.
[0081] FIGURE 8 illustrates example partitioning of a frame portion 800 using a bi-tree partitioning technique, arranged in accordance with at least some implementations of the present disclosure. As shown, in some examples, frame portion 800 may have a square shape. As discussed, in various examples, frame portion 800 may have any suitable shape. Further, frame portion 800 may include a tile, coding unit, or super-fragment or the like as discussed herein. Also, in some examples, frame portion 800 may itself be a partition, such that the illustrated partitions may be considered sub-partitions. Such examples may occur when partitions are further partitioned for coding (for example, transform coding) via coding partitions generator module 107, as will be discussed further herein below.
[0082] As shown in FIGURE 8, binary tree partitioning can include a partitioning progression. Starting with frame portion 800, a partition 1 can be defined as frame portion 800 itself. Partition 1 can be partitioned vertically into two partitions 2, 3. Each of partitions 2, 3 can be further partitioned, this time vertically, into partitions 4, 5 (for example, the partitions of partition 3) and partitions 6, 7 (for example, the partitions of partition 2). The second row (from the top) of FIGURE 8 illustrates a further vertical partitioning of partition 3 into partitions 8, 9 and a further vertical partitioning of partition 2 into partitions 10, 11. The third row (from the top) of FIGURE 8 illustrates horizontal partitioning (for example, instead of the vertical partitioning of the first row (from the top)) to generate partitions 12, 13 from partition 1. The third row (from the top) of FIGURE 8 also illustrates a further vertical partitioning of partitions 12, 13 to generate partitions 14, 15 (for example, the partitions of partition 13) and partitions 16, 17 (for example, the partitions of partition 12). The fourth or bottom row illustrates a further horizontal partitioning of partition 12 to generate partitions 18, 19 and of partition 13 to generate partitions 20, 21. As illustrated, binary tree partitioning can be used recursively, one dimension at a time (for example, horizontally and vertically), to subdivide or partition each partition into two equal partitions until the smallest partition size may be reached. Binary tree partitioning can partition a frame portion into a wide number of combinations and can provide a smooth progression of partitions.
[0083] FIGURE 9 illustrates an example partitioning of a frame portion 900 using the k-d tree partitioning technique, arranged in accordance with at least some implementations of the present description. As shown, in some examples, frame portion 900 may include a square shape. As discussed, in various examples, frame portion 900 can include any suitable shape. In addition, frame portion 900 may include a tile, coding unit, or superfragment or the like as discussed in this document.
[0084] As shown in FIGURE 9, k-d tree partitioning can include a partitioning progression. Furthermore, as illustrated, k-d tree partitioning can be a superset of binary tree partitioning, so that rows 1 to 4 of FIGURE 9 (starting from the top of FIGURE 9) can correspond to rows 1 to 4 of FIGURE 8. In some examples, the k-d tree partitioning process illustrated in FIGURE 9 can iteratively divide frame portion 900 into four rectangular partitions in a specific dimension (for example, vertical or horizontal). Starting with frame portion 900, a partition 1 can be defined as frame portion 900 itself. Partition 1 can be partitioned vertically into two partitions 2, 3. Each of partitions 2, 3 can be further partitioned, this time vertically, into partitions 4, 5 (for example, the partitions of partition 3) and partitions 6, 7 (for example, the partitions of partition 2). The second row (from the top) of FIGURE 9 illustrates a further vertical partitioning of partition 3 into partitions 8, 9 and a further vertical partitioning of partition 2 into partitions 10, 11. The third row (from the top) of FIGURE 9 illustrates horizontal partitioning (for example, instead of the vertical partitioning of the first row (from the top)) to generate partitions 12, 13 from partition 1. The third row (from the top) of FIGURE 9 also illustrates a further vertical partitioning of partitions 12, 13 to generate partitions 14, 15 (for example, the partitions of partition 13) and partitions 16, 17 (for example, the partitions of partition 12). The fourth row (from the top) illustrates a further horizontal partitioning of partition 12 to generate
partitions 18, 19 and of partition 13 to generate partitions 20, 21. [0085] As discussed, through the fourth row, k-d tree partitioning can substantially correspond to binary tree partitioning. As illustrated in the fifth row (from the top) of FIGURE 9, frame portion 900 can be partitioned vertically into partitions of 1/4 and 3/4 sizes to generate partitions 22, 23. In addition, partition 23 can be partitioned in half vertically to generate partitions 24, 25, and partition 22 can be partitioned in half vertically to form partitions 26, 27. As shown in the sixth or bottom row of FIGURE 9, frame portion 900 can be partitioned horizontally into partitions of 1/4 and 3/4 sizes to generate partitions 28, 29. In addition, partition 28 can be partitioned in half horizontally to generate partitions 30, 31, and partition 29 can be partitioned in half horizontally to form partitions 32, 33. Such a partitioning process can be repeated recursively, alternating dimensions (for example, horizontal and vertical), to subdivide or partition each partition into 2 equal parts (halves) and 2 unequal parts (for example, at a ratio of 1:3) until the smallest partition size may be reached. K-d tree partitioning can partition a frame portion into a wide number of combinations, not only at the midpoints of partitions and subpartitions (and so on), but also with further accuracy on each access. In the illustrated example, one-quarter accuracy is used. In other examples, any accuracy may be used; for example, one-third or one-fifth accuracy or the like can be used.
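The extra flexibility of k-d tree partitioning over binary tree partitioning is the set of cut positions it allows. As a hedged sketch (the function name and return shape are assumptions; one-quarter accuracy is taken from the example above), the candidate cuts of a single partition can be enumerated as:

```python
# Hypothetical enumeration of k-d tree split candidates with one-quarter
# accuracy: a w x h partition may be cut at 1/4, 1/2, or 3/4 of its extent
# in either dimension, subject to a minimum partition size.
def kd_split_candidates(w, h, min_size=4):
    """Return candidate ('vertical'|'horizontal', (size_a, size_b)) splits."""
    candidates = []
    for num in (1, 2, 3):                      # cut at 1/4, 2/4, 3/4
        cut = w * num // 4
        if cut >= min_size and w - cut >= min_size:
            candidates.append(('vertical', (cut, w - cut)))
        cut = h * num // 4
        if cut >= min_size and h - cut >= min_size:
            candidates.append(('horizontal', (cut, h - cut)))
    return candidates
```

For a 32 x 32 partition this yields six candidates (1/4-3/4, half-half, and 3/4-1/4 in each dimension), whereas binary tree partitioning would offer only the two half-half cuts.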
[0086] As discussed in relation to FIGURES 8 and 9, frame portions can be partitioned into a wide range of partitions. Each of the partitions can be indexed with an index value and transmitted to the encode controller 103 (see FIGURE 1). The indexed and transmitted partitions can include hundreds of partitions, for example. One or more partitions can be used as discussed in relation to FIGURE 1 for prediction and/or coding (for example, transform coding; in some examples, partitions for coding can be further partitioned into subpartitions). For example, I-pictures can be predicted entirely using intra-prediction, P-pictures can use both inter-prediction and intra-prediction, although inter-prediction can be the primary source of prediction for P-pictures, and F/B-pictures can also use both inter- and intra-prediction. For example, the encode controller 103 can select the partitions for use in inter-prediction and intra-prediction operations. The data associated with the inter-prediction and/or intra-prediction and the data defining the partitions used can be encoded into a bit stream, for example, as is discussed further below in this document.
[0087] In some examples, the wide range of partitioning options can be limited or constrained. Such a constraint can be applied in both the binary tree and k-d tree partitioning examples. For example, partitioning the frame portion (for example, the tile, coding unit, or superfragment) may include predefining a first partition that halves the frame portion in a first dimension (for example, horizontal or vertical) and predefining a second partition that halves the frame portion in a second dimension (for example, the opposite of the first halving). Further partitions may be made only after such initial constrained partitioning, so that, for example, other optional partitions based on the initial frame portion are no longer available. Such constraints may provide for beginning with 64 x 64 pixel frame portions, dividing the frame portion into 32 x 32 size subportions, and then partitioning each subportion by k-d tree or binary tree partitioning, which can limit the number of partitions.
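The constrained start described above can be illustrated concretely. This is a sketch under the stated assumptions (64 x 64 portions, one vertical and one horizontal halving predefined); the function name and tuple layout are invented for the example.

```python
# Hypothetical constrained initial partitioning: the first predefined split
# halves the portion in one dimension, the second halves each half in the
# other dimension, yielding four fixed subportions (e.g., four 32x32
# subportions of a 64x64 frame portion) before any free partitioning.
def constrained_initial_partition(w=64, h=64):
    """Return the four (x, y, w, h) subportions of the constrained start."""
    half_w, half_h = w // 2, h // 2
    return [(x, y, half_w, half_h)
            for y in (0, half_h) for x in (0, half_w)]
```

Each of the four subportions would then be partitioned independently by binary tree or k-d tree partitioning, which shrinks the candidate search space relative to partitioning the full 64 x 64 portion freely.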
[0088] The prediction partitions and/or coding partitions can be defined (for example, their shape and/or location can be defined) for use by the encoder and/or decoder. In some examples, an individual prediction partition of a frame portion, or a coding partition of a prediction partition, can be defined using symbol-run coding based on pixel blocks. In other examples, an individual prediction partition of a frame portion, or a coding partition of a prediction partition, can be defined using a codebook. Table 1 illustrates example binary tree partitioning codebook entries for a 32 x 32 pixel fragment size for use in defining a partition of a tile, coding unit, or superfragment. In Table 1, the large Xs represent non-terminating partitions.
index   Format                        Number of Partitions
0       (partition pattern diagram)   1
1       (partition pattern diagram)   2
2       (partition pattern diagram)   2
3       (partition pattern diagram)   3
4       (partition pattern diagram)   3
5       (partition pattern diagram)   3
6       (partition pattern diagram)   3
7       (partition pattern diagram)   3
8       (partition pattern diagram)   3
9       (partition pattern diagram)   3
10      (partition pattern diagram)   3
11      (partition pattern diagram)   3
TABLE 1: EXAMPLE BINARY TREE PARTITIONING CODEBOOK ENTRIES
[0089] Table 2 illustrates example k-d tree partitioning codebook entries for a 32 x 32 pixel fragment size for use in defining a partition of a tile, coding unit, or superfragment.
index   Format                        Number of Partitions
0       (partition pattern diagram)   1
1       (partition pattern diagram)   2
2       (partition pattern diagram)   2
3       (partition pattern diagram)   2
4       (partition pattern diagram)   2
5       (partition pattern diagram)   2
6       (partition pattern diagram)   2
7       (partition pattern diagram)   3
8       (partition pattern diagram)   3
9       (partition pattern diagram)   3
10      (partition pattern diagram)   3
11      (partition pattern diagram)   3
12      (partition pattern diagram)   3
TABLE 2: EXAMPLE K-D TREE PARTITIONING CODEBOOK ENTRIES
[0090] Tables 1 and 2 show only example codebook entries. A complete codebook of entries can provide a complete or substantially complete listing of all possible entries and their encodings. In some examples, the codebook may take into account constraints as described above. In some examples, the data associated with a codebook entry for a partition (or subpartition) can be encoded into a bit stream for use by a decoder as discussed in this document.
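The codebook mechanism of Tables 1 and 2 can be sketched as a simple two-way mapping. The three entries below are a hypothetical miniature codebook invented for illustration (the real codebooks enumerate far more layouts, and the rectangle representation is an assumption, not the patent's encoding):

```python
# Hypothetical miniature codebook: each index maps to a partition layout for
# a 32x32 fragment, given as (x, y, w, h) rectangles. The encoder writes the
# index to the bit stream; the decoder looks the layout back up.
CODEBOOK = {
    0: ((0, 0, 32, 32),),                      # no split, 1 partition
    1: ((0, 0, 16, 32), (16, 0, 16, 32)),      # vertical halves, 2 partitions
    2: ((0, 0, 32, 16), (0, 16, 32, 16)),      # horizontal halves, 2 partitions
}
INDEX = {layout: idx for idx, layout in CODEBOOK.items()}

def encode_partitioning(layout):
    return INDEX[tuple(layout)]                # index placed in the bit stream

def decode_partitioning(index):
    return CODEBOOK[index]                     # layout recovered by the decoder
```

A lossy variant, as mentioned for superfragment boundaries, would instead return the index of the closest available pattern rather than requiring an exact match.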
[0091] As discussed, frame portions (for example, tiles, coding units, or superfragments) can be partitioned based on a chosen partitioning technique (for example, binary tree partitioning or k-d tree partitioning) to generate prediction partitions. The prediction partitions can be used for coding based on inter-prediction techniques and/or intra-prediction techniques. A local decoding loop implemented in the encoder can generate predicted partitions, which can be used to generate prediction error data partitions or residuals (for example, differences between predicted partitions and original pixel data). In some cases, the prediction error data partitions associated with the prediction partitions can be coded, and they can therefore be described as prediction partitions or coding partitions substantially interchangeably. Such cases can occur in the context of intra-prediction in I-pictures (or, in some implementations, in the context of intra-prediction in P-pictures and F/B-pictures), for example. In other cases (for example, in P-pictures and F/B-pictures), the prediction error data partitions can be evaluated to determine whether they need to be coded and, if so, the associated partitions can be further partitioned into coding partitions for coding. In addition, the prediction partitions and/or coding partitions can be characterized or defined using symbol-run coding or a codebook or the like. Further, as discussed, the data associated with the described prediction partitions and/or coding partitions, the prediction data, and so on can be encoded (for example, via an entropy encoder) into a bit stream. The bit stream can be communicated to a decoder, which can use the encoded bit stream to decode video frames for display.
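The prediction/residual relationship described above can be stated in a few lines. This is a sketch only: the sum-of-absolute-differences test is a stand-in for the encoder's actual decision logic (for example, rate distortion optimization), which this excerpt does not specify.

```python
# Sketch: a prediction error data partition is the per-pixel difference
# between original pixel data and the predicted partition; a simple energy
# check then decides whether the residual needs to be coded at all.
def prediction_error(original, predicted):
    """Return the residual of two equally sized 2D pixel blocks."""
    return [[o - p for o, p in zip(row_o, row_p)]
            for row_o, row_p in zip(original, predicted)]

def needs_coding(residual, threshold=0):
    """Hypothetical decision: code the residual only if its SAD exceeds a threshold."""
    return sum(abs(v) for row in residual for v in row) > threshold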
[0092] FIGURE 10 illustrates an example bit stream 1000, arranged in accordance with at least some implementations of the present description. In some examples, bit stream 1000 may correspond to output bit stream 111 as shown in FIGURE 1 and/or input bit stream 201 as shown in FIGURE 2. Although not shown in FIGURE 10 for clarity of presentation, in some examples bit stream 1000 can include a header portion and a data portion. In various examples, bit stream 1000 may include data, indicators, codewords, index values, mode selection data, reference type data, or the like associated with the encoding of a video frame as discussed in this document. As shown, in some examples, bit stream 1000 may include coding partition definition data 1010, coded prediction error data partition data 1020 (for example, prediction error data that has been transform coded and quantized), prediction partition definition data 1030, inter-prediction data 1040, prediction partition definition data 1050, and/or intra-prediction data 1060. The illustrated data can be included in any order in bit stream 1000 and can be adjacent to, or separated by, any of a variety of other additional data for coding the video.
[0093] For example, the coding partition definition data 1010 may include data associated with coding partitions defined via coding partition generator 107. For example, the coding partition definition data 1010 may include data associated with defining binary tree partitions using a symbol-run coding or codebook technique or the like, such as coding partition indicators or codewords or the like. Furthermore, the coding partition definition data 1010 can be associated with the coded prediction error data partition data 1020 generated via adaptive transform module 108 and/or adaptive entropy encoder module 110. The coded prediction error data partition data 1020 (for example, transform coefficients or the like) can include transform coded and quantized residual data. The coded prediction error data partition data 1020 can be transmitted via bit stream 1000 for decoding.
[0094] The prediction partition definition data 1030 may include data associated with prediction partitions defined via prediction partition generator 105, for example. The prediction partition definition data 1030 can include data associated with defining either binary tree partitions or k-d tree partitions, as discussed in this document. The prediction partition definition data 1030 can define partitions associated with inter-prediction data or intra-prediction data in various examples. For example, the prediction partition definition data 1030 may include data associated with defining binary tree partitions or k-d tree partitions using a symbol-run coding or codebook technique or the like, such as prediction partition indicators or codewords or the like. In some examples, the prediction partition definition data 1030 may be associated with the inter-prediction data 1040, so that the inter-prediction data 1040 can be configured to provide prediction (for example, motion compensation or the like) for the partition defined by the prediction partition definition data 1030.
[0095] The prediction partition definition data 1050 can include data associated with prediction partitions defined via prediction partition generator 105, for example. The prediction partition definition data 1050 can include data associated with defining either binary tree partitions or k-d tree partitions, as discussed in this document. In some examples, prediction partition definition data 1030 and prediction partition definition data 1050 can define prediction partitions in the same frame. In other examples, prediction partition definition data 1030 and prediction partition definition data 1050 can define partitions in different frames. The prediction partition definition data 1050 can define prediction partitions associated with inter-prediction data or intra-prediction data in various examples. For example, the prediction partition definition data 1050 may include data associated with defining binary tree partitions or k-d tree partitions using a symbol-run coding or codebook technique or the like. In some examples, prediction partition definition data 1050 can be associated with intra-prediction data 1060, so that the intra-prediction data 1060 can be configured to provide intra-prediction for the prediction partition defined by the prediction partition definition data 1050. In some examples, the intra-prediction data 1060 can be generated by the prediction analyzer and prediction fusion filtering module 124 or the like.
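The six bit stream fields just enumerated can be sketched as an ordered container. This toy serializer is a hedged illustration only: the field names are invented labels for items 1010-1060, and the text explicitly allows the fields to appear in any order and interleaved with other data, whereas the sketch fixes one order so that a matching parser can recover them.

```python
# Hypothetical flat layout for the fields of bit stream 1000 (names are
# illustrative labels for elements 1010-1060, not from the patent).
FIELDS = ("coding_partition_defs",       # 1010
          "coded_prediction_error",      # 1020
          "prediction_partition_defs",   # 1030
          "inter_prediction_data",       # 1040
          "prediction_partition_defs2",  # 1050
          "intra_prediction_data")       # 1060

def pack(values):
    """Serialize field values in the fixed FIELDS order."""
    return [(name, values[name]) for name in FIELDS]

def parse(stream):
    """Recover a field-name -> value mapping from a packed stream."""
    return dict(stream)
```

A real bit stream would of course entropy code these fields into bits rather than keep named tuples; the sketch only shows the pack/parse symmetry between encoder and decoder.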
[0096] As discussed, bit stream 1000 can be generated by an encoder such as, for example, encoder 100 and/or received by a decoder 200 for decoding, so that video frames can be presented via a display device.
[0097] FIGURE 11 is a flowchart showing an example decoding process 1100, arranged in accordance with at least some implementations of the present description. Process 1100 may include one or more operations, functions, or actions, as illustrated by one or more of operations 1102, 1104, 1106, 1108, 1109,
1110, 1112, and/or 1114. Process 1100 can form at least part of a next generation video coding process. By way of non-limiting example, process 1100 can form at least part of a next generation video decoding process as performed by decoder system 200 of FIGURE 2.
[0098] Process 1100 can begin at operation 1102, Receive Encoded Bit Stream, in which a bit stream can be received. For example, a bit stream encoded as discussed in this document can be received at a video decoder. In some examples, bit stream 1000 may be received via decoder 200.
[0099] Process 1100 can continue at operation 1104, Entropy Decode the Encoded Bit Stream to Determine Prediction Partition Indicators, Prediction Modes, Prediction Reference Types, Prediction Parameters, Motion Vectors, Coding Partition Indicator(s), Block Size Data, Transform Type Data, Quantizer (Qp), and Quantized Transform Coefficients, in which the bit stream can be decoded to determine coding partition indicators (or codewords), block size data, transform type data, quantizer (Qp), and quantized transform coefficients. For example, the decoded data can include data associated with a coding partition (for example, transform coefficients) and one or more indicators associated with the coding partition. For example, the transform coefficients can be for a fixed transform or a content adaptive transform as discussed in this document. The transform type data can indicate a transform type for the coding partition, a parametric transform direction (for example, for hybrid parametric transforms), and/or a transform mode (for example, xmmode; used only for intra-coding, it signals between mode choices of using a prediction difference signal or an original signal). In some examples, the decoding can be performed by adaptive entropy decoder module 202. In some examples, determining the transform coefficients can also involve an inverse quantization operation. In some examples, the inverse quantization operation can be performed by adaptive inverse quantization module 203.
[00100] In some examples, the bit stream can be entropy decoded to determine inter-prediction data associated with a first individual prediction partition, data defining the first individual prediction partition, intra-prediction data associated with a second individual prediction partition, data defining the second individual prediction partition, and data associated with an individual prediction error data partition. For example, the first individual prediction partition can include a binary tree partition and the second individual prediction partition can include a k-d tree partition as discussed in this document. In some examples, the bit stream can be entropy decoded to further determine data associated with an individual prediction error data partition as discussed in this document.
[00101] Process 1100 can continue at operation 1106, Apply Quantizer (Qp) to Quantized Coefficients to Generate Inverse Quantized Transform Coefficients, in which the quantizer (Qp) can be applied to the quantized transform coefficients to generate inverse quantized transform coefficients. For example, operation 1106 can be applied via adaptive inverse quantization module 203. For example, the data associated with the individual prediction error data partition can be inverse quantized to generate decoded coefficients.
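As a minimal illustration of operation 1106, inverse quantization rescales the quantized coefficients by the quantizer. Uniform scaling is assumed here; the codec's actual (adaptive) inverse quantization rule is not specified in this excerpt.

```python
# Sketch of operation 1106 under a uniform-quantizer assumption: applying
# the quantizer Qp to the quantized coefficients recovers scaled transform
# coefficients for the subsequent inverse transform.
def inverse_quantize(quantized_coeffs, qp):
    """Return inverse quantized coefficients for a list of quantized values."""
    return [c * qp for c in quantized_coeffs]
```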
[00102] Process 1100 can continue at operation 1108, On Each Decoded Block of Coefficients in a Coding (or Unpartitioned) Partition, Perform an Inverse Transform Based on the Transform Type and Block Size Data to Generate Decoded Prediction Error Partitions, in which, on each decoded block of transform coefficients in a coding (or unpartitioned) partition, an inverse transform based on the transform type and block size data can be performed to generate decoded prediction error partitions. In some examples, the inverse transform can include an inverse size adaptive transform. In some examples, the inverse transform can include an inverse fixed transform. In some examples, the inverse transform may include an inverse content adaptive transform. In such examples, performing the inverse content adaptive transform may include determining basis functions associated with the inverse content adaptive transform based on a neighboring block of decoded video data. Any forward transform used for encoding as discussed in this document can be used for decoding using an associated inverse transform. In some examples, the inverse transform can be performed by adaptive inverse transform module 204. In some examples, generating the decoded prediction error partitions may also include assembling decoded coding partitions via coding partition assembler 205.
[00103] For example, the data associated with the individual prediction error data partition (for example, the decoded coefficients) can be inverse transformed to generate a prediction error data partition (for example, error data for a prediction partition), or decoded coding partitions can be combined or assembled to generate a prediction error data partition. In some examples, an inverse quantization and an inverse transform can be performed based on the data associated with the individual prediction error data partition to generate decoded coding partitions (for example, the decoded coding partitions of the prediction error data partition). In some examples, the coding partitions can be binary tree coding partitions of the prediction error data partition as discussed.
[00104] Process 1100 can continue at operation 1109, Using Prediction Partition Indicators, Prediction Modes, Prediction Reference Types, and Motion Vectors, Generate Predicted Partitions, in which the prediction partition indicators, prediction modes, prediction reference types, and motion vectors can be used (along with prediction reference pictures) to generate predicted partitions (for example, pixel data for, or associated with, prediction partitions).
[00105] Process 1100 can continue at operation 1110, Add Predicted Partitions to Corresponding Decoded Prediction Error Data Partitions to Generate Reconstructed Partitions, in which predicted (decoded) partitions can be added to the decoded prediction error data partitions to generate reconstructed prediction partitions. For example, the decoded prediction error data partition can be added to the associated predicted partition using adder 206. In some examples, a reconstructed partition can be generated by performing inter-prediction or intra-prediction (for example, a previously decoded partition or the like can be used via inter-prediction or intra-prediction to generate a predicted partition). In some examples, motion compensation can be performed to generate a decoded individual predicted partition based on the inter-prediction data (for example, motion vectors) decoded from the bit stream. In some examples, intra-prediction can be performed for a decoded individual predicted partition based on the intra-prediction data decoded from the bit stream.
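Operation 1110 amounts to a per-pixel addition. In this sketch the sum is clipped to an 8-bit pixel range; the clipping range is an assumption for the example and is not stated in the text.

```python
# Sketch of operation 1110: a reconstructed partition is the sum of the
# predicted partition and the decoded prediction error data partition,
# clipped to [0, 255] (an assumed 8-bit pixel range).
def reconstruct(predicted, error):
    """Add two equally sized 2D blocks and clip to the valid pixel range."""
    return [[max(0, min(255, p + e)) for p, e in zip(row_p, row_e)]
            for row_p, row_e in zip(predicted, error)]
```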
[00106] Process 1100 can continue at operation 1112, Assemble Reconstructed Partitions to Generate a Tile, Coding Unit, or Superfragment, in which the reconstructed prediction partitions can be assembled to generate tiles, coding units, or superfragments. For example, the reconstructed prediction partitions can be assembled to generate tiles, coding units, or superfragments using prediction partition assembler module 207.
[00107] Process 1100 can continue at operation 1114, Assemble Tiles, Coding Units, or Superfragments of a Picture to Generate a Fully Decoded Picture, in which the tiles, coding units, or superfragments of a picture can be assembled to generate a fully decoded picture. For example, after optional deblock filtering and/or quality restoration filtering, the tiles, coding units, or superfragments can be assembled to generate a fully decoded picture, which can be stored via decoded picture buffer 210 and/or transmitted for presentation via a display device after processing via adaptive picture re-organizer module 217 and content post-restorer module 218.
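The assembly in operations 1112-1114 can be pictured as pasting reconstructed blocks at their offsets into a frame buffer. This is a toy version (the offset-carrying tuple format is an assumption; real assembly also interacts with deblock and restoration filtering, omitted here):

```python
# Toy sketch of partition/tile assembly: reconstructed partitions, each
# carrying its (x, y) offset, are pasted into a frame buffer to assemble
# tiles and, ultimately, the fully decoded picture.
def assemble(width, height, partitions):
    """partitions: iterable of (x, y, rows), where rows is a 2D pixel block."""
    frame = [[0] * width for _ in range(height)]
    for x, y, rows in partitions:
        for dy, row in enumerate(rows):
            frame[y + dy][x:x + len(row)] = row  # paste one row of the block
    return frame
```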
[00108] Various components of the systems described in this document can be implemented in software, firmware, and/or hardware and/or any combination thereof. For example, various components of system 300 can be provided, at least in part, by hardware of a computing System-on-a-Chip (SoC), such as may be found in a computing system such as, for example, a smartphone. Those skilled in the art may recognize that the systems described in this document may include additional components that have not been depicted in the corresponding FIGURES. For example, the systems discussed in this document may include additional components, such as a bit stream multiplexer or demultiplexer module and the like, that have not been depicted for the sake of clarity.
[00109] FIGURES 12(A) and 12(B) are illustrative diagrams of example encoder subsystems 1200 and 1210, arranged in accordance with at least some implementations of the present description. In some examples, encoder subsystem 1200 or 1210 can be implemented via encoder 100 as shown in FIGURE 1. As shown in FIGURE 12(A), encoder subsystem 1200 may include prediction partition generator module 105, as discussed above. As shown, in some examples, prediction partition generator module 105 may include a tile, coding unit, or superfragment generator module 1201, which can generate tiles, coding units, or superfragments as discussed in this document.
[00110] As shown in FIGURE 12(B), encoder subsystem 1210 may include a separate tile, coding unit, or superfragment generator module 1201 implemented between adaptive picture organizer 104 (which may not be considered a part of encoder subsystem 1210 in some implementations) and prediction partition generator 105. In other examples, the separate tile, coding unit, or superfragment generator module 1201 can be implemented via adaptive picture organizer 104, and adaptive picture organizer 104 can be considered a part of the encoder subsystem.
[00111] As discussed, in some examples, superfragments can be coded. In some examples, the superfragments can be coded by symbol-run coding, which can exploit a correlation between neighboring blocks along a one-dimensional (1D) scan, due to the fact that neighboring blocks are likely to belong to the same region. In other examples, a codebook can be used to approximate the boundaries of a frame portion on an equally spaced, or substantially equally spaced, tile grid of 32 x 32 pixels or 64 x 64 pixels or the like. In such examples, the main boundaries through each tile can be approximated with the closest pattern available in a codebook, and a code that matches the pattern can be included in a bit stream for use by a decoder. In some examples, such boundary representations can be lossy so as to minimize bit cost.
[00112] In various implementations, frame portions (for example, tiles, coding units, or superfragments) can be generated by, or transmitted to, prediction partition generator module 105, which may include binary tree partition generator module 1202 and k-d tree partition generator module 1203. As shown, the frame portions can be input either to binary tree partition generator module 1202 or to k-d tree partition generator module 1203, depending on the operation of switches 1204, 1205. In some examples, switches 1204, 1205 may operate based on a picture type of the frame of the frame portions. For example, if the frame is an I-picture, the received frame portion can be input to k-d tree partition generator module 1203 via switches 1204, 1205. If the frame is a P-picture or F/B-picture, the received frame portion can be input to binary tree partition generator module 1202 via switches 1204, 1205, for example. In other examples, switches 1204, 1205 can operate based on a characteristic of the received frame portion. For example, if the expected amount of intra-blocks of the frame portion is greater than a threshold, the frame portion can be input to k-d tree partition generator module 1203, and if the expected amount of intra-blocks of the frame portion is less than the threshold, the frame portion can be input to binary tree partition generator module 1202. In various examples, the threshold can be predefined or heuristically determined or the like.
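The switch logic of FIGURE 12 can be sketched as a small selector. The function and its default threshold are assumptions for illustration; the text states only that the threshold may be predefined or heuristically determined.

```python
# Sketch of the switch (1204, 1205) behavior: I-pictures are routed to the
# k-d tree partitioner; P- and F/B-pictures to the binary tree partitioner.
# For other inputs, a (hypothetical) content rule compares the expected
# fraction of intra blocks against an assumed threshold.
def choose_partitioner(picture_type, expected_intra_fraction=0.0, threshold=0.5):
    if picture_type == 'I':
        return 'kd_tree'
    if picture_type in ('P', 'F/B'):
        return 'binary_tree'
    # fall back to the content-based rule for unclassified portions
    return 'kd_tree' if expected_intra_fraction > threshold else 'binary_tree'
```

This mirrors the intuition stated above: intra-heavy content benefits from the finer cut positions of the k-d tree, while inter-predicted content uses the simpler binary tree.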
[00113] As shown, the output of prediction partition generator module 105 (as controlled via switch 1205) may be input to differencer 106, where processing may continue as discussed above with respect to FIGURE 1, such that the second input to differencer 106 is the output of prediction fusing analyzer and prediction fusing filtering module 126, and such that the output of differencer 106 (for example, prediction error data partitions or the like) may optionally be input to coding partition generator 107 as controlled via switches 107a, 107b as discussed herein. In some examples, for inter-prediction in F/B- or P-pictures, the prediction error data partitions may be transmitted to coding partition generator module 107 for further partitioning into coding partitions (for example, via a binary tree partitioning technique). In some examples, for intra-prediction in I-pictures, the prediction error data partitions (or original pixel data) may bypass coding partition generator module 107, such that no additional partitioning is performed prior to transform coding (for example, via adaptive transform module 108). In such examples, portions of frames (for example, tiles, coding units or superfragments) may be partitioned only once, and such partitions may be described as prediction partitions or coding partitions or both, depending on the context. For example, as output from the prediction partition generator, such partitions may be considered prediction partitions (since they are used for prediction), while at adaptive transform module 108, such partitions may be considered coding partitions (since such partitions are transform coded).
[00114] FIGURE 13 is an illustrative diagram of an example decoder subsystem 1300, arranged in accordance with at least some implementations of the present disclosure. In some examples, decoder subsystem 1300 may be implemented via decoder 200 as shown in FIGURE 2. As shown in FIGURE 13, decoder subsystem 1300 may include coding partitions assembler module 205, which may optionally receive input from adaptive inverse transform module 204 (not shown, see FIGURE 2) as controlled via switches 205a, 205b. The output of coding partitions assembler module 205, or the bypassed data (for example, prediction error data partitions), may be provided as an input to adder 206. As discussed, in some examples, the prediction error data partitions may have been transform coded without further partitioning (for example, for intra-prediction in I-pictures), in which case coding partitions assembler module 205 may be bypassed; and, in some examples, the prediction error data partitions may have been further partitioned into coding partitions for transform coding, in which case coding partitions assembler module 205 may assemble such coding partitions into prediction error data partitions.
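The bypass just described can be sketched very simply. The following is a hedged illustration only: the function name is hypothetical, and the quad split stands in for the actual binary tree coding partitioner.

```python
# Hypothetical sketch of the switch logic around the coding partition
# generator: I-picture residuals bypass further partitioning, while P- and
# F/B-picture residuals are split again (a stand-in quad split here; the
# real partitioner would be the binary tree coding partitioner).
def coding_partitions_for(picture_type, residual):
    if picture_type == "I":
        return [residual]  # bypass: transform coded without further partitioning
    h, w = len(residual), len(residual[0])
    quads = []
    for r0, r1 in ((0, h // 2), (h // 2, h)):
        for c0, c1 in ((0, w // 2), (w // 2, w)):
            quads.append([row[c0:c1] for row in residual[r0:r1]])
    return quads
```

The decoder mirrors this choice: partitions that were bypassed at the encoder skip the coding partitions assembler on the way back.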
[00115] The second input to adder 206 (for example, decoded prediction partitions) may be provided from the output of prediction fusing filtering module 216, as discussed above with respect to FIGURE 2. As shown, decoder subsystem 1300 may also include prediction partitions assembler module 207, which may include binary tree partitions assembler module 1301 and k-d tree partitions assembler module 1302. The output of adder 206 (for example, reconstructed prediction partitions) may be input to either binary tree partitions assembler module 1301 or k-d tree partitions assembler module 1302 based on the control of switches 1304, 1305. For example, binary tree partitions may be input to binary tree partitions assembler module 1301 for assembly into portions of frames, and k-d tree partitions may be input to k-d tree partitions assembler module 1302 for assembly into portions of frames (for example, according to the type of partitioning performed at the encoder).
[00116] Further, as shown, in some examples, prediction partitions assembler module 207 may include tiles, coding units or superfragments assembler module 1303, which may be configured to assemble the assembled portions of frames (for example, tiles, coding units or superfragments) into video frames. The video frames from prediction partitions assembler module 207 may be input to deblock filtering module 208 (not shown, see FIGURE 2) for further processing as discussed herein. In other examples, tiles, coding units or superfragments assembler module 1303 may be implemented separately, between prediction partitions assembler module 207 and deblock filtering module 208 (see FIGURE 2).
[00117] Some additional and/or alternative details regarding processes 700, 1100 and other processes discussed herein may be illustrated in one or more examples of the implementations discussed herein and, in particular, with respect to FIGURE 14 below.
[00118] FIGURES 14(A) and 14(B) together provide a detailed illustration of a combined example of video encoder and decoder system 1500 and process 1400, arranged in accordance with at least some implementations of the present disclosure. In the illustrated implementation, process 1400 may include one or more operations, functions or actions as illustrated by one or more of actions 1401 to 1423. By way of non-limiting example, process 1400 will be described herein with reference to example video coding system 1500, which includes encoder 100 of FIGURE 1 and decoder 200 of FIGURE 2, as discussed further herein below with respect to FIGURE 20. In various examples, process 1400 may be performed by a system that includes both an encoder and a decoder, or by separate systems, with one system employing an encoder (and, optionally, a decoder) and another system employing a decoder (and, optionally, an encoder). It is further noted, as discussed above, that an encoder may include a local decode loop employing a local decoder as a part of the encoder system.
[00119] In the illustrated implementation, video coding system 1500 may include logic circuitry 1450, the like, and/or combinations thereof. For example, logic circuitry 1450 may include encoder 100, and may include any modules as discussed with respect to FIGURE 1 (or FIGURE 17) and/or FIGURES 3 to 6, and decoder 200, and may include any modules as discussed with respect to FIGURE 2 and/or FIGURE 18. Although video coding system 1500, as shown in FIGURES 14(A) and 14(B), may include a particular set of blocks or actions associated with particular modules, these blocks or actions may be associated with modules other than the particular modules illustrated herein. Although process 1400, as illustrated, is directed to encoding and decoding, the concepts and/or operations described may be applied to encoding and/or decoding separately and, more generally, to video coding.
[00120] Process 1400 may begin at operation 1401, Receive Input Video Frames of a Video Sequence, where input video frames of a video sequence may be received via encoder 100, for example. The video frame may be any suitable video picture, frame or data, or the like, for coding.
[00121] Process 1400 may continue at operation 1402, Associate a Picture Type with each Video Frame in a Group of Pictures, where a picture type may be associated with each video frame in a group of pictures via content pre-analyzer module 102, for example. For example, the picture type may be an F/B-picture, a P-picture, or an I-picture, or the like. In some examples, a video sequence may include groups of pictures, and the processing described herein (for example, operations 1403 to 1411) may be performed on a frame or picture of a group of pictures; the processing may be repeated for all frames or pictures of a group, and then repeated for all groups of pictures in a video sequence. Further, the video frame may be of low resolution or high resolution.
[00122] Process 1400 may continue at operation 1403, Divide a Picture into Tiles, Coding Units or Superfragments, and Divide the Tiles, Coding Units or Superfragments into Potential Prediction Partitionings, where a picture may be divided into tiles, coding units or superfragments (for example, portions of frames as discussed), and the tiles, coding units or superfragments may be divided into potential prediction partitionings via prediction partition generator 105, for example. The potential prediction partitionings may include binary tree and k-d tree partitionings, as discussed herein. In some examples, the generated prediction partitions may be indexed with prediction partition index values. For example, each of the generated partitions (for example, based on binary tree or k-d tree partitioning) may be indexed with index values 1, 2, 3, ..., n. For example, the prediction partitions may be indexed at prediction partition generator module 105 of encoder 100, and the prediction partitions and prediction partition index values may be transmitted to encode controller module 103.
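As a rough illustration of how a portion of a frame might be divided into indexed binary tree prediction partitions, consider the following sketch. It is a hedged example with hypothetical helper names: real candidate generation would enumerate many alternative cut patterns, not a single fixed recursion.

```python
# Hedged sketch of binary tree partitioning: recursively halve a tile
# rectangle, alternating horizontal and vertical cuts, down to a minimum
# size; partitions are then indexed 1, 2, 3, ..., n as described above.
def bitree_partitions(x, y, w, h, min_size=32, split_h=True):
    can_h, can_v = h > min_size, w > min_size
    if not (can_h or can_v):
        return [(x, y, w, h)]  # leaf partition: (x, y, width, height)
    if (split_h and can_h) or not can_v:
        return (bitree_partitions(x, y, w, h // 2, min_size, False) +
                bitree_partitions(x, y + h // 2, w, h - h // 2, min_size, False))
    return (bitree_partitions(x, y, w // 2, h, min_size, True) +
            bitree_partitions(x + w // 2, y, w - w // 2, h, min_size, True))

def indexed_partitions(w, h, min_size=32):
    # attach prediction partition index values 1..n
    return list(enumerate(bitree_partitions(0, 0, w, h, min_size), start=1))
```

A k-d tree partitioner would differ mainly in allowing cuts at positions other than the midpoint.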
[00123] Process 1400 may continue at operation 1404, For Candidate Prediction Partitioning Combinations, Prediction Modes and Prediction Reference Types, Determine Prediction Parameters, where, for combinations of each of the potential prediction partitionings (for example, candidate prediction partitionings), prediction modes (for example, intra, inter, multi, skip, auto or split, as discussed) and prediction reference types (for example, synthesized or modified prediction reference picture types, or the original picture, and various combinations of past and future versions of such pictures), the prediction(s) may be performed and the prediction parameters may be determined. For example, a range of potential prediction partitionings (each having multiple prediction partitions), potential modes and potential reference types may be generated, and the associated prediction(s), modes and reference types may be determined. A best combination of such prediction partitionings, modes (per prediction partition) and reference types (per prediction partition) may be determined using rate distortion optimization or the like. For example, the prediction(s) may include prediction(s) using multiple reference prediction or intra-prediction based on characteristics and motion, or the like.
[00124] For example, for each prediction partition, a mode (for example, intra, inter, multi, skip, auto or split as discussed) and a reference type (for example, a reference picture chosen from a wide range of options based on past decoded pictures, future decoded pictures, and pictures based on such past and future decoded pictures, including modified pictures - for example, pictures modified based on gain, registration, blur or dominant motion - or synthesized pictures - for example, pictures generated based on reference pictures using super resolution picture generation or projected trajectory picture generation) may be determined. For example, the following tables illustrate example modes and the example reference types available in such modes. Other examples may be used; however, for each prediction partition, a prediction mode indicator (for example, indicating a selected mode for the prediction partition) and a reference type indicator (for example, indicating a reference picture type, if needed) may be generated and encoded into a bitstream.
  Number   Prediction Partition Mode
  0        Intra
  1        Skip
  2        Split
  3        Auto
  4        Inter
  5        Multi

TABLE 3: EXAMPLE PREDICTION PARTITION MODES

  Number   Reference Type
  0        MR0n (= past SR0)
  1        MR1n
  2        MR2n
  3        MR3n
  4        MR5n (past SR1)
  5        MR6n (past SR2)
  6        MR7n (past SR3)
  7        MR0d
  8        MR0g

TABLE 4: EXAMPLE REFERENCE PICTURE TYPES FOR P-PICTURES IN INTER MODE

  Number   Reference Type
  0        MR0n
  1        MR7n (= proj F)
  2        MR3n (= future SR0)
  3        MR1n
  4        MR4n (future SR1)
  5        MR5n (future SR2)
  6        MR6n (future SR3)
  7        MR0d
  8        MR3d
  9        MR0g / MR3g

TABLE 5: EXAMPLE REFERENCE PICTURE TYPES FOR F/B-PICTURES IN INTER MODE
[00125] In Tables 4 and 5, the nomenclature used to designate the reference picture types is as follows: the prefix MR stands for Multiple References, SR stands for Super Resolution, and F stands for future; the number that follows is a counter for the reference, where 0 indicates the immediately previous picture, 1 the one before that, 2 the one before that, 3 the one before that, 4 the first future picture, 5 a newly generated super resolution picture based on 1, 6 a newly generated super resolution picture based on 2, and 7 a newly generated super resolution picture based on 3; and the trailing lowercase letter indicates n for no change, g for gain based modification, d for dominant motion based modification, and b for blur based modification. As discussed, the references provided in Tables 4 and 5 are only example references (for example, not all mathematically possible combinations), for example, only the most advantageous combinations of references based on picture type and mode. Such combinations may form a codebook through which the encoder and decoder may communicate the chosen reference among the available types.
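The nomenclature above is mechanical enough to decode programmatically. The small parser below is a hypothetical illustration of it; the function and dictionary names are not from the patent.

```python
# Decode reference type names such as "MR0n" or "MR3d" per the nomenclature
# above: "MR" prefix, a reference counter digit, and a modification letter.
MODIFICATIONS = {"n": "no change", "g": "gain", "d": "dominant motion", "b": "blur"}

def parse_reference_type(name):
    if not (name.startswith("MR") and name[-1] in MODIFICATIONS):
        raise ValueError("not a recognized reference type: %r" % name)
    return {"counter": int(name[2:-1]), "modification": MODIFICATIONS[name[-1]]}
```

In a codebook, only the entry number would be signaled; names like these exist to make the tables readable.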
[00126] As discussed, in some examples, inter-prediction may be performed. In some examples, up to 4 past and/or future decoded pictures and several modification/synthesis predictions may be used to generate a large number of reference types (for example, reference pictures). For example, in 'inter' mode, up to 9 reference types may be supported in P-pictures (an example of which is given in Table 4), and up to 10 reference types may be supported for F/B-pictures (an example of which is given in Table 5). Further, 'multi' mode may provide a type of prediction method in which, instead of 1 reference picture, 2 reference pictures may be used, and P-pictures and F/B-pictures may allow 3 and up to 8 reference types, respectively. For example, the prediction may be based on a previously decoded frame generated using at least one of the modification and synthesis techniques. In such examples, the bitstream (discussed below with respect to operation 1412) may include a frame reference, modification parameters or synthesis parameters associated with the prediction partition. The combination of predicted partitions generated using 2 references (for example, multi) may be based on an average, a weighted average or the like.
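For the 'multi' mode combination just described, a simple weighted average of two predicted partitions can be sketched as follows. This is a hedged example; the actual weights and combination rule are left open by the text.

```python
# Combine two predicted partitions ('multi' mode) by weighted average.
# weight_a = 0.5 gives the plain average of the two reference predictions.
def combine_multi_ref(pred_a, pred_b, weight_a=0.5):
    return [[weight_a * a + (1.0 - weight_a) * b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(pred_a, pred_b)]
```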
[00127] Process 1400 may continue at operation 1405, For each Candidate Prediction Partitioning, Determine Actual Prediction Partitions Used to Generate Prediction Error Data Partitions, where, for each candidate prediction partitioning, the actual prediction partitions may be generated. For example, for each candidate prediction partitioning and the associated modes and reference types (and associated prediction partitions, prediction(s) and prediction parameters), a prediction error may be determined. For example, determining the potential prediction error may include differencing original pixels (for example, original pixel data of a prediction partition) with prediction pixels. As discussed, in some examples, the prediction error data partitions may include prediction error data generated based, at least in part, on a previously decoded frame generated using at least one of a modification technique or a synthesizing technique.
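The differencing step can be expressed very directly; the sketch below (the helper name is hypothetical) subtracts predicted pixels from original pixels to form a prediction error data partition:

```python
# Prediction error partition: original pixel data minus prediction pixels.
# Values are signed; zero everywhere means a perfect prediction.
def prediction_error(original, predicted):
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]
```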
[00128] Process 1400 may continue at operation 1406, Select the Best Prediction Partitioning and Prediction Type and Save the Corresponding Prediction Modes, Prediction Reference Types and Prediction Parameters, where a prediction partitioning and prediction type may be selected, and the associated parameters (for example, prediction modes, reference types and parameters) may be saved for encoding into the bitstream. In some examples, the potential prediction partitioning with a minimum prediction error may be selected. In some examples, the potential prediction partitioning may be selected based on rate distortion optimization (RDO). In some examples, the associated prediction parameters may be stored and encoded into the bitstream, as discussed, for transmission to, and use by, a decoder.
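Rate distortion optimization selects the candidate minimizing a Lagrangian cost J = D + λ·R. A minimal sketch follows; the candidate tuple format is an assumption for illustration only.

```python
# Pick the candidate prediction partitioning with the lowest RD cost
# J = distortion + lambda * rate. Candidates are (label, distortion, bits).
def select_by_rdo(candidates, lam):
    return min(candidates, key=lambda c: c[1] + lam * c[2])
```

Larger λ favors cheaper (fewer-bit) candidates; smaller λ favors lower distortion.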
[00129] Process 1400 may continue at operation 1407, Perform Fixed or Content Adaptive Transforms with Various Block Sizes on Various Potential Coding Partitionings of Partition Prediction Error Data, where fixed or content adaptive transforms with various block sizes may be performed on various potential coding partitionings of partition prediction error data. For example, partition prediction error data may be partitioned to generate a plurality of potential coding partitionings of coding partitions. For example, partition prediction error data may be partitioned by a binary tree coding partitioner module or a k-d tree coding partitioner module of coding partition generator module 107, as discussed herein. In some examples, partition prediction error data associated with an F/B-picture or P-picture may be partitioned by a binary tree coding partitioner module. In some examples, video data associated with an I-picture (for example, tiles, coding units or superfragments in some examples) may be partitioned by a k-d tree coding partitioner module. In some examples, a coding partitioner module may be chosen or selected via a switch or switches. For example, the partitions may be generated by coding partition generator module 107. As discussed, in some examples, it may be determined whether prediction error or residual data partitions require coding. For example, if the residual is greater than or equal to a threshold (for example, a predefined threshold or a heuristically determined threshold or the like), the residual may be considered to require coding. If the residual is less than the threshold, the residual may be considered not to require coding. For example, it may be determined whether an individual residual requires coding.
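The threshold test on a residual can be sketched as below. The sum-of-absolute-values measure and the threshold value are assumptions for illustration; the text leaves the exact measure open.

```python
# Decide whether a prediction error (residual) partition requires coding:
# here, compare the sum of absolute residual values against a threshold.
def residual_requires_coding(residual, threshold):
    sad = sum(abs(v) for row in residual for v in row)
    return sad >= threshold
```

Residuals that fail the test can be signaled as zero partitions rather than transform coded.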
[00130] The coding partitionings and various combinations of fixed and adaptive transforms performed on the coding partitionings may be used to determine an optimal or selected coding partitioning and associated transforms based on rate distortion optimization or the like, as discussed below. In some examples, the generated coding partitions (for example, based on k-d tree partitioning) may each be indexed with index values 1, 2, 3, ..., m. For example, the coding partitions may be indexed at coding partition generator module 107 of encoder 100. The generated coding partitions and the associated coding partition index values may be transmitted to encode controller 103.
[00131] Process 1400 may continue at operation 1408, Determine the Best Coding Partitioning, Transform Block Sizes and Actual Transforms, where the best coding partitioning, transform block sizes and actual transforms may be determined. For example, various coding partitionings (for example, having various coding partitions) may be evaluated based on RDO or another basis to determine a selected coding partitioning (which may also include, when coding partitions do not match a transform block size as discussed, a further division of coding partitions into transform blocks). For example, the actual transform (or selected transform) may include any content adaptive transform or fixed transform performed on coding partition or block sizes as described herein. The selected coding partitioning may be encoded using coding partition indicators or coding partition codewords, as discussed herein, for encoding into the bitstream. Similarly, the chosen transforms may be coded using a codebook or indicators or the like and encoded into the bitstream.
[00132] Process 1400 may continue at operation 1409, Quantize and Scan the Transform Coefficients, where transform coefficients associated with the coding partitions (and/or transform blocks; for example, the transform coefficients generated by the selected transforms based on the selected coding partitioning) may be quantized and scanned in preparation for entropy coding.
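Operation 1409's quantize-and-scan step can be sketched as follows. A uniform quantizer and a JPEG-style zigzag order are assumptions for illustration; the codec's actual quantizer and scan patterns may differ.

```python
# Quantize transform coefficients with a uniform step, then serialize them
# in zigzag order so low-frequency coefficients come first for entropy coding.
def zigzag_order(n):
    cells = [(r, c) for r in range(n) for c in range(n)]
    # even anti-diagonals run up-right (sort by column), odd ones down-left (by row)
    return sorted(cells, key=lambda rc: (rc[0] + rc[1],
                                         rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

def quantize_and_scan(coeffs, qstep):
    n = len(coeffs)
    return [round(coeffs[r][c] / qstep) for r, c in zigzag_order(n)]
```

The quantizer step corresponds to the granularity signaled via Qp later in the bitstream.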
[00133] Process 1400 may continue at operation 1410, Reconstruct Pixel Data, Assemble into a Picture and Save in Reference Picture Buffers, where pixel data may be reconstructed, assembled into a picture and saved in reference picture buffers. For example, after a local decode loop (for example, including inverse scan, inverse transform, and assembling coding partitions), prediction error data partitions may be generated. The prediction error data partitions may be added with a prediction partition to generate reconstructed prediction partitions, which may be assembled into tiles, coding units or superfragments. The assembled tiles, coding units or superfragments may optionally be processed via deblock filtering and/or quality restoration filtering and assembled to generate a picture. The picture may be saved in decoded picture buffer 119 as a reference picture (as discussed) for the prediction of other (for example, following) pictures.
[00134] Process 1400 may continue at operation 1411, Entropy Encode the Data Associated with Each Tile, Coding Unit or Superfragment, where data associated with each tile, coding unit or superfragment may be entropy encoded. For example, data associated with each tile, coding unit or superfragment of each picture of each group of pictures of each video sequence may be entropy encoded. The entropy encoded data may include data (for example, inter- or intra-prediction data) associated with the prediction partitions (for example, prediction partition indicators or codewords), mode data, reference type data, prediction parameters, motion vectors, data defining the coding partitions (for example, coding partition indicators or codewords), block size data for the selected transforms, transform type data (indicating the selected transforms), a quantizer (Qp; indicating a quantization granularity), and quantized transform coefficients.
[00135] Process 1400 may continue at operation 1412, Generate Bitstream, where a bitstream may be generated based on the entropy encoded data. As discussed, in some examples, the bitstream may include a frame or picture reference (for example, data indicating a picture or frame type), data (for example, inter- or intra-prediction data) associated with the prediction partitions (for example, prediction partition indicators or codewords), mode data, reference type data, prediction parameters (for example, modification parameters or synthesis parameters associated with the reference types), motion vectors, data defining the coding partitions (for example, coding partition indicators or codewords), block size data for the selected transforms, transform type data (indicating the selected transforms), a quantizer (Qp; indicating a quantization granularity), and quantized transform coefficients associated with a prediction partition.
[00136] Process 1400 may continue at operation 1413, Transmit Bitstream, where the bitstream may be transmitted. For example, video coding system 2000 may transmit output bitstream 111, bitstream 1400 or the like via an antenna 2002 (see FIGURE 20).
[00137] Operations 1401 to 1413 may provide video encoding and bitstream transmission techniques, which may be employed by an encoder system as discussed herein. The following operations, operations 1414 to 1423, may provide video decoding and video display techniques, which may be employed by a decoder system as discussed herein.
[00138] Process 1400 may continue at operation 1414, Receive Bitstream, where the bitstream may be received. For example, input bitstream 201, bitstream 1400 or the like may be received via decoder 200. In some examples, the bitstream may include data associated with a coding partition, one or more indicators, data defining coding partition(s), prediction data, and/or data defining prediction partition(s), as discussed above. In some examples, the bitstream may include data (for example, inter- or intra-prediction data) associated with the prediction partitions, data defining the prediction partitions, and data associated with an individual prediction error data partition. In some examples, the bitstream may include a frame or picture reference (for example, data indicating a picture or frame type), data (for example, inter- or intra-prediction data) associated with the prediction partitions (for example, prediction partition indicators or codewords), mode data, reference type data, prediction parameters (for example, modification parameters or synthesis parameters associated with the reference types), motion vectors, data defining the coding partitions (for example, coding partition indicators or codewords), block size data for the selected transforms, transform type data (indicating the selected transforms), a quantizer (Qp; indicating a quantization granularity), and quantized transform coefficients associated with a prediction partition.
[00139] Process 1400 may continue at operation 1415, Decode Bitstream, where the received bitstream may be decoded via adaptive entropy decoder module 202, for example. For example, the received bitstream may be entropy decoded to determine the prediction partitioning, prediction parameters, selected coding partitioning, selected characteristics data, motion vector data, quantized transform coefficients, filter parameters, selection data (such as mode selection data), indicators, data (for example, inter- or intra-prediction data) associated with the prediction partitions, data defining the prediction partitions, and data associated with an individual prediction error data partition, or the like. In some examples, the bitstream may be decoded to determine a frame or picture reference (for example, data indicating a picture or frame type), data (for example, inter- or intra-prediction data) associated with the prediction partitions (for example, prediction partition indicators or codewords), mode data, reference type data, prediction parameters (for example, modification parameters or synthesis parameters associated with the reference types), motion vectors, data defining the coding partitions (for example, coding partition indicators or codewords), block size data for the selected transforms, transform type data (indicating the selected transforms), a quantizer (Qp; indicating a quantization granularity), and/or quantized transform coefficients associated with a prediction partition.
[00140] Process 1400 may continue at operation 1416, Perform Inverse Scan and Inverse Quantization on Each Block of Each Coding Partition, where an inverse scan and inverse quantization may be performed on each block of each coding partition for the prediction partition being processed. For example, the inverse scan and inverse quantization may be performed using adaptive inverse quantize module 203.
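On the decoder side, this step mirrors the encoder's quantize-and-scan. A hedged sketch follows, again assuming a uniform quantizer and a zigzag scan for illustration:

```python
# Inverse scan and inverse quantization: place transmitted levels back into
# a 2-D block following the scan order, then scale each by the quantizer step.
def zigzag_order(n):
    cells = [(r, c) for r in range(n) for c in range(n)]
    return sorted(cells, key=lambda rc: (rc[0] + rc[1],
                                         rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

def inverse_scan_dequantize(levels, n, qstep):
    block = [[0] * n for _ in range(n)]
    for (r, c), level in zip(zigzag_order(n), levels):
        block[r][c] = level * qstep
    return block
```

The result approximates the original transform coefficients up to quantization error.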
[00141] Process 1400 may continue at operation 1417, Perform Fixed or Content Adaptive Inverse Transform to Decode Transform Coefficients to Determine Decoded Prediction Error Data Partitions, where a fixed or content adaptive inverse transform may be performed to decode the transform coefficients to determine decoded prediction error data partitions. For example, the inverse transform may include a content adaptive inverse transform, such as a hybrid parametric Haar inverse transform, such that the hybrid parametric Haar inverse transform includes a parametric Haar inverse transform in a direction of the parametric transform direction and a discrete cosine inverse transform in a direction orthogonal to the parametric transform direction. In some examples, the fixed inverse transform may include a discrete cosine inverse transform or a discrete cosine inverse transform approximator. For example, the fixed or content adaptive inverse transform may be performed via adaptive inverse transform module 204. As discussed, the content adaptive inverse transform may be based on other previously decoded data, such as, for example, decoded neighboring partitions or blocks. In some examples, generating the decoded prediction error data partitions may include assembling decoded coding partitions via coding partitions assembler module 205.
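As one concrete instance of a fixed inverse transform named above, a 2-D inverse discrete cosine transform over an orthonormal DCT-II basis can be sketched in pure Python. This illustrates only the fixed-transform path, not the hybrid parametric Haar transform.

```python
import math

# 2-D inverse DCT (orthonormal DCT-II basis). O(n^4) per block, which is
# fine for a sketch but not for production transform code.
def idct2(coeffs):
    n = len(coeffs)
    def basis(k, i):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        return scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
    return [[sum(coeffs[u][v] * basis(u, r) * basis(v, c)
                 for u in range(n) for v in range(n))
             for c in range(n)] for r in range(n)]
```

A DC-only coefficient block, for example, inverts to a flat pixel block, which is why low-frequency coefficients dominate after quantization.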
[00142] Process 1400 may continue at operation 1418, Generate Predicted Partitions (Pixel Data) for Each Prediction Partition, where prediction pixel data (for example, a predicted partition) may be generated for each prediction partition. For example, the prediction pixel data may be generated using the selected prediction mode and reference type (for example, based on characteristics and motion, or intra-, or other types; and based on various decoded, modified and/or synthesized reference pictures as discussed) and the associated prediction parameters (for example, modification or synthesis parameters, if needed).
[00143] Process 1400 may continue at operation 1419, Add Each Decoded Prediction Error Partition to the Corresponding Predicted Partition to Generate a Reconstructed Prediction Partition, where each decoded prediction error partition (for example, including zero prediction error partitions) may be added to the corresponding predicted partition to generate a reconstructed prediction partition. For example, the predicted partitions may be generated via the decode loop illustrated in FIGURE 2 and added via adder 206 to the decoded prediction error partitions.
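The adder's role can be sketched directly. Clipping to the 8-bit pixel range is an assumption here; the bit depth is not specified at this point in the text.

```python
# Reconstructed prediction partition = predicted partition + decoded
# prediction error, clipped to the valid pixel range.
def reconstruct(predicted, decoded_error, max_val=255):
    return [[min(max(p + e, 0), max_val)
             for p, e in zip(prow, erow)]
            for prow, erow in zip(predicted, decoded_error)]
```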
[00144] Process 1400 may continue at operation 1420, Assemble Reconstructed Prediction Partitions to Generate Decoded Tiles, Coding Units or Superfragments, where the reconstructed prediction partitions may be assembled to generate decoded tiles, coding units or superfragments. For example, the reconstructed prediction partitions may be assembled to generate decoded tiles, coding units or superfragments via prediction partitions assembler module 207.
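Assembly of reconstructed prediction partitions back into a tile can be sketched as pasting each partition at its rectangle. The data layout below is hypothetical; the real assembler follows the partitioning signaled in the bitstream.

```python
# Paste reconstructed partitions, each given as ((x, y), rows-of-pixels),
# into a tile of the stated size.
def assemble_tile(partitions, tile_w, tile_h, fill=0):
    tile = [[fill] * tile_w for _ in range(tile_h)]
    for (x, y), pixels in partitions:
        for dy, row in enumerate(pixels):
            tile[y + dy][x:x + len(row)] = row
    return tile
```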
[00145] Process 1400 may continue at operation 1421, Apply Deblock Filtering and/or QR Filtering to Generate Final Decoded Tiles, Coding Units or Superfragments, where optional deblock filtering and/or quality restoration filtering may be applied to the decoded tiles, coding units or superfragments to generate final decoded tiles, coding units or superfragments. For example, the optional deblock filtering may be applied via deblock filtering module 208 and/or the optional quality restoration filtering may be applied via quality restoration filtering module 209.
[00146] Process 1400 may continue at operation 1422, Assemble Decoded Tiles, Coding Units or Superfragments to Generate a Decoded Video Picture and Save in Reference Picture Buffers, where the decoded (or final decoded) tiles, coding units or superfragments may be assembled to generate a decoded video picture, and the decoded video picture may be saved in reference picture buffers (for example, decoded picture buffer 210) for use in future prediction.
[00147] Process 1400 may continue at operation 1423, Transmit Decoded Video Frames for Presentation Via a Display Device, where the decoded video frames may be transmitted for presentation via a display device. For example, the decoded video pictures may be further processed via adaptive picture re-organizer 217 and content post-restorer module 218 and transmitted to a display device as display video frames 219 for presentation to a user. For example, the video frame(s) may be transmitted to a display device 2005 (as shown in FIGURE 20) for presentation.
[00148] Although implementation of the example processes herein may include the undertaking of all operations shown in the order illustrated, the present disclosure is not limited in this regard and, in various examples, implementation of the example processes herein may include the undertaking of only a subset of the operations shown and/or in an order other than that illustrated.
[00149] In addition, any one or more of the operations discussed herein may be undertaken in response to instructions provided by one or more computer program products. Such program products may include signal bearing media providing instructions that, when executed by, for example, a processor, may provide the functionality described herein. The computer program products may be provided in any form of one or more machine-readable media. Thus, for example, a processor including one or more processor cores may undertake one or more of the operations of the example processes herein in response to program code and/or instructions or instruction sets conveyed to the processor by one or more machine-readable media. In general, a machine-readable medium may convey software in the form of program code and/or instructions or instruction sets that may cause any of the devices and/or systems described herein to implement at least portions of the video systems as discussed herein.
[00150] As used in any implementation described herein, the term "module" refers to any combination of software logic, firmware logic and/or hardware logic configured to provide the functionality described herein. The software may be embodied as a software package, code and/or instruction set or instructions, and "hardware", as used in any implementation described herein, may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), and so forth. For example, a module may be embodied in logic circuitry for implementation via software, firmware or hardware of the coding systems discussed herein.
[00151] FIGURE 15 is an illustrative diagram of an example video coding system 1500, arranged in accordance with at least some implementations of the present description. In the illustrated implementation, video coding system 1500 may include imaging device(s) 1501, video encoder 100 and/or a video decoder implemented via logic circuitry 1450 of processing unit(s) 1520, an antenna 1502, one or more processor(s) 1503, one or more memory store(s) 1504, and/or a display device 1505.
[00152] As illustrated, imaging device(s) 1501, antenna 1502, processing unit(s) 1520, logic circuitry 1450, video encoder 100, video decoder 200, processor(s) 1503, memory store(s) 1504 and/or display device 1505 may be capable of communication with one another. As discussed, although illustrated with both video encoder 100 and video decoder 200, video coding system 1500 may include only video encoder 100 or only video decoder 200 in various examples.
[00153] As shown, in some examples, video coding system 1500 may include antenna 1502. Antenna 1502 may be configured to transmit or receive an encoded bitstream of video data, for example. Further, in some examples, video coding system 1500 may include display device 1505. Display device 1505 may be configured to present video data. As shown, in some examples, logic circuitry 1450 may be implemented via processing unit(s) 1520. Processing unit(s) 1520 may include application-specific integrated circuit (ASIC) logic, graphics processor(s), general purpose processor(s), or the like. Video coding system 1500 also may include optional processor(s) 1503, which may similarly include application-specific integrated circuit (ASIC) logic, graphics processor(s), general purpose processor(s), or the like. In some examples, logic circuitry 1450 may be implemented via dedicated video coding hardware or the like, and processor(s) 1503 may implement general purpose software, operating systems, or the like. In addition, memory store(s) 1504 may be any type of memory such as volatile memory (for example, Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), etc.) or non-volatile memory (for example, flash memory, etc.), and so forth. In a non-limiting example, memory store(s) 1504 may be implemented by cache memory. In some examples, logic circuitry 1450 may access memory store(s) 1504 (for implementation of a picture buffer, for example). In other examples, logic circuitry 1450 and/or processing unit(s) 1520 may include memory stores (e.g., cache or the like) for the implementation of a picture buffer or the like.
[00154] In some examples, video encoder 100 implemented via logic circuitry may include a picture buffer (for example, via either processing unit(s) 1520 or memory store(s) 1504) and a graphics processing unit (for example, via processing unit(s) 1520). The graphics processing unit may be communicatively coupled to the picture buffer. The graphics processing unit may include video encoder 100 as implemented via logic circuitry 1450 to embody the various modules as discussed with respect to FIGURE 1 and FIGURE 12. For example, the graphics processing unit may include prediction partitions generator logic circuitry, adaptive picture organizer logic circuitry, inter-prediction logic circuitry, motion compensation generation logic circuitry, differencing logic circuitry, coding partitions generator logic circuitry, adaptive transform logic circuitry, adaptive entropy encoder logic circuitry, and so on. The logic circuitry may be configured to perform the various operations as discussed in this document. For example, the prediction partitions generator logic circuitry may be configured to receive a video frame, segment the video frame into a plurality of tiles, coding units or superfragments, determine a chosen prediction partitioning technique for at least one tile, coding unit or superfragment such that the chosen prediction partitioning technique comprises at least one of a bi-tree partitioning technique or a k-d tree partitioning technique, and partition the at least one tile, coding unit or superfragment into a plurality of prediction partitions using the chosen partitioning technique. Video decoder 200 may be implemented in a similar manner.
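To illustrate the bi-tree prediction partitioning performed by the prediction partitions generator described above, the following is a minimal, non-normative Python sketch. The split criterion (a simple pixel-variance threshold) and the block sizes are illustrative assumptions; the encoder described in this document would instead evaluate candidate partitionings for coding efficiency.

```python
# Hedged sketch of bi-tree partitioning of a tile into prediction
# partitions: alternate horizontal and vertical halving cuts, stopping
# when a region is small or homogeneous. The variance threshold is an
# illustrative stand-in for the encoder's actual decision criterion.
import numpy as np

def bitree_partition(block, min_size=8, var_threshold=100.0):
    """Recursively split a 2-D pixel block by alternating horizontal and
    vertical cuts; return a list of (row, col, height, width) partitions."""
    partitions = []

    def split(r, c, h, w, horizontal):
        region = block[r:r + h, c:c + w]
        # Stop splitting when the region is small or already homogeneous.
        if min(h, w) <= min_size or region.var() < var_threshold:
            partitions.append((r, c, h, w))
            return
        if horizontal and h > min_size:
            split(r, c, h // 2, w, not horizontal)
            split(r + h // 2, c, h - h // 2, w, not horizontal)
        elif w > min_size:
            split(r, c, h, w // 2, not horizontal)
            split(r, c + w // 2, h, w - w // 2, not horizontal)
        else:
            partitions.append((r, c, h, w))

    split(0, 0, block.shape[0], block.shape[1], horizontal=True)
    return partitions

tile = np.zeros((64, 64))
tile[:32, :32] = 255  # a bright quadrant forces splits around its edges
parts = bitree_partition(tile)
```

In this toy run the homogeneous regions stop splitting early, so the bright quadrant, its horizontal neighbor, and the bottom half come out as three partitions that together cover every pixel exactly once.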
[00155] In some examples, antenna 1502 of video coding system 1500 may be configured to receive an encoded bitstream of video data. Video coding system 1500 may also include video decoder 200 coupled to antenna 1502 and configured to decode the encoded bitstream. For example, video decoder 200 may be configured to entropy decode the encoded bitstream to determine inter-prediction data associated with a first prediction partition, data defining the first prediction partition, intra-prediction data associated with a second prediction partition, and data defining the second prediction partition, such that the first prediction partition comprises a bi-tree partition and the second prediction partition comprises a k-d tree partition, to perform motion compensation for the first prediction partition based at least in part on the inter-prediction data, to perform intra-prediction for the second prediction partition based at least in part on the intra-prediction data, to generate a first decoded video frame based at least in part on the motion compensation and the intra-prediction, and to transmit the decoded video frame for presentation via a display device.
[00156] In embodiments, the features described in this document may be undertaken in response to instructions provided by one or more computer program products. Such program products may include signal bearing media providing instructions that, when executed by, for example, a processor, may provide the functionality described in this document. The computer program products may be provided in any form of one or more machine-readable media. Thus, for example, a processor including one or more processor core(s) may undertake one or more features described in this document in response to program code and/or instructions or instruction sets conveyed to the processor by one or more machine-readable media. In general, a machine-readable medium may convey software in the form of program code and/or instructions or instruction sets that may cause any of the devices and/or systems described in this document to implement at least portions of the features described in this document.
[00157] FIGURE 16 is an illustrative diagram of an example system 1600, arranged in accordance with at least some implementations of the present description. In various implementations, system 1600 may be a media system, although system 1600 is not limited to this context. For example, system 1600 may be incorporated into a personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (for example, smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, cameras (for example, point-and-shoot cameras, super-zoom cameras, digital single-lens reflex (DSLR) cameras), and so forth.
[00158] In various implementations, system 1600 includes a platform 1602 coupled to a display 1620. Platform 1602 may receive content from a content device such as content services device(s) 1630 or content delivery device(s) 1640 or other similar content sources. A navigation controller 1650 including one or more navigation features may be used to interact with, for example, platform 1602 and/or display 1620. Each of these components is described in greater detail below.
[00159] In various implementations, platform 1602 may include any combination of a chipset 1605, processor 1610, memory 1612, antenna 1613, storage 1614, graphics subsystem 1615, applications 1616 and/or radio 1618. Chipset 1605 may provide intercommunication among processor 1610, memory 1612, storage 1614, graphics subsystem 1615, applications 1616 and/or radio 1618. For example, chipset 1605 may include a storage adapter (not depicted) capable of providing intercommunication with storage 1614.
[00160] Processor 1610 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processor, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In various implementations, processor 1610 may be dual-core processor(s), dual-core mobile processor(s), and so forth.
[00161] Memory 1612 may be implemented as a volatile memory device such as, but not limited to, a Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or Static RAM (SRAM).
[00162] Storage 1614 may be implemented as a non-volatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device. In various implementations, storage 1614 may include technology to increase the storage performance enhanced protection for valuable digital media when multiple hard drives are included, for example.
[00163] Graphics subsystem 1615 may perform processing of images such as still or video for display. Graphics subsystem 1615 may be a graphics processing unit (GPU) or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple graphics subsystem 1615 and display 1620. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. Graphics subsystem 1615 may be integrated into processor 1610 or chipset 1605. In some implementations, graphics subsystem 1615 may be a stand-alone device communicatively coupled to chipset 1605.
[00164] The graphics and/or video processing techniques described in this document may be implemented in various hardware architectures. For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, a discrete graphics and/or video processor may be used. As still another implementation, the graphics and/or video functions may be provided by a general purpose processor, including a multi-core processor. In further embodiments, the functions may be implemented in a consumer electronics device.
[00165] Radio 1618 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques may involve communications across one or more wireless networks. Example wireless networks include (but are not limited to) wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area networks (WMANs), cellular networks, and satellite networks. In communicating across such networks, radio 1618 may operate in accordance with one or more applicable standards in any version.
[00166] In various implementations, display 1620 may include any television type monitor or display. Display 1620 may include, for example, a computer display screen, touch screen display, video monitor, television-like device, and/or a television. Display 1620 may be digital and/or analog. In various implementations, display 1620 may be a holographic display. Also, display 1620 may be a transparent surface that may receive a visual projection. Such projections may convey various forms of information, images, and/or objects. For example, such projections may be a visual overlay for a mobile augmented reality (MAR) application. Under the control of one or more software applications 1616, platform 1602 may display user interface 1622 on display 1620.
[00167] In various implementations, content services device(s) 1630 may be hosted by any national, international and/or independent service and thus accessible to platform 1602 via the Internet, for example. Content services device(s) 1630 may be coupled to platform 1602 and/or to display 1620. Platform 1602 and/or content services device(s) 1630 may be coupled to a network 1660 to communicate (for example, send and/or receive) media information to and from network 1660. Content delivery device(s) 1640 also may be coupled to platform 1602 and/or to display 1620.
[00168] In various implementations, content services device(s) 1630 may include a cable television box, personal computer, network, telephone, Internet enabled devices or appliance capable of delivering digital information and/or content, and any other similar device capable of uni-directionally or bi-directionally communicating content between content providers and platform 1602 and/or display 1620, via network 1660 or directly. It will be appreciated that the content may be communicated uni-directionally and/or bi-directionally to and from any one of the components in system 1600 and a content provider via network 1660. Examples of content may include any media information including, for example, video, music, medical and gaming information, and so forth.
[00169] Content services device(s) 1630 may receive content such as cable television programming including media information, digital information, and/or other content. Examples of content providers may include any cable or radio or Internet content providers. The provided examples are not meant to limit implementations in accordance with the present description in any way.
[00170] In various implementations, platform 1602 may receive control signals from navigation controller 1650 having one or more navigation features. The navigation features of controller 1650 may be used to interact with user interface 1622, for example. In various embodiments, navigation controller 1650 may be a pointing device that may be a computer hardware component (specifically, a human interface device) that allows a user to input spatial (for example, continuous and multi-dimensional) data into a computer. Many systems such as graphical user interfaces (GUI), and televisions and monitors allow the user to control and provide data to the computer or television using physical gestures.
[00171] Movements of the navigation features of controller 1650 may be replicated on a display (for example, display 1620) by movements of a pointer, cursor, focus ring, or other visual indicators displayed on the display. For example, under the control of software applications 1616, the navigation features located on navigation controller 1650 may be mapped to virtual navigation features displayed on user interface 1622, for example. In various embodiments, controller 1650 may not be a separate component but may be integrated into platform 1602 and/or display 1620. The present description, however, is not limited to the elements or in the context shown or described in this document.
[00172] In various implementations, drivers (not shown) may include technology to enable users to instantly turn platform 1602 on and off, like a television, with the touch of a button after initial boot-up, when enabled, for example. Program logic may allow platform 1602 to stream content to media adaptors or other content services device(s) 1630 or content delivery device(s) 1640 even when the platform is turned "off." In addition, chipset 1605 may include hardware and/or software support for 5.1 surround sound audio and/or high definition 7.1 surround sound audio, for example. Drivers may include a graphics driver for integrated graphics platforms. In various embodiments, the graphics driver may comprise a peripheral component interconnect (PCI) Express graphics card.
[00173] In various implementations, any one or more of the components shown in system 1600 may be integrated. For example, platform 1602 and content services device(s) 1630 may be integrated, or platform 1602 and content delivery device(s) 1640 may be integrated, or platform 1602, content services device(s) 1630, and content delivery device(s) 1640 may be integrated, for example. In various embodiments, platform 1602 and display 1620 may be an integrated unit. Display 1620 and content services device(s) 1630 may be integrated, or display 1620 and content delivery device(s) 1640 may be integrated, for example. These examples are not meant to limit the present description.
[00174] In various embodiments, system 1600 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 1600 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum and so forth. When implemented as a wired system, system 1600 may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and the like. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), data bus, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth.
[00175] Platform 1602 may establish one or more logical or physical channels to communicate information. The information may include media information and control information. Media information may refer to any data representing content meant for a user. Examples of content may include, for example, data from a voice conversation, videoconference, streaming video, electronic mail ("email") message, voice mail message, alphanumeric symbols, graphics, image, video, text, and so forth. Data from a voice conversation may be, for example, speech information, silence periods, background noise, comfort noise, tones, and so forth. Control information may refer to any data representing commands, instructions or control words meant for an automated system. For example, control information may be used to route media information through a system, or instruct a node to process the media information in a predetermined manner. The embodiments, however, are not limited to the elements or in the context shown or described in FIGURE 16.
[00176] As described above, system 1600 may be embodied in varying physical styles or form factors. FIGURE 17 illustrates implementations of a small form factor device 1700 in which system 1600 may be embodied. In various embodiments, for example, device 1700 may be implemented as a mobile computing device having wireless capabilities. A mobile computing device may refer to any device having a processing system and a mobile power source or supply, such as one or more batteries, for example.
[00177] As described above, examples of a mobile computing device may include a personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (for example, smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, cameras (for example, point-and-shoot cameras, super-zoom cameras, digital single-lens reflex (DSLR) cameras), and so forth.
[00178] Examples of a mobile computing device also may include computers that are arranged to be worn by a person, such as a wrist computer, finger computer, ring computer, eyeglass computer, belt buckle computer, arm-band computer, shoe computers, clothing computers, and other wearable computers. In various embodiments, for example, a mobile computing device may be implemented as a smart phone capable of executing computer applications, as well as voice communications and/or data communications. Although some embodiments may be described with a mobile computing device implemented as a smart phone by way of example, it may be appreciated that other embodiments may be implemented using other wireless mobile computing devices as well. The embodiments are not limited in this context.
[00179] As shown in FIGURE 17, device 1700 may include a housing 1702, a display 1704, an input/output (I/O) device 1706 that may include a user interface 1710, and an antenna 1708. Device 1700 also may include navigation features 1712. Display 1704 may include any suitable display unit for displaying information appropriate for a mobile computing device. I/O device 1706 may include any suitable I/O device for entering information into a mobile computing device. Examples for I/O device 1706 may include an alphanumeric keyboard, a numeric keypad, a touch pad, input keys, buttons, switches, rocker switches, microphones, speakers, voice recognition device and software, and so forth. Information also may be entered into device 1700 by way of a microphone (not shown). Such information may be digitized by a voice recognition device (not shown). The embodiments are not limited in this context.
[00180] FIGURES 18(A), 18(B), and 18(C) illustrate example prediction partitions and coding partitions for a video frame, arranged in accordance with at least some implementations of the present description. As discussed in this document, a video frame may be partitioned into prediction partitions (for example, using k-d tree or bi-tree partitioning) and further partitioned (in some examples) into coding partitions (for example, using bi-tree partitioning). For example, FIGURE 18(A) illustrates an example video frame 1810. Video frame 1810 may be any video frame as discussed in this document. In the example of FIGURE 18(A), video frame 1810 may be an I-picture and subjected to k-d tree prediction partitioning. For example, video frame 1810 may be divided or segmented into tiles 1820-1, 1820-2, 1820-3, 1820-4, and so on (other tiles are not labeled for clarity of presentation). As discussed, in other examples, video frame 1810 may be segmented into coding units or superfragments or the like.
[00181] In any event, video frame 1810 (or the tiles, superfragments or coding units thereof) may be partitioned into prediction partitions 1830 (not all of which are labeled for clarity of presentation). For example, as shown, prediction partitions 1830 may be k-d tree partitions for I-picture video frame 1810. As discussed, prediction partitions 1830 may represent one example partitioning of video frame 1810. Video frame 1810 may be partitioned into any number of partitionings, which may be evaluated, for example, for a best or most efficient partitioning, which may be coded using partition indicators or codewords or the like. The prediction partitions of the best partitioning may be used as a structure (or cuts) for coding as discussed in this document (for example, the generation of predicted partitions of pixel data, prediction error data partitions (for example, error signals), and further partitioning for coding (for example, coding partitions)).
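To illustrate how k-d tree partitioning differs from the even halving of bi-tree splits, the following non-normative Python sketch places each cut at one of several candidate positions chosen by image content. The candidate positions (1/4, 1/2, 3/4 of the block) and the variance-based cut selection are assumptions for illustration only, not the encoder's actual decision rule.

```python
# Hedged sketch of k-d tree partitioning: each level cuts the block at a
# content-dependent position instead of exactly in half. Cut selection
# here minimises the summed variance of the two resulting halves.
import numpy as np

def kdtree_partition(block, min_size=8, depth=0, max_depth=4):
    """Return (row, col, height, width) partitions of a 2-D pixel block."""
    h, w = block.shape
    if min(h, w) <= min_size or depth >= max_depth or block.var() < 1.0:
        return [(0, 0, h, w)]
    horizontal = depth % 2 == 0  # alternate cut direction per level
    length = h if horizontal else w
    # Candidate cut positions at 1/4, 1/2, 3/4 of the block.
    candidates = [length // 4, length // 2, 3 * length // 4]

    def cost(pos):
        if horizontal:
            return block[:pos].var() + block[pos:].var()
        return block[:, :pos].var() + block[:, pos:].var()

    cut = min(candidates, key=cost)
    if horizontal:
        first = kdtree_partition(block[:cut], min_size, depth + 1, max_depth)
        second = kdtree_partition(block[cut:], min_size, depth + 1, max_depth)
        return first + [(r + cut, c, hh, ww) for r, c, hh, ww in second]
    first = kdtree_partition(block[:, :cut], min_size, depth + 1, max_depth)
    second = kdtree_partition(block[:, cut:], min_size, depth + 1, max_depth)
    return first + [(r, c + cut, hh, ww) for r, c, hh, ww in second]

tile = np.zeros((64, 64))
tile[:16, :] = 200.0  # a horizontal edge at row 16 attracts the first cut
parts = kdtree_partition(tile)
```

Because the edge sits at the quarter position, the k-d cut lands exactly on it and two homogeneous partitions suffice, where an even bi-tree halving would have needed further splits.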
[00182] FIGURE 18(B) illustrates an example video frame 1850. Video frame 1850 may be any video frame as discussed in this document. In the example of FIGURE 18(B), video frame 1850 may be a P-picture or a B/F-picture and subjected to bi-tree prediction partitioning. For example, video frame 1850 may be divided or segmented into tiles 1860-1, 1860-2, 1860-3, and so on (other tiles are not labeled for clarity of presentation). As discussed, in other examples, video frame 1850 may be segmented into coding units or superfragments or the like. Video frame 1850 (or the tiles, superfragments or coding units thereof) may be partitioned into prediction partitions 1870 (not all of which are labeled for clarity of presentation). For example, as shown, prediction partitions 1870 may be bi-tree partitions for P-picture or B/F-picture video frame 1850. As discussed, prediction partitions 1870 may represent one example partitioning of video frame 1850. Video frame 1850 may be partitioned into any number of partitionings, which may be evaluated, for example, for a best or most efficient partitioning, which may be coded using partition indicators or codewords or the like as discussed.
[00183] FIGURE 18(C) illustrates an example tile 1860-3 of example video frame 1850. As shown, tile 1860-3 may be partitioned into prediction partitions 1870 (illustrated with black lines). As discussed, prediction partitions 1870 may be used for coding, such that predicted partitions may be generated in association with prediction partitions 1870, and prediction error data partitions may be generated (for example, the predicted partitions may be differenced with original pixel data to generate the prediction error data partitions), also in association with prediction partitions 1870. A determination may be made as to whether the prediction error data partitions are required to be encoded and, if so, the prediction error data partitions may be further partitioned into coding partitions (for transform coding, quantization of transform coefficients, and incorporation of the quantized transform coefficients into a bitstream). As discussed, such partitioning into coding partitions may be performed using a bi-tree partitioning technique. For example, example coding partitions 1880-1, 1880-2, and so on (illustrated with white lines; not all coding partitions are labeled for clarity of presentation) may be generated based on prediction partitions 1870 (and the prediction error data partitions associated with prediction partitions 1870).
[00184] As discussed, coding partitions 1880-1, 1880-2, and so on may represent one example coding partitioning. In some examples, several coding partitionings (and many combinations of transform types and sizes) may be evaluated to determine a best or most efficient coding partitioning (and the associated transform types and sizes). The coding partitions associated with the selected coding partitioning may be coded using coding partition indicators or codewords or the like and encoded into a bitstream. Further, the coding partitions may be used to transform code the prediction error data partitions to generate transform coefficients, which may be quantized and entropy encoded into the bitstream for use at a decoder, for example.
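The evaluation of several candidate coding partitionings described above can be sketched with a simple rate-distortion style cost. The cost model below (squared error after a crude DC-only transform model, plus a rate proxy proportional to the number of partitions signalled) is an illustrative assumption, not the actual encoder criterion.

```python
# Hedged sketch of selecting a "best" coding partitioning for a
# prediction error data partition by comparing candidate partitionings
# under a toy rate-distortion cost.
import numpy as np

def partition_cost(error_block, partitions, lam=50.0):
    """Cost of coding an error block under a given partitioning:
    distortion approximated by the SSE left after keeping only each
    region's mean (a crude DC-only transform model), plus lam times the
    number of partitions as a rate proxy."""
    distortion = 0.0
    for (r, c, h, w) in partitions:
        region = error_block[r:r + h, c:c + w]
        distortion += float(((region - region.mean()) ** 2).sum())
    return distortion + lam * len(partitions)

def choose_partitioning(error_block, candidates):
    """Return the candidate partitioning with the lowest cost."""
    return min(candidates, key=lambda p: partition_cost(error_block, p))

err = np.zeros((16, 16))
err[:, 8:] = 10.0  # structured residual: the right half differs
whole = [(0, 0, 16, 16)]
halves = [(0, 0, 16, 8), (0, 8, 16, 8)]
best = choose_partitioning(err, [whole, halves])
```

Here splitting captures the structure of the residual, so the two-partition candidate wins despite its higher signalling cost.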
[00185] Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (for example, transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints.
[00186] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described in this document. Such representations, known as "IP cores", may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
[00187] While certain features set forth in this document have been described with reference to various implementations, this description is not intended to be construed in a limiting sense. Hence, various modifications of the implementations described in this document, as well as other implementations, which are apparent to persons skilled in the art to which the present description pertains, are deemed to lie within the spirit and scope of the present description.
[00188] The following examples pertain to further embodiments.
[00189] In one example, a computer-implemented method for partitioning in video coding may include receiving a video frame, segmenting the video frame into a plurality of tiles, coding units or superfragments, determining a chosen prediction partitioning technique for at least one tile, coding unit or superfragment for prediction or coding partitioning such that the chosen prediction partitioning technique may include at least one of a bi-tree partitioning technique, a k-d tree partitioning technique, a codebook representation of a bi-tree partitioning technique, or a codebook representation of a k-d tree partitioning technique, partitioning the at least one tile, coding unit or superfragment into a plurality of prediction partitions using the chosen partitioning technique, and coding partitioning indicators or codewords associated with the plurality of prediction partitions into a bitstream.
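The coding of partitioning indicators or codewords mentioned in the method above can be sketched as follows. The codebook contents, the escape-code scheme, and the rectangle representation are hypothetical, chosen only to illustrate signalling a partitioning either by a short codebook index or explicitly.

```python
# Hedged sketch of signalling a chosen partitioning via codebook indices:
# frequent partitionings get a short codeword; anything else is sent with
# an escape code plus the explicit partition rectangles.

# Hypothetical codebook of frequent partitionings of a 32 x 32 block,
# keyed by tuples of (row, col, height, width) rectangles.
CODEBOOK = {
    ((0, 0, 32, 32),): 0,                            # no split
    ((0, 0, 16, 32), (16, 0, 16, 32)): 1,            # horizontal halves
    ((0, 0, 32, 16), (0, 16, 32, 16)): 2,            # vertical halves
}
ESCAPE = len(CODEBOOK)  # signals an explicitly coded partitioning

def encode_partitioning(partitions):
    """Return (codeword, payload): a codebook index with no payload when
    the partitioning is in the codebook, else the escape code plus the
    explicit rectangles."""
    key = tuple(partitions)
    if key in CODEBOOK:
        return CODEBOOK[key], None
    return ESCAPE, list(partitions)

def decode_partitioning(codeword, payload):
    """Invert encode_partitioning."""
    if codeword == ESCAPE:
        return payload
    inverse = {v: list(k) for k, v in CODEBOOK.items()}
    return inverse[codeword]

parts = [(0, 0, 16, 32), (16, 0, 16, 32)]
cw, payload = encode_partitioning(parts)
```

A codebook hit costs only the index; an irregular partitioning falls back to explicit signalling, mirroring the trade-off between codebook representations and directly coded partition indicators in the method above.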
[00190] In another example, a computer-implemented method for partitioning in video coding may further include segmenting the video frame into two or more region layers such that the two or more region layers comprise a precision of at least one of 4 pixels, 8 pixels or 16 pixels, such that the two or more region layers comprise region boundaries, such that segmenting the video frame into the plurality of tiles, coding units or superfragments may include segmenting the video frame into the plurality of superfragments, such that segmenting the video frame may include a symbol-run coding using 16 x 16 pixel blocks, and such that at least one superfragment may include an individual region layer of the two or more region layers, coding the region boundaries such that coding the region boundaries may include at least one of a symbol-run coding or generating a codebook that approximates the region boundaries on a tile grid such that the tile grid may include an equally spaced tile grid of a size of at least one of 32 x 32 pixels or 64 x 64 pixels, indexing the plurality of prediction partitions with prediction partition index values, transmitting the plurality of prediction partitions and the prediction partition index values to an encode controller, generating inter-prediction data associated with a first individual prediction partition of the plurality of prediction partitions such that the inter-prediction data may include motion vector data, generating intra-prediction data associated with a second individual prediction partition of a second plurality of prediction partitions of a second tile, coding unit or superfragment of a second video frame, differencing a plurality of predicted partitions associated with the plurality of prediction partitions with corresponding original pixel data to generate a corresponding plurality of prediction error data partitions such that a first predicted partition of the plurality of predicted partitions is predicted based at least in part on a reference frame comprising at least one of an immediately previous reference frame, an earlier previous reference frame, a future reference frame, a modified reference frame, or a synthesized reference frame, generating a reference type indicator for the first predicted partition based on the reference frame, generating a prediction mode indicator based on a prediction mode of the first predicted partition, wherein the prediction mode is selected from at least one of inter, multi, intra, skip, auto, or split, encoding the reference type indicator and the prediction mode into the bitstream, determining whether an individual prediction error data partition of the plurality of prediction error data partitions is required to be encoded, for the individual prediction error data partition that is required to be encoded: partitioning the prediction error data partition into a plurality of coding partitions such that partitioning the prediction error data partition comprises a bi-tree partitioning, indexing the plurality of coding partitions with coding partition index values, transmitting the coding partitions and the coding partition index values to the encode controller, and performing a forward transform and quantization on the coding partitions of the individual prediction error data partition to generate data associated with the individual prediction error data partition, entropy encoding the inter-prediction data associated with the first individual prediction partition, data defining the first individual prediction partition, the intra-prediction data associated with the second individual prediction partition, data defining the second individual prediction partition, and the data associated with the individual prediction error data partition into a bitstream, transmitting the bitstream, receiving the bitstream, entropy decoding the bitstream to determine the inter-prediction data associated with the first individual prediction partition, the data defining the first individual prediction partition, the intra-prediction data associated with the second individual prediction partition, the data defining the second individual prediction partition, and the data associated with the individual prediction error data partition such that the first individual prediction partition comprises a bi-tree partition and the second individual prediction partition comprises a k-d tree partition, performing an inverse quantization and inverse transform based at least in part on the data associated with the individual prediction error data partition to generate decoded coding partitions, combining the decoded coding partitions to generate a decoded prediction error data partition, adding a predicted partition to the decoded prediction error data partition to generate a first reconstructed partition, assembling the first reconstructed partition and a second reconstructed partition to generate at least one of a first tile, a first coding unit or a first superfragment, applying at least one of a deblock filtering or a quality restoration filtering to the first tile, first coding unit or first superfragment to generate a first final decoded tile, coding unit or superfragment, assembling the first final decoded tile, coding unit or superfragment with a second final decoded tile, coding unit or superfragment to generate a first decoded video frame, performing motion compensation to generate a second decoded individual prediction partition based at least in part on the inter-prediction data, performing intra-prediction for a third decoded individual prediction partition of the second plurality of partitions based at least in part on the intra-prediction data, generating a second decoded video frame based at least in part on the motion compensation, generating a third frame
101/108 dro video decoded based, at least in part, on the intraprevision and transmit the first, second, and third video frames decoded for presentation through a display device. Segmentation of the video frame in the plurality of tiles or encoding unit or superfragments may include segmenting the video frame in the plurality of tiles. Determining the chosen partitioning technique may include determining the chosen partitioning technique based, at least in part, on a type of photo in the video frame. The type of photo can include at least one of a photo-l (intra-photo), a photo-P (forecast photo) or a photoF / B (functional / bidirectional photo). The type of photo can include the photo-l and the prediction partitioning technique chosen can include the k-d tree partitioning technique. The type of photo can include the P-photo and the prediction partitioning technique chosen can include the binary tree partitioning technique. The type of photo can include the F-B photo and the prediction partitioning technique chosen can include the binary tree partitioning technique. Determining the chosen partitioning technique may include determining the chosen partitioning technique based, at least in part, on a feature of the at least one tile, coding unit or superfragment, so that the feature comprises an expected amount of intrablocks in the at least least one tile, coding unit or superfragment and so that the chosen partitioning technique can include the kd tree partitioning technique when the expected amount of intrablocks is greater than a threshold and so that the chosen partitioning technique can include the binary tree partitioning technique when the expected amount of intrablocks is less than a threshold. Partitioning at least one tile, coding unit or superfragment can include a partitioning constriction so that the cons
102/108 partitioning restriction may include predefining a first partition as forming half of at least one frame portion in a first dimension and presetting a second partition as forming half of at least one frame portion in a second dimension and so that the first dimension can include a vertical dimension and the second dimension can include a horizontal dimension.
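The two structured partitioning techniques named above can be contrasted in a minimal sketch. This is an illustrative toy, not the patent's actual content-adaptive partitioner: a real encoder decides whether and where to split from content analysis or rate-distortion cost, while this sketch splits exhaustively down to a minimum size. All names (`Rect`, `binary_tree_partition`, `kd_split`) and the fixed split rules are assumptions for illustration.

```python
from typing import List, Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height)

def binary_tree_partition(rect: Rect, min_size: int = 8) -> List[Rect]:
    """Binary-tree partitioning: recursively halve a block along its
    longer dimension until both dimensions reach min_size."""
    x, y, w, h = rect
    if w <= min_size and h <= min_size:
        return [rect]
    if w >= h:  # midpoint cut of the longer dimension
        left = (x, y, w // 2, h)
        right = (x + w // 2, y, w - w // 2, h)
    else:
        left = (x, y, w, h // 2)
        right = (x, y + h // 2, w, h - h // 2)
    return (binary_tree_partition(left, min_size)
            + binary_tree_partition(right, min_size))

def kd_split(rect: Rect, axis: str, frac: float) -> Tuple[Rect, Rect]:
    """One k-d tree cut: unlike the binary tree, the split point may be
    off-center (e.g. frac = 0.25 or 0.75), yielding finer partition
    shapes, which is why I-pictures favor the k-d tree above."""
    x, y, w, h = rect
    if axis == "x":
        c = int(w * frac)
        return (x, y, c, h), (x + c, y, w - c, h)
    c = int(h * frac)
    return (x, y, w, c), (x, y + c, w, h - c)
```

The binary tree can only reproduce midpoint cuts; the k-d tree subsumes those cuts and adds off-center ones, which is also why a codebook representation (an index into a table of frequent partition shapes) is mentioned as a compact way to signal either.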
[00191] In other examples, a video encoder may include a picture buffer and a graphics processing unit having prediction partition generator logic circuitry. The graphics processing unit may be communicatively coupled to the picture buffer, and the prediction partition generator logic circuitry may be configured to receive a video frame; segment the video frame into a plurality of tiles, coding units, or superfragments; determine a chosen prediction partitioning technique for at least one tile, coding unit, or superfragment, so that the chosen prediction partitioning technique may include at least one of a binary tree partitioning technique or a k-d tree partitioning technique; and partition the at least one tile, coding unit, or superfragment into a plurality of prediction partitions using the chosen partitioning technique.
[00192] In a further exemplary video encoder, the graphics processing unit may include inter-prediction logic circuitry configured to generate inter-prediction data associated with a first individual prediction partition of the plurality of prediction partitions so that the inter-prediction data comprise motion vector data; intra-prediction logic circuitry configured to generate intra-prediction data associated with a second individual prediction partition of a second plurality of prediction partitions of a second tile, coding unit, or superfragment of a second video frame; differencing logic circuitry configured to difference a plurality of predicted partitions associated with the plurality of prediction partitions with corresponding original pixel data to generate a corresponding plurality of prediction error data partitions, so that a first predicted partition of the plurality of predicted partitions is predicted based, at least in part, on a reference frame comprising an immediately previous reference frame, an earlier reference frame, a future reference frame, a modified reference frame, or a synthesized reference frame; coding partition generator logic circuitry configured to determine that an individual prediction error data partition of the plurality of prediction error data partitions must be coded and, for the individual prediction error data partition that must be coded, partition the prediction error data partition into a plurality of coding partitions so that the partitioning of the prediction error data partition comprises binary tree partitioning, index the plurality of coding partitions with coding partition index values, and transmit the coding partitions and the coding partition index values to the encoding controller; adaptive transform logic circuitry and adaptive quantization logic circuitry configured to perform a forward transform and quantization on the coding partitions of the individual prediction error data partition to generate data associated with the individual prediction error data partition; and adaptive entropy encoder logic circuitry configured to entropy encode the inter-prediction data associated with the first individual prediction partition, data defining the first individual prediction partition, the intra-prediction data associated with the second individual prediction partition, data defining the second individual prediction partition, and the data associated with the individual prediction error data partition into a bitstream and transmit the bitstream. The prediction partition generator logic circuitry may be additionally configured to segment the video frame into two or more region layers, so that the two or more region layers may include a precision of at least one of 4 pixels, 8 pixels, or 16 pixels, so that the two or more region layers may include region boundaries, so that segmenting the video frame into the plurality of tiles, coding units, or superfragments may include the prediction partition generator logic circuitry being additionally configured to segment the video frame into the plurality of superfragments, so that segmenting the video frame may include the prediction partition generator logic circuitry being additionally configured to segment the video frame via symbol-run coding using 16 x 16 pixel blocks, and so that at least one superfragment may include an individual region layer of the two or more region layers; code the region boundaries, so that coding the region boundaries may include at least one of symbol-run coding or generation of a codebook that approximates the region boundaries on a tile grid, so that the tile grid may be an equally spaced tile grid having a size of at least one of 32 x 32 pixels or 64 x 64 pixels; index the plurality of prediction partitions with prediction partition index values; and transmit the plurality of prediction partitions and the prediction partition index values to an encoding controller. Segmenting the video frame into the plurality of tiles, coding units, or superfragments may include the prediction partition generator logic circuitry being configured to segment the video frame into the plurality of tiles. Determining the chosen partitioning technique may include the prediction partition generator logic circuitry being configured to determine the chosen partitioning technique based, at least in part, on a picture type of the video frame. The picture type may include at least one of an I-picture (intra picture), a P-picture (predictive picture), or an F/B-picture (functional/bidirectional picture). The picture type may include the I-picture, and the chosen prediction partitioning technique may include the k-d tree partitioning technique. The picture type may include the P-picture, and the chosen prediction partitioning technique may include the binary tree partitioning technique. The picture type may include the F/B-picture, and the chosen prediction partitioning technique may include the binary tree partitioning technique. Determining the chosen partitioning technique may include the prediction partition generator logic circuitry being configured to determine the chosen partitioning technique based, at least in part, on a characteristic of the at least one tile, coding unit, or superfragment, so that the characteristic may include an expected amount of intra blocks in the at least one tile, coding unit, or superfragment, so that the chosen partitioning technique may include the k-d tree partitioning technique when the expected amount of intra blocks is greater than a threshold, and so that the chosen partitioning technique may include the binary tree partitioning technique when the expected amount of intra blocks is less than the threshold. Partitioning the at least one tile, coding unit, or superfragment may include the prediction partition generator logic circuitry being configured to apply a partitioning constraint, so that the partitioning constraint may include predefining a first partition as halving the at least one frame portion in a first dimension and predefining a second partition as halving the at least one frame portion in a second dimension, and so that the first dimension may include a vertical dimension and the second dimension may include a horizontal dimension.
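The technique-selection rules described above (I-pictures use the k-d tree; P- and F/B-pictures use the binary tree, except when many intra blocks are expected) can be summarized in a short sketch. The function name, string labels, and the default threshold value are hypothetical; a real encoder would derive the expected intra-block count from content pre-analysis.

```python
def choose_partitioning(picture_type: str,
                        expected_intra_blocks: int = 0,
                        intra_threshold: int = 16) -> str:
    """Pick a prediction partitioning technique per the rules above.
    picture_type: "I", "P", or "F/B"; threshold value is illustrative."""
    if picture_type == "I":
        return "kd-tree"          # intra pictures always use the k-d tree
    if picture_type in ("P", "F/B"):
        if expected_intra_blocks > intra_threshold:
            return "kd-tree"      # heavy intra content: finer k-d shapes
        return "binary-tree"      # default for predictive pictures
    raise ValueError(f"unknown picture type: {picture_type!r}")
```

The design intuition is that intra-coded content benefits from the finer, off-center partition shapes of the k-d tree, while inter-predicted content is served by the cheaper-to-signal binary tree.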
[00193] In yet another example, a system may include a video decoder configured to decode an encoded bitstream. The video decoder may be configured to entropy decode the encoded bitstream to determine inter-prediction data associated with a first prediction partition, data defining the first prediction partition, intra-prediction data associated with a second prediction partition, and data defining the second prediction partition, so that the first prediction partition may include a binary tree partition and the second prediction partition comprises a k-d tree partition; perform motion compensation for the first prediction partition based, at least in part, on the inter-prediction data; perform intra-prediction for the second individual partition based, at least in part, on the intra-prediction data; generate a first decoded video frame based, at least in part, on the motion compensation; generate a second decoded video frame based, at least in part, on the intra-prediction; and transmit the first and second decoded video frames for presentation via a display device.
[00194] In a further exemplary system, the system may also include an antenna communicatively coupled to the video decoder and configured to receive the encoded bitstream of video data, and a display device configured to present video frames. The video decoder may be further configured to entropy decode the bitstream to determine data associated with an individual prediction error data partition so that the individual prediction error data partition comprises binary tree coding partitions; perform inverse quantization and an inverse transform based, at least in part, on the data associated with the individual prediction error data partition to generate decoded coding partitions; combine the decoded coding partitions to generate a decoded prediction error data partition; add a prediction partition to the decoded prediction error data partition to generate a first reconstructed partition; assemble the first reconstructed partition and a second reconstructed partition to generate at least one of a first tile, a first coding unit, or a first superfragment; apply at least one of deblocking filtering or quality restoration filtering to the first tile, coding unit, or superfragment to generate a first final decoded tile, coding unit, or superfragment; assemble the first final decoded tile, coding unit, or superfragment with a second final decoded tile, coding unit, or superfragment to generate a third decoded video frame; and transmit the third decoded video frame for presentation via the display device. The first video frame may include a picture type comprising at least one of a P-picture or an F/B-picture. The second video frame may include a picture type comprising an I-picture. The third video frame may include a picture type comprising at least one of a P-picture or an F/B-picture.
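The decoder-side reconstruction step recited above (adding a prediction partition to the decoded prediction error data partition) amounts to a clipped per-pixel sum. A minimal sketch follows, with an assumed 8-bit sample depth and plain nested lists standing in for real inverse-transform output; the function name is illustrative.

```python
def reconstruct_partition(predicted, decoded_error, bit_depth=8):
    """reconstructed = clip(prediction + decoded prediction error).
    `predicted` and `decoded_error` are same-shaped 2-D sample grids;
    results are clamped to the legal sample range for the bit depth."""
    hi = (1 << bit_depth) - 1  # 255 for 8-bit samples
    return [[min(max(p + e, 0), hi)
             for p, e in zip(pred_row, err_row)]
            for pred_row, err_row in zip(predicted, decoded_error)]
```

Deblocking and quality restoration filtering would then run over the assembled tiles, not over individual partitions, which is why the assembly step precedes filtering in the sequence above.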
[00195] In a further example, at least one machine-readable medium may include a plurality of instructions that, in response to being executed on a computing device, cause the computing device to perform the method according to any one of the examples above.
[00196] In a still further example, an apparatus may include means for performing the methods according to any one of the examples above.
[00197] The examples above may include specific combinations of features. However, the examples above are not limited in this regard and, in various implementations, the examples above may include performing only a subset of such features, performing such features in a different order, performing a different combination of such features, and/or performing additional features beyond those explicitly listed. For example, all features described with respect to the exemplary methods may be implemented with respect to the exemplary apparatus, the exemplary systems, and/or the exemplary articles, and vice versa.
Claims (25)
[1]
1. Computer-implemented method for partitioning in video coding, characterized by the fact that it comprises:
receiving a video frame;
segmenting the video frame into a plurality of tiles, coding units, or superfragments;
determining a chosen partitioning technique for at least one tile, coding unit, or superfragment for prediction or coding partitioning, wherein the chosen partitioning technique comprises a structured partitioning technique comprising at least one of a binary tree partitioning technique, a k-d tree partitioning technique, a codebook representation of a binary tree partitioning technique, or a codebook representation of a k-d tree partitioning technique;
partitioning the at least one tile, coding unit, or superfragment into a plurality of prediction partitions using the chosen partitioning technique; and encoding partitioning indicators or codewords associated with the plurality of prediction partitions into a bitstream.
[2]
2. Method, according to claim 1, characterized by the fact that it further comprises:
segmenting the video frame into two or more region layers, wherein segmenting the video frame into the plurality of tiles, coding units, or superfragments comprises segmenting the video frame into the plurality of superfragments, and wherein the at least one superfragment comprises an individual region layer of the two or more region layers.
[3]
3. Method, according to claim 1, characterized by the fact that segmenting the video frame into the plurality of tiles, coding units, or superfragments comprises segmenting the video frame into the plurality of tiles.
[4]
4. Method, according to claim 1, characterized by the fact that it further comprises:
differencing a plurality of predicted partitions associated with the plurality of prediction partitions with corresponding original pixel data to generate a corresponding plurality of prediction error data partitions;
determining an individual prediction error data partition of the plurality of prediction error data partitions that must be coded; and for the individual prediction error data partition that must be coded:
partitioning the prediction error data partition into a plurality of coding partitions, wherein the partitioning of the prediction error data partition comprises binary tree partitioning.
[5]
5. Method, according to claim 1, characterized by the fact that it further comprises:
generating inter-prediction data associated with a first individual prediction partition of the plurality of prediction partitions;
generating intra-prediction data associated with a second individual prediction partition of a second plurality of prediction partitions of a second tile, coding unit, or superfragment of a second video frame;
differencing the plurality of prediction partitions with corresponding original pixel data to generate a corresponding plurality of prediction error data partitions;
determining an individual prediction error data partition of the plurality of prediction error data partitions that must be coded;
for the individual prediction error data partition that must be coded:
partitioning the prediction error data partition into a plurality of coding partitions, wherein the partitioning of the prediction error data partition comprises binary tree partitioning;
indexing the plurality of coding partitions with coding partition index values;
transmitting the coding partitions and the coding partition index values to an encoding controller; and performing a forward transform and quantization on the coding partitions of the individual prediction error data partition to generate data associated with the individual prediction error data partition; and
entropy encoding the inter-prediction data associated with the first individual prediction partition, data defining the first individual prediction partition, the intra-prediction data associated with the second individual prediction partition, data defining the second individual prediction partition, and the data associated with the individual prediction error data partition into a bitstream.
[6]
6. Method, according to claim 1, characterized by the fact that determining the chosen partitioning technique comprises determining the chosen partitioning technique based, at least in part, on a picture type of the video frame, wherein the picture type comprises an I-picture, and wherein the chosen prediction partitioning technique comprises the k-d tree partitioning technique.
[7]
7. Method, according to claim 1, characterized by the fact that determining the chosen partitioning technique comprises determining the chosen partitioning technique based, at least in part, on a picture type of the video frame, wherein the picture type comprises a P-picture, and wherein the chosen prediction partitioning technique comprises the binary tree partitioning technique.
[8]
8. Method, according to claim 1, characterized by the fact that determining the chosen partitioning technique comprises determining the chosen partitioning technique based, at least in part, on a picture type of the video frame, wherein the picture type comprises an F/B-picture, and wherein the chosen prediction partitioning technique comprises the binary tree partitioning technique.
[9]
9. Method, according to claim 1, characterized by the fact that determining the chosen partitioning technique comprises determining the chosen partitioning technique based, at least in part, on a characteristic of the at least one tile, coding unit, or superfragment, wherein the characteristic comprises an expected amount of intra blocks in the at least one tile, coding unit, or superfragment, wherein the chosen partitioning technique comprises the k-d tree partitioning technique when the expected amount of intra blocks is greater than a threshold, and wherein the chosen partitioning technique comprises the binary tree partitioning technique when the expected amount of intra blocks is less than the threshold.
[10]
10. Method, according to claim 1, characterized by the fact that partitioning the at least one tile, coding unit, or superfragment comprises a partitioning constraint.
[11]
11. Method, according to claim 1, characterized by the fact that it further comprises:
segmenting the video frame into two or more region layers, wherein the two or more region layers comprise a precision of at least one of 4 pixels, 8 pixels, or 16 pixels, wherein the two or more region layers comprise region boundaries, wherein segmenting the video frame into the plurality of tiles, coding units, or superfragments comprises segmenting the video frame into the plurality of superfragments, wherein segmenting the video frame comprises symbol-run coding using 16 x 16 pixel blocks, and wherein the at least one superfragment comprises an individual region layer of the two or more region layers;
coding the region boundaries, wherein coding the region boundaries comprises at least one of symbol-run coding or generation of a codebook that approximates the region boundaries on a tile grid, wherein the tile grid is an equally spaced tile grid having a size of at least one of 32 x 32 pixels or 64 x 64 pixels;
indexing the plurality of prediction partitions with prediction partition index values;
transmitting the plurality of prediction partitions and the prediction partition index values to an encoding controller;
generating inter-prediction data associated with a first individual prediction partition of the plurality of prediction partitions, wherein the inter-prediction data comprise motion vector data;
generating intra-prediction data associated with a second individual prediction partition of a second plurality of prediction partitions of a second tile, coding unit, or superfragment of a second video frame;
differencing a plurality of predicted partitions associated with the plurality of prediction partitions with corresponding original pixel data to generate a corresponding plurality of prediction error data partitions, wherein a first predicted partition of the plurality of predicted partitions is predicted based, at least in part, on a reference frame comprising an immediately previous reference frame, an earlier reference frame, a future reference frame, a modified reference frame, or a synthesized reference frame;
generating a reference type indicator for the first predicted partition based on the reference frame;
generating a prediction mode indicator based on a prediction mode of the first predicted partition, wherein the prediction mode is selected from at least one of inter, multi, intra, skip, auto, or split;
encoding the reference type indicator and the prediction mode indicator into the bitstream;
determining an individual prediction error data partition of the plurality of prediction error data partitions that must be coded;
for the individual prediction error data partition that must be coded:
partitioning the prediction error data partition into a plurality of coding partitions, wherein the partitioning of the prediction error data partition comprises binary tree partitioning;
indexing the plurality of coding partitions with coding partition index values;
transmitting the coding partitions and the coding partition index values to the encoding controller; and performing a forward transform and quantization on the coding partitions of the individual prediction error data partition to generate data associated with the individual prediction error data partition;
entropy encoding the inter-prediction data associated with the first individual prediction partition, data defining the first individual prediction partition, the intra-prediction data associated with the second individual prediction partition, data defining the second individual prediction partition, and the data associated with the individual prediction error data partition into a bitstream;
transmitting the bitstream;
receiving the bitstream;
entropy decoding the bitstream to determine the inter-prediction data associated with the first individual prediction partition, the data defining the first individual prediction partition, the intra-prediction data associated with the second individual prediction partition, the data defining the second individual prediction partition, and the data associated with the individual prediction error data partition, wherein the first individual prediction partition comprises a binary tree partition and the second individual prediction partition comprises a k-d tree partition;
performing inverse quantization and an inverse transform based, at least in part, on the data associated with the individual prediction error data partition to generate decoded coding partitions;
combining the decoded coding partitions to generate a decoded prediction error data partition;
adding a prediction partition to the decoded prediction error data partition to generate a first reconstructed partition;
assembling the first reconstructed partition and a second reconstructed partition to generate at least one of a first tile, a first coding unit, or a first superfragment;
applying at least one of deblocking filtering or quality restoration filtering to the first tile, the first coding unit, or the first superfragment to generate a first final decoded tile, coding unit, or superfragment;
assembling the first final decoded tile, coding unit, or superfragment with a second final decoded tile, coding unit, or superfragment to generate a first decoded video frame;
performing motion compensation to generate a second decoded individual prediction partition based, at least in part, on the inter-prediction data;
performing intra-prediction for a third decoded individual prediction partition of the second plurality of partitions based, at least in part, on the intra-prediction data;
generating a second decoded video frame based, at least in part, on the motion compensation;
generating a third decoded video frame based, at least in part, on the intra-prediction; and transmitting the first, second, and third decoded video frames for presentation via a display device,
wherein segmenting the video frame into the plurality of tiles, coding units, or superfragments comprises segmenting the video frame into the plurality of tiles,
wherein determining the chosen partitioning technique comprises determining the chosen partitioning technique based, at least in part, on a picture type of the video frame, wherein the picture type comprises at least one of an I-picture (intra picture), a P-picture (predictive picture), or an F/B-picture (functional/bidirectional picture), wherein the picture type comprises the I-picture and the chosen prediction partitioning technique comprises the k-d tree partitioning technique, wherein the picture type comprises the P-picture and the chosen prediction partitioning technique comprises the binary tree partitioning technique, wherein the picture type comprises the F/B-picture and the chosen prediction partitioning technique comprises the binary tree partitioning technique,
wherein determining the chosen partitioning technique comprises determining the chosen partitioning technique based, at least in part, on a characteristic of the at least one tile, coding unit, or superfragment, wherein the characteristic comprises an expected amount of intra blocks in the at least one tile, coding unit, or superfragment, wherein the chosen partitioning technique comprises the k-d tree partitioning technique when the expected amount of intra blocks is greater than a threshold and the binary tree partitioning technique when the expected amount of intra blocks is less than the threshold, and
wherein partitioning the at least one tile, coding unit, or superfragment comprises a partitioning constraint, wherein the partitioning constraint comprises predefining a first partition as halving the at least one frame portion in a first dimension and predefining a second partition as halving the at least one frame portion in a second dimension, and wherein the first dimension comprises a vertical dimension and the second dimension comprises a horizontal dimension.
[12]
12. Video encoder, characterized by the fact that it comprises:
a picture buffer; and
a graphics processing unit comprising prediction partition generator logic circuitry, wherein the graphics processing unit is communicatively coupled to the picture buffer and wherein the prediction partition generator logic circuitry is configured to:
receive a video frame;
segment the video frame into a plurality of tiles, coding units, or superfragments;
determine a chosen prediction partitioning technique for at least one tile, coding unit, or superfragment, wherein the chosen partitioning technique comprises a structured partitioning technique comprising at least one of a binary tree partitioning technique, a k-d tree partitioning technique, a codebook representation of a binary tree partitioning technique, or a codebook representation of a k-d tree partitioning technique; and partition the at least one tile, coding unit, or superfragment into a plurality of prediction partitions using the chosen partitioning technique.
11/20
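The binary-tree prediction partitioning recited in this claim can be illustrated with the following sketch, which recursively halves a region along its longer dimension; the split-until-minimum-size rule stands in for the encoder's actual rate-distortion decision, which the claims do not specify, and all names are illustrative:

```python
# Illustrative binary-tree partitioning of a tile into prediction
# partitions: each rectangle is either kept or split in half along one
# dimension, recursively.

def binary_tree_partition(x, y, w, h, min_size=8):
    """Return a list of (x, y, w, h) prediction partitions."""
    if w <= min_size and h <= min_size:
        return [(x, y, w, h)]
    if w >= h:  # split the longer dimension in half
        half = w // 2
        return (binary_tree_partition(x, y, half, h, min_size)
                + binary_tree_partition(x + half, y, w - half, h, min_size))
    half = h // 2
    return (binary_tree_partition(x, y, w, half, min_size)
            + binary_tree_partition(x, y + half, w, h - half, min_size))
```

For a 32 x 16 tile with an 8-pixel minimum, this yields eight 8 x 8 prediction partitions that exactly cover the tile.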
[13]
13. Video encoder, according to claim 12, characterized by the fact that the prediction partition generator logic circuitry is additionally configured to:
segment the video frame into two or more region layers, wherein segmenting the video frame into the plurality of tiles, coding units, or superfragments comprises the prediction partition generator logic circuitry being configured to segment the video frame into the plurality of superfragments, and wherein at least one superfragment comprises an individual region layer of the two or more region layers.
[14]
14. Video encoder, according to claim 12, characterized by the fact that segmenting the video frame into the plurality of tiles, coding units, or superfragments comprises the prediction partition generator logic circuitry being configured to segment the video frame into the plurality of tiles.
[15]
15. Video encoder, according to claim 12, characterized by the fact that the graphics processing unit additionally comprises:
differencing logic circuitry configured to:
difference a plurality of predicted partitions associated with the plurality of prediction partitions with corresponding original pixel data to generate a corresponding plurality of prediction error data partitions;
coding partition generator logic circuitry configured to:
determine an individual prediction error data partition of the plurality of prediction error data partitions that is to be coded; and
for the individual residual that is to be coded: partition the prediction error data partition into a plurality of coding partitions, wherein partitioning the prediction error data partition comprises a binary-tree partitioning.
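The differencing step that produces the prediction error data partitions can be sketched as follows (plain nested lists stand in for real frame buffers; the name `prediction_error` is illustrative):

```python
# Sketch of the differencing step: a predicted partition is subtracted
# from the co-located original pixels to produce a prediction error
# (residual) data partition, which is then binary-tree partitioned
# into coding partitions.

def prediction_error(original, predicted):
    """Element-wise difference of two equally sized 2-D pixel blocks."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]
```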
[16]
16. Video encoder, according to claim 12, characterized by the fact that determining the chosen partitioning technique comprises the prediction partition generator logic circuitry being configured to determine the chosen partitioning technique based, at least in part, on a picture type of the video frame.
[17]
17. Video encoder, according to claim 12, characterized by the fact that determining the chosen partitioning technique comprises the prediction partition generator logic circuitry being configured to determine the chosen partitioning technique based, at least in part, on a characteristic of the at least one tile, coding unit, or superfragment.
[18]
18. Video encoder, according to claim 12, characterized by the fact that the graphics processing unit additionally comprises:
inter-prediction logic circuitry configured to:
generate inter-prediction data associated with a first individual prediction partition of the plurality of prediction partitions, wherein the inter-prediction data comprises motion vector data;
intra-prediction logic circuitry configured to:
generate intra-prediction data associated with a second individual prediction partition of a second plurality of prediction partitions of a second tile, coding unit, or superfragment of a second video frame;
differencing logic circuitry configured to:
difference a plurality of predicted partitions associated with the plurality of prediction partitions with corresponding original pixel data to generate a corresponding plurality of prediction error data partitions, wherein a first predicted partition of the plurality of predicted partitions is predicted based, at least in part, on a reference frame comprising an immediately previous reference frame, an earlier previous reference frame, a future reference frame, a modified reference frame, or a synthesized reference frame;
coding partition generator logic circuitry configured to:
determine an individual prediction error data partition of the plurality of prediction error data partitions that is to be coded; and for the individual residual that is to be coded:
partition the prediction error data partition into a plurality of coding partitions, wherein partitioning the prediction error data partition comprises a binary-tree partitioning;
index the plurality of coding partitions with coding partition index values;
transmit the coding partitions and the coding partition index values to a coding controller; and
adaptive transform logic circuitry and adaptive quantization logic circuitry configured to:
perform a forward transform and quantization on the coding partitions of the individual prediction error data partition to generate data associated with the individual prediction error data partition; and
adaptive entropy encoder logic circuitry configured to:
entropy encode the inter-prediction data associated with the first individual prediction partition, data defining the first individual prediction partition, the intra-prediction data associated with the second individual prediction partition, data defining the second individual prediction partition, and the data associated with the individual prediction error data partition into a bitstream; and
transmit the bitstream, wherein the prediction partition generator logic circuitry is additionally configured to:
segment the video frame into two or more region layers, wherein the two or more region layers comprise a precision of at least one of 4 pixels, 8 pixels, or 16 pixels, wherein the two or more region layers comprise region boundaries, wherein segmenting the video frame into the plurality of tiles, coding units, or superfragments comprises the prediction partition generator logic circuitry being additionally configured to segment the video frame into the plurality of superfragments, wherein segmenting the video frame into the plurality of tiles, coding units, or superfragments comprises the prediction partition generator logic circuitry being additionally configured to segment the video frame via symbol-run coding using 16 x 16 pixel blocks, and wherein at least one superfragment comprises an individual region layer of the two or more region layers;
code the region boundaries, wherein coding the region boundaries comprises at least one of symbol-run coding or generation of a codebook that approximates the region boundaries on a tile grid, wherein the tile grid is an equally spaced tile grid having a size of at least one of 32 x 32 pixels or 64 x 64 pixels;
index the plurality of prediction partitions with prediction partition index values;
transmit the plurality of prediction partitions and the prediction partition index values to a coding controller;
wherein segmenting the video frame into the plurality of tiles, coding units, or superfragments comprises the prediction partition generator logic circuitry being configured to segment the video frame into the plurality of tiles, wherein determining the chosen partitioning technique comprises the prediction partition generator logic circuitry being configured to determine the chosen partitioning technique based, at least in part, on a picture type of the video frame,
wherein the picture type comprises at least one of an I-picture (intra picture), a P-picture (predictive picture), or an F/B-picture (functional/bidirectional picture), wherein the picture type comprises the I-picture and wherein the chosen prediction partitioning technique comprises the kd-tree partitioning technique, wherein the picture type comprises the P-picture and wherein the chosen prediction partitioning technique comprises the binary-tree partitioning technique, wherein the picture type comprises the F/B-picture and wherein the chosen prediction partitioning technique comprises the binary-tree partitioning technique, wherein determining the chosen partitioning technique comprises the prediction partition generator logic circuitry being configured to determine the chosen partitioning technique based, at least in part, on a characteristic of the at least one tile, coding unit, or superfragment, wherein the characteristic comprises an expected amount of intra blocks in the at least one tile, coding unit, or superfragment, wherein the chosen partitioning technique comprises the kd-tree partitioning technique when the expected amount of intra blocks is greater than a threshold and the binary-tree partitioning technique when the expected amount of intra blocks is less than the threshold, and wherein partitioning the at least one tile, coding unit, or superfragment comprises the prediction partition generator logic circuitry being configured to apply a partitioning constraint, wherein the partitioning constraint comprises predefining a first partition so that it halves at least a portion of the frame in a first dimension and predefining a second partition so that it halves at least a portion of the frame in a second dimension, and wherein the first dimension comprises a vertical dimension and the second dimension comprises a horizontal dimension.
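The kd-tree partitioning and the claimed halving constraint can be illustrated with the following sketch: the first splits halve the region in the vertical and horizontal dimensions, and one further split at an arbitrary offset shows the flexibility that distinguishes kd-tree from binary-tree partitioning (all names and the 1/4 offset choice are illustrative assumptions, not from the claims):

```python
def constrained_kd_partition(w, h):
    """Partition a w x h region under the claimed constraint, then
    demonstrate one asymmetric kd-tree split."""
    half_w, half_h = w // 2, h // 2
    # Constraint: a first partition halves the region in the vertical
    # dimension and a second halves it in the horizontal dimension,
    # yielding four quadrants.
    quads = [(0, 0, half_w, half_h), (half_w, 0, w - half_w, half_h),
             (0, half_h, half_w, h - half_h),
             (half_w, half_h, w - half_w, h - half_h)]
    # kd-tree flexibility: further splits need not fall at the midpoint;
    # here the last quadrant is split at a 1/4 offset (illustrative).
    x, y, qw, qh = quads.pop()
    off = qw // 4
    quads += [(x, y, off, qh), (x + off, y, qw - off, qh)]
    return quads
```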
[19]
19. Decoder system, characterized by the fact that it comprises:
a video decoder configured to decode an encoded bitstream, wherein the video decoder is configured to:
entropy decode the encoded bitstream to determine inter-prediction data associated with a first prediction partition, data defining the first prediction partition, intra-prediction data associated with a second prediction partition, and data defining the second prediction partition, wherein the first prediction partition comprises a binary-tree partition and the second prediction partition comprises a kd-tree partition;
perform motion compensation for the first prediction partition based, at least in part, on the inter-prediction data;
perform intra-prediction for the second individual partition based, at least in part, on the intra-prediction data;
generate a first decoded video frame based, at least in part, on the motion compensation;
generate a second decoded video frame based, at least in part, on the intra-prediction; and transmit the first and second decoded video frames for presentation via a display device.
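The motion compensation step for a prediction partition can be sketched as follows (integer-pel only, whereas real codecs also interpolate sub-pel positions; all names are illustrative):

```python
# Sketch of motion compensation: the motion vector (dx, dy) carried in
# the inter-prediction data selects a co-sized block from the reference
# frame as the prediction for the partition at (x, y, w, h).

def motion_compensate(reference, x, y, w, h, dx, dy):
    """Fetch the w x h block displaced by (dx, dy) from a 2-D reference."""
    return [row[x + dx : x + dx + w]
            for row in reference[y + dy : y + dy + h]]
```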
[20]
20. Decoder system, according to claim 19, wherein the video decoder is characterized by the fact that it is additionally configured to:
entropy decode the bitstream to determine data associated with an individual prediction error data partition, wherein the individual prediction error data partition comprises binary-tree coding partitions;
perform inverse quantization and inverse transform based, at least in part, on the data associated with the individual prediction error data partition to generate decoded coding partitions;
combine the decoded coding partitions to generate a decoded prediction error data partition;
add a prediction partition to the decoded prediction error data partition to generate a first reconstructed partition;
assemble the first reconstructed partition and a second reconstructed partition to generate at least one of a first tile, coding unit, or first superfragment;
apply at least one of deblocking filtering or quality restoration filtering to the first tile, coding unit, or first superfragment to generate a final decoded first tile, coding unit, or superfragment;
assemble the final decoded first tile, coding unit, or superfragment with a final decoded second tile, coding unit, or superfragment to generate a third decoded video frame; and
transmit the third decoded video frame for presentation via the display device.
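The reconstruction step, in which a decoded prediction error data partition is added back to its prediction partition, can be sketched as follows (nested lists stand in for pixel buffers; the name `reconstruct` is illustrative):

```python
# Sketch of decoder-side reconstruction: prediction + decoded residual
# gives the reconstructed partition, the inverse of the encoder-side
# differencing step.

def reconstruct(prediction, decoded_error):
    """Element-wise sum of a prediction block and its decoded residual."""
    return [[p + e for p, e in zip(prow, erow)]
            for prow, erow in zip(prediction, decoded_error)]
```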
[21]
21. Decoder system, according to claim 19, characterized by the fact that the first video frame comprises a picture type that comprises at least one of a P-picture or an F/B-picture.
[22]
22. Decoder system, according to claim 19, characterized by the fact that the second video frame comprises a picture type that comprises an I-picture.
[23]
23. Decoder system, according to claim 19, characterized by the fact that it additionally comprises:
an antenna communicatively coupled to the video decoder and configured to receive the encoded bitstream of video data; and a display device configured to display video frames, wherein the video decoder is additionally configured to:
entropy decode the bitstream to determine data associated with an individual prediction error data partition, wherein the individual prediction error data partition comprises binary-tree coding partitions;
perform inverse quantization and inverse transform based, at least in part, on the data associated with the individual prediction error data partition to generate decoded coding partitions;
combine the decoded coding partitions to generate a decoded prediction error data partition;
add a prediction partition to the decoded prediction error data partition to generate a first reconstructed partition;
assemble the first reconstructed partition and a second reconstructed partition to generate at least one of a first tile, coding unit, or first superfragment;
apply at least one of deblocking filtering or quality restoration filtering to the first tile, coding unit, or first superfragment to generate a final decoded first tile, coding unit, or superfragment;
assemble the final decoded first tile, coding unit, or superfragment with a final decoded second tile, coding unit, or superfragment to generate a third decoded video frame; and
transmit the third decoded video frame for presentation via the display device, wherein the first video frame comprises a picture type that comprises at least one of a P-picture or an F/B-picture, wherein the second video frame comprises a picture type that comprises an I-picture, and wherein the third video frame comprises a picture type that comprises at least one of a P-picture or an F/B-picture.
[24]
24. At least one machine-readable medium, characterized by the fact that it comprises:
a plurality of instructions that, in response to being executed on a computing device, cause the computing device to perform the method as defined in any one of claims 1 to 11.
[25]
25. Apparatus, characterized by the fact that it comprises: means for carrying out the method as defined in any one of claims 1 to 11.
类似技术:
公开号 | 公开日 | 专利标题
BR112015015575A2|2020-02-04|adaptive content partitioning for next-generation video prediction and coding
ES2804515T3|2021-02-08|Qp derivation and deviation for adaptive color transform in video encoding
US9819965B2|2017-11-14|Content adaptive transform coding for next generation video
US20140286415A1|2014-09-25|Video encoding/decoding method and apparatus for same
BR112020026686A2|2021-03-30|SYSTEM AND METHOD FOR VIDEO ENCODING
BR112020025145A2|2021-07-20|unlock filter for subpartition boundaries caused by intra-subpartition encoding tool
KR20210018490A|2021-02-17|Encoder, decoder and corresponding methods using IBC dedicated buffer and default value refreshing for luma and chroma components
WO2020177520A1|2020-09-10|An encoder, a decoder and corresponding methods using ibc search range optimization for abitrary ctu size
WO2020169114A1|2020-08-27|Method and apparatus for affine based inter prediction of chroma subblocks
WO2021057755A1|2021-04-01|An encoder, a decoder and corresponding methods of complexity reduction on intra prediction for the planar mode
WO2021159962A1|2021-08-19|An encoder, a decoder and corresponding methods for subpicture signalling in sequence parameter set
WO2020108640A1|2020-06-04|Encoder, decoder and corresponding methods of most probable mode list construction for blocks with multi-hypothesis prediction
WO2021008470A1|2021-01-21|An encoder, a decoder and corresponding methods
BR112021012708A2|2021-09-14|CROSS COMPONENT LINEAR MODELING METHOD AND APPARATUS FOR INTRA PREDICTION
BR112021009833A2|2021-08-17|encoder, decoder and corresponding methods for inter-prediction.
BR112020026183A2|2021-09-08|VIDEO ENCODING METHOD, ENCODER, DECODER AND COMPUTER PROGRAM PRODUCT
WO2020251416A2|2020-12-17|Affine motion model restrictions reducing number of fetched reference lines during processing of one block row with enhanced interpolation filter
BR112021003946A2|2021-05-18|video encoder, video decoder and corresponding methods
BR112021003999A2|2021-05-25|relationship between partition constraint elements
CA3132225A1|2020-10-22|An encoder, a decoder and corresponding methods harmonzting matrix-based intra prediction and secoundary transform core selection
WO2021045655A9|2021-06-17|Method and apparatus for intra prediction
WO2020149769A1|2020-07-23|An encoder, a decoder and corresponding methods for local illumination compensation
BR112021011723A2|2021-08-31|INTRA PREDICTION METHOD AND APPARATUS AND ENCODER, DECODER, COMPUTER PROGRAM, NON-TRANSITORY STORAGE MEDIA, AND BITS FLOW
CA3131027A1|2020-08-27|Method and apparatus for intra prediction using linear model
同族专利:
公开号 | 公开日
WO2014120367A1|2014-08-07|
US9762911B2|2017-09-12|
EP2952002A4|2016-09-21|
CN105052140B|2019-01-15|
CN104737542B|2018-09-25|
US20150319441A1|2015-11-05|
US20160127741A1|2016-05-05|
US20170318297A1|2017-11-02|
EP2951999A4|2016-07-20|
US10009610B2|2018-06-26|
EP3008900A4|2017-03-15|
WO2014120656A1|2014-08-07|
KR20170066712A|2017-06-14|
EP2952004A4|2016-09-14|
EP3013053A2|2016-04-27|
US10021392B2|2018-07-10|
US9973757B2|2018-05-15|
WO2014120960A1|2014-08-07|
EP3008900A2|2016-04-20|
WO2014120375A2|2014-08-07|
JP2016506187A|2016-02-25|
EP2951999A1|2015-12-09|
CN104885467A|2015-09-02|
EP2951995A1|2015-12-09|
EP3013053A3|2016-07-20|
WO2014120369A1|2014-08-07|
JP6163674B2|2017-07-19|
JP2016508327A|2016-03-17|
US20150229926A1|2015-08-13|
CN105453570A|2016-03-30|
CN104885455A|2015-09-02|
CN105191309A|2015-12-23|
EP2952001A4|2016-09-14|
KR20150090194A|2015-08-05|
US20170006284A1|2017-01-05|
WO2014120368A1|2014-08-07|
EP2952003A4|2016-09-14|
US20140362922A1|2014-12-11|
CN104885470A|2015-09-02|
US9973758B2|2018-05-15|
RU2612600C2|2017-03-09|
EP2996338A3|2016-07-06|
CN104718756B|2019-02-15|
US20150036737A1|2015-02-05|
CN104885470B|2018-08-07|
CN105556964A|2016-05-04|
EP2952003A1|2015-12-09|
CN104885467B|2018-08-17|
CN105556964B|2019-02-01|
WO2014120374A1|2014-08-07|
WO2015099814A1|2015-07-02|
JP6286718B2|2018-03-07|
KR101770047B1|2017-08-21|
US9794569B2|2017-10-17|
US9787990B2|2017-10-10|
KR20150090178A|2015-08-05|
WO2014120575A1|2014-08-07|
EP2952002A1|2015-12-09|
CN104737542A|2015-06-24|
EP2952004A1|2015-12-09|
US10284853B2|2019-05-07|
US9686551B2|2017-06-20|
KR20150055005A|2015-05-20|
RU2015126241A|2017-01-11|
US20160277739A1|2016-09-22|
EP2951998A1|2015-12-09|
JP2016514378A|2016-05-19|
EP2951994A1|2015-12-09|
EP2951994A4|2016-10-12|
US20150319442A1|2015-11-05|
CN104718756A|2015-06-17|
CN104885471A|2015-09-02|
JP6339099B2|2018-06-06|
EP2996338A2|2016-03-16|
US9609330B2|2017-03-28|
KR20150090206A|2015-08-05|
KR102063385B1|2020-01-07|
WO2014120373A1|2014-08-07|
KR20150058324A|2015-05-28|
CN104885471B|2019-06-28|
EP2951998A4|2016-09-28|
WO2014120375A3|2016-03-17|
WO2014120987A1|2014-08-07|
US20150373328A1|2015-12-24|
CN104885455B|2019-02-22|
EP2952001A1|2015-12-09|
US20180176577A1|2018-06-21|
CN105453570B|2020-03-17|
EP2952003B1|2019-07-17|
CN105191309B|2018-08-10|
EP2951995A4|2016-09-21|
US20150016523A1|2015-01-15|
US20160277738A1|2016-09-22|
US20150010062A1|2015-01-08|
EP2951993A1|2015-12-09|
US10284852B2|2019-05-07|
CN105052140A|2015-11-11|
EP2951993A4|2016-09-14|
US9794568B2|2017-10-17|
引用文献:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题

KR0151011B1|1994-11-30|1998-10-01|김광호|A bipolar transistor and method of manufacturing the same|
US6542547B1|1995-05-30|2003-04-01|Texas Instruments Incorporated|Efficient heuristic based motion estimation method for video compression|
US5864711A|1995-07-05|1999-01-26|Microsoft Corporation|System for determining more accurate translation between first and second translator, and providing translated data to second computer if first translator is more accurate|
US5729691A|1995-09-29|1998-03-17|Intel Corporation|Two-stage transform for video signals|
ES2153597T3|1995-10-25|2001-03-01|Koninkl Philips Electronics Nv|SYSTEM AND METHOD OF SEGMENTED IMAGE CODING, AND CORRESPONDING DECODING SYSTEM AND METHOD.|
US6160846A|1995-10-25|2000-12-12|Sarnoff Corporation|Apparatus and method for optimizing the rate control in a coding system|
US6895051B2|1998-10-15|2005-05-17|Nokia Mobile Phones Limited|Video data encoder and decoder|
JP3743837B2|1996-06-14|2006-02-08|株式会社大宇エレクトロニクス|Run-length encoder|
US5953506A|1996-12-17|1999-09-14|Adaptive Media Technologies|Method and apparatus that provides a scalable media delivery system|
WO1998031152A2|1997-01-13|1998-07-16|Koninklijke Philips Electronics N.V.|Embedding supplemental data in a digital video signal|
US6208693B1|1997-02-14|2001-03-27|At&T Corp|Chroma-key for efficient and low complexity shape representation of coded arbitrary video objects|
JP3843581B2|1998-03-05|2006-11-08|富士ゼロックス株式会社|Image encoding device, image decoding device, image processing device, image encoding method, image decoding method, and image processing method|
JP2000050263A|1998-07-28|2000-02-18|Hitachi Ltd|Image coder, decoder and image-pickup device using them|
US6223162B1|1998-12-14|2001-04-24|Microsoft Corporation|Multi-level run length coding for frequency-domain audio coding|
US7065253B2|1999-09-03|2006-06-20|Intel Corporation|Wavelet zerotree coding of ordered bits|
US7792390B2|2000-12-19|2010-09-07|Altera Corporation|Adaptive transforms|
US20020122491A1|2001-01-03|2002-09-05|Marta Karczewicz|Video decoder architecture and method for using same|
ES2687176T3|2001-11-06|2018-10-24|Panasonic Intellectual Property Corporation Of America|Encoding method of moving images and decoding method of moving images|
US7453936B2|2001-11-09|2008-11-18|Sony Corporation|Transmitting apparatus and method, receiving apparatus and method, program and recording medium, and transmitting/receiving system|
GB2382940A|2001-11-27|2003-06-11|Nokia Corp|Encoding objects and background blocks|
CN101448162B|2001-12-17|2013-01-02|微软公司|Method for processing video image|
JP4610195B2|2001-12-17|2011-01-12|マイクロソフトコーポレーション|Skip macroblock coding|
JP2004088722A|2002-03-04|2004-03-18|Matsushita Electric Ind Co Ltd|Motion picture encoding method and motion picture decoding method|
MXPA04010318A|2002-04-23|2005-02-03|Nokia Corp|Method and device for indicating quantizer parameters in a video coding system.|
JP2003324731A|2002-04-26|2003-11-14|Sony Corp|Encoder, decoder, image processing apparatus, method and program for them|
US20040001546A1|2002-06-03|2004-01-01|Alexandros Tourapis|Spatiotemporal prediction for bidirectionally predictive pictures and motion vector prediction for multi-picture reference motion compensation|
JP4767992B2|2002-06-06|2011-09-07|パナソニック株式会社|Variable length encoding method and variable length decoding method|
US7729563B2|2002-08-28|2010-06-01|Fujifilm Corporation|Method and device for video image processing, calculating the similarity between video frames, and acquiring a synthesized frame by synthesizing a plurality of contiguous sampled frames|
JP3997171B2|2003-03-27|2007-10-24|株式会社エヌ・ティ・ティ・ドコモ|Moving picture encoding apparatus, moving picture encoding method, moving picture encoding program, moving picture decoding apparatus, moving picture decoding method, and moving picture decoding program|
HU0301368A3|2003-05-20|2005-09-28|Amt Advanced Multimedia Techno|Method and equipment for compressing motion picture data|
JP2004007778A|2003-07-14|2004-01-08|Victor Co Of Japan Ltd|Motion compensating and decoding method|
US20050094729A1|2003-08-08|2005-05-05|Visionflow, Inc.|Software and hardware partitioning for multi-standard video compression and decompression|
CN1332563C|2003-12-31|2007-08-15|中国科学院计算技术研究所|Coding method of video frequency image jump over macro block|
US7492820B2|2004-02-06|2009-02-17|Apple Inc.|Rate control for video coder employing adaptive linear regression bits modeling|
EP1571850A3|2004-03-05|2006-12-13|Samsung Electronics Co., Ltd.|Apparatus and method for encoding and decoding image containing grayscale alpha channel image|
KR100654431B1|2004-03-08|2006-12-06|삼성전자주식회사|Method for scalable video coding with variable GOP size, and scalable video coding encoder for the same|
US7461525B2|2004-03-13|2008-12-09|Wilson Rodney W|Tile sponge washing and conditioning apparatus|
JP2005301457A|2004-04-07|2005-10-27|Fuji Xerox Co Ltd|Image processor, program, and recording medium|
US7689051B2|2004-04-15|2010-03-30|Microsoft Corporation|Predictive lossless coding of images and video|
CN102833538B|2004-06-27|2015-04-22|苹果公司|Multi-pass video encoding|
KR100664932B1|2004-10-21|2007-01-04|삼성전자주식회사|Video coding method and apparatus thereof|
EP1675402A1|2004-12-22|2006-06-28|Thomson Licensing|Optimisation of a quantisation matrix for image and video coding|
WO2007011851A2|2005-07-15|2007-01-25|Texas Instruments Incorporated|Filtered and warped motion compensation|
JP2006270301A|2005-03-23|2006-10-05|Nippon Hoso Kyokai <Nhk>|Scene change detecting apparatus and scene change detection program|
WO2006109974A1|2005-04-13|2006-10-19|Samsung Electronics Co., Ltd.|Method for entropy coding and decoding having improved coding efficiency and apparatus for providing the same|
KR100703773B1|2005-04-13|2007-04-06|삼성전자주식회사|Method and apparatus for entropy coding and decoding, with improved coding efficiency, and method and apparatus for video coding and decoding including the same|
US7397933B2|2005-05-27|2008-07-08|Microsoft Corporation|Collusion resistant desynchronization for digital video fingerprinting|
US8064516B2|2005-06-02|2011-11-22|Broadcom Corporation|Text recognition during video compression|
KR20070006445A|2005-07-08|2007-01-11|삼성전자주식회사|Method and apparatus for hybrid entropy encoding and decoding|
CN101263513A|2005-07-15|2008-09-10|德克萨斯仪器股份有限公司|Filtered and warpped motion compensation|
US9077960B2|2005-08-12|2015-07-07|Microsoft Corporation|Non-zero coefficient block pattern coding|
US9258519B2|2005-09-27|2016-02-09|Qualcomm Incorporated|Encoder assisted frame rate up conversion using various motion models|
WO2007063808A1|2005-11-30|2007-06-07|Kabushiki Kaisha Toshiba|Image encoding/image decoding method and image encoding/image decoding apparatus|
US8243804B2|2005-12-01|2012-08-14|Lsi Corporation|Hierarchical motion estimation for images with varying horizontal and/or vertical dimensions|
US8265145B1|2006-01-13|2012-09-11|Vbrick Systems, Inc.|Management and selection of reference frames for long term prediction in motion estimation|
US8279928B2|2006-05-09|2012-10-02|Canon Kabushiki Kaisha|Image encoding apparatus and encoding method, image decoding apparatus and decoding method|
US8275045B2|2006-07-12|2012-09-25|Qualcomm Incorporated|Video compression using adaptive variable length codes|
TW200820791A|2006-08-25|2008-05-01|Lg Electronics Inc|A method and apparatus for decoding/encoding a video signal|
CN101507280B|2006-08-25|2012-12-26|汤姆逊许可公司|Methods and apparatus for reduced resolution partitioning|
US20080075173A1|2006-09-22|2008-03-27|Texas Instruments Incorporated|Systems and Methods for Context Adaptive Video Data Preparation|
US20100040146A1|2006-09-22|2010-02-18|Beibei Wang|Method and apparatus for multiple pass video coding and decoding|
CN105392006A|2006-11-08|2016-03-09|汤姆逊许可证公司|Methods and apparatus for in-loop de-artifact filtering|
US7460725B2|2006-11-09|2008-12-02|Calista Technologies, Inc.|System and method for effectively encoding and decoding electronic information|
US8875199B2|2006-11-13|2014-10-28|Cisco Technology, Inc.|Indicating picture usefulness for playback optimization|
EP1926321A1|2006-11-27|2008-05-28|Matsushita Electric Industrial Co., Ltd.|Hybrid texture representation|
US8396118B2|2007-03-19|2013-03-12|Sony Corporation|System and method to control compressed video picture quality for a given average bit rate|
TWI338509B|2007-03-28|2011-03-01|Univ Nat Central|
KR101336445B1|2007-06-08|2013-12-04|삼성전자주식회사|Method for rate control in video encoding|
US8571104B2|2007-06-15|2013-10-29|Qualcomm, Incorporated|Adaptive coefficient scanning in video coding|
US8437564B2|2007-08-07|2013-05-07|Ntt Docomo, Inc.|Image and video compression using sparse orthonormal transforms|
GB0716158D0|2007-08-17|2007-09-26|Imagination Tech Ltd|Data compression|
US8526489B2|2007-09-14|2013-09-03|General Instrument Corporation|Personal video recorder|
HUE037450T2|2007-09-28|2018-09-28|Dolby Laboratories Licensing Corp|Treating video information|
EP2191651A1|2007-09-28|2010-06-02|Dolby Laboratories Licensing Corporation|Video compression and tranmission techniques|
US20090135901A1|2007-11-28|2009-05-28|The Hong Kong University Of Science And Technology|Complexity adaptive video encoding using multiple reference frames|
US8149915B1|2007-11-29|2012-04-03|Lsi Corporation|Refinement of motion vectors in hierarchical motion estimation|
US20090154567A1|2007-12-13|2009-06-18|Shaw-Min Lei|In-loop fidelity enhancement for video compression|
JP5203717B2|2007-12-19|2013-06-05|パナソニック株式会社|Encoder, decoder, encoding method, and decoding method|
JP4309453B2|2007-12-26|2009-08-05|株式会社東芝|Interpolated frame generating apparatus, interpolated frame generating method, and broadcast receiving apparatus|
US20090167775A1|2007-12-30|2009-07-02|Ning Lu|Motion estimation compatible with multiple standards|
US8126054B2|2008-01-09|2012-02-28|Motorola Mobility, Inc.|Method and apparatus for highly scalable intraframe video coding|
CN101272494B|2008-01-25|2011-06-08|浙江大学|Video encoding/decoding method and device using synthesized reference frame|
US8798137B2|2008-02-29|2014-08-05|City University Of Hong Kong|Bit rate estimation in data or video compression|
US8254469B2|2008-05-07|2012-08-28|Kiu Sha Management Liability Company|Error concealment for frame loss in multiple description coding|
WO2009141011A1|2008-05-22|2009-11-26|Telefonaktiebolaget L M Ericsson |Content adaptive video encoder and coding method|
US8897359B2|2008-06-03|2014-11-25|Microsoft Corporation|Adaptive quantization for enhancement layer video coding|
JP2010016454A|2008-07-01|2010-01-21|Sony Corp|Image encoding apparatus and method, image decoding apparatus and method, and program|
TWI359617B|2008-07-03|2012-03-01|Univ Nat Taiwan|Low-complexity and high-quality error concealment|
US8625681B2|2008-07-09|2014-01-07|Intel Corporation|Rate-distortion cost reducing video encoding techniques|
US8385404B2|2008-09-11|2013-02-26|Google Inc.|System and method for video encoding using constructed reference frame|
US8503527B2|2008-10-03|2013-08-06|Qualcomm Incorporated|Video coding with large macroblocks|
KR101441903B1|2008-10-16|2014-09-24|에스케이텔레콤 주식회사|Reference Frame Creating Method and Apparatus and Video Encoding/Decoding Method and Apparatus Using Same|
JP4427600B1|2008-11-28|2010-03-10|株式会社東芝|Video analysis apparatus and program|
US8774559B2|2009-01-19|2014-07-08|Sharp Laboratories Of America, Inc.|Stereoscopic dynamic range image sequence|
TWI498003B|2009-02-02|2015-08-21|Thomson Licensing|Method for decoding a stream representative of a sequence of pictures, method for coding a sequence of pictures and coded data structure|
KR20100095992A|2009-02-23|2010-09-01|한국과학기술원|Method for encoding partitioned block in video encoding, method for decoding partitioned block in video decoding and recording medium implementing the same|
US9110849B2|2009-04-15|2015-08-18|Qualcomm Incorporated|Computing even-sized discrete cosine transforms|
EP2271102A1|2009-06-29|2011-01-05|Thomson Licensing|Adaptive residual image coding|
US20120127003A1|2009-08-06|2012-05-24|Youji Shibahara|Coding method, decoding method, coding apparatus, and decoding apparatus|
US8379718B2|2009-09-02|2013-02-19|Sony Computer Entertainment Inc.|Parallel digital picture encoding|
US8711938B2|2009-09-04|2014-04-29|Sharp Laboratories Of America, Inc.|Methods and systems for motion estimation with nonlinear motion-field smoothing|
JP2011066844A|2009-09-18|2011-03-31|Toshiba Corp|Parallel decoding device, program, and parallel decoding method of coded data|
JPWO2011039931A1|2009-09-30|2013-02-21|三菱電機株式会社|Image encoding device, image decoding device, image encoding method, and image decoding method|
US8705623B2|2009-10-02|2014-04-22|Texas Instruments Incorporated|Line-based compression for digital image data|
EP2312854A1|2009-10-15|2011-04-20|Siemens Aktiengesellschaft|Method for coding symbols from a digital image sequence|
EP2494780B1|2009-10-29|2020-09-02|Vestel Elektronik Sanayi ve Ticaret A.S.|Method and device for processing a video sequence|
US20110109721A1|2009-11-06|2011-05-12|Sony Corporation|Dynamic reference frame reordering for frame sequential stereoscopic video encoding|
US9473792B2|2009-11-06|2016-10-18|Texas Instruments Incorporated|Method and system to improve the performance of a video encoder|
KR20120086232A|2011-01-25|2012-08-02|Humax|Method for encoding/decoding video for rate-distortion optimization and apparatus for performing the same|
US9819358B2|2010-02-19|2017-11-14|Skype|Entropy encoding based on observed frequency|
US8559511B2|2010-03-30|2013-10-15|Hong Kong Applied Science and Technology Research Institute Company Limited|Method and apparatus for video coding by ABT-based just noticeable difference model|
US20120281759A1|2010-03-31|2012-11-08|Lidong Xu|Power efficient motion estimation techniques for video encoding|
JP5377395B2|2010-04-02|2013-12-25|Japan Broadcasting Corp.|Encoding device, decoding device, and program|
KR20110112168A|2010-04-05|2011-10-12|Samsung Electronics Co., Ltd.|Method and apparatus for video encoding based on internal bitdepth increment, method and apparatus for video decoding based on internal bitdepth increment|
JP5393573B2|2010-04-08|2014-01-22|NTT Docomo, Inc.|Moving picture predictive coding apparatus, moving picture predictive decoding apparatus, moving picture predictive coding method, moving picture predictive decoding method, moving picture predictive coding program, and moving picture predictive decoding program|
KR101752418B1|2010-04-09|2017-06-29|LG Electronics Inc.|A method and an apparatus for processing a video signal|
US8410959B2|2010-04-09|2013-04-02|Qualcomm, Incorporated|Variable length codes for coding of video data|
US8942282B2|2010-04-12|2015-01-27|Qualcomm Incorporated|Variable length coding of coded block pattern in video compression|
WO2011127961A1|2010-04-13|2011-10-20|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e. V.|Adaptive image filtering method and apparatus|
DK2559245T3|2010-04-13|2015-08-24|Ge Video Compression Llc|Video coding using multi-tree subdivision of images|
CN103119849B|2010-04-13|2017-06-16|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Probability interval partitioning encoder and decoder|
US20110255594A1|2010-04-15|2011-10-20|Soyeb Nagori|Rate Control in Video Coding|
KR101444691B1|2010-05-17|2014-09-30|SK Telecom Co., Ltd.|Reference Frame Composing and Indexing Apparatus and Method|
WO2011150109A1|2010-05-26|2011-12-01|Qualcomm Incorporated|Camera parameter- assisted video frame rate up conversion|
KR20110135787A|2010-06-11|2011-12-19|Samsung Electronics Co., Ltd.|Image/video coding and decoding system and method using edge-adaptive transform|
US9055305B2|2011-01-09|2015-06-09|Mediatek Inc.|Apparatus and method of sample adaptive offset for video coding|
US20110310976A1|2010-06-17|2011-12-22|Qualcomm Incorporated|Joint Coding of Partition Information in Video Coding|
WO2011158225A1|2010-06-17|2011-12-22|Mirtemis Ltd.|System and method for enhancing images|
US8934540B2|2010-07-20|2015-01-13|Cisco Technology, Inc.|Video compression using multiple variable length coding methods for multiple types of transform coefficient blocks|
WO2012025790A1|2010-08-26|2012-03-01|Freescale Semiconductor, Inc.|Video processing system and method for parallel processing of video data|
CN108668137A|2010-09-27|2018-10-16|LG Electronics Inc.|Method for dividing block and decoding device|
JP2012080213A|2010-09-30|2012-04-19|Mitsubishi Electric Corp|Moving image encoding apparatus, moving image decoding apparatus, moving image encoding method and moving image decoding method|
US8885704B2|2010-10-01|2014-11-11|Qualcomm Incorporated|Coding prediction modes in video coding|
US8913666B2|2010-10-01|2014-12-16|Qualcomm Incorporated|Entropy coding coefficients using a joint context model|
US9628821B2|2010-10-01|2017-04-18|Apple Inc.|Motion compensation using decoder-defined vector quantized interpolation filters|
US8750373B2|2010-10-04|2014-06-10|Vidyo, Inc.|Delay aware rate control in the context of hierarchical P picture coding|
US8953690B2|2011-02-16|2015-02-10|Google Technology Holdings LLC|Method and system for processing video data|
EP2630799A4|2010-10-20|2014-07-02|Nokia Corp|Method and device for video coding and decoding|
US8873627B2|2010-12-07|2014-10-28|Mediatek Inc|Method and apparatus of video coding using picture structure with low-delay hierarchical B group|
US9462280B2|2010-12-21|2016-10-04|Intel Corporation|Content adaptive quality restoration filtering for high efficiency video coding|
US8761245B2|2010-12-21|2014-06-24|Intel Corporation|Content adaptive motion compensation filtering for high efficiency video coding|
CN102572419B|2010-12-28|2014-09-03|Shenzhen Yunzhou Multimedia Technology Co., Ltd.|Inter-frame prediction method and device|
CN103416062A|2011-01-07|2013-11-27|Samsung Electronics Co., Ltd.|Video prediction method capable of performing bilateral prediction and unilateral prediction and a device thereof, video encoding method and device thereof, and video decoding method and device thereof|
US10027957B2|2011-01-12|2018-07-17|Sun Patent Trust|Methods and apparatuses for encoding and decoding video using multiple reference pictures|
US9602819B2|2011-01-31|2017-03-21|Apple Inc.|Display quality in a variable resolution video coder/decoder system|
GB2488159B|2011-02-18|2017-08-16|Advanced Risc Mach Ltd|Parallel video decoding|
JP2012186763A|2011-03-08|2012-09-27|Mitsubishi Electric Corp|Video encoding device, video decoding device, video encoding method, and video decoding method|
US9848197B2|2011-03-10|2017-12-19|Qualcomm Incorporated|Transforms in video coding|
US9154799B2|2011-04-07|2015-10-06|Google Inc.|Encoding and decoding motion via image segmentation|
US20120262545A1|2011-04-18|2012-10-18|Paul Kerbiriou|Method for coding and decoding a 3d video signal and corresponding devices|
US9247249B2|2011-04-20|2016-01-26|Qualcomm Incorporated|Motion vector prediction in video coding|
WO2012144876A2|2011-04-21|2012-10-26|Industry-University Cooperation Foundation, Hanyang University|Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering|
US8494290B2|2011-05-05|2013-07-23|Mitsubishi Electric Research Laboratories, Inc.|Method for coding pictures using hierarchical transform units|
US8989270B2|2011-06-23|2015-03-24|Apple Inc.|Optimized search for reference frames in predictive video coding system|
EP2727355A1|2011-06-29|2014-05-07|Motorola Mobility LLC|Methods and system for using a scan coding pattern during intra coding|
NO335667B1|2011-06-29|2015-01-19|Cisco Systems Int Sarl|Method of video compression|
MY164252A|2011-07-01|2017-11-30|Samsung Electronics Co Ltd|Method and apparatus for entropy encoding using hierarchical data unit, and method and apparatus for decoding|
KR101362696B1|2011-10-19|2014-02-17|Industry-Academic Cooperation Foundation, Chonbuk National University|Signal transformation apparatus applied hybrid architecture, signal transformation method, and recording medium|
KR20130045425A|2011-10-25|2013-05-06|K-Tech|Expert recommendation search method based on social ontology|
TW201842453A|2011-11-07|2018-12-01|美商Vid衡器股份有限公司|Video And Data Processing Using Even-Odd Integer Transforms|
US20130223524A1|2012-02-29|2013-08-29|Microsoft Corporation|Dynamic insertion of synchronization predicted video frames|
US9912944B2|2012-04-16|2018-03-06|Qualcomm Incorporated|Simplified non-square quadtree transforms for video coding|
KR101327078B1|2012-04-27|2013-11-07|LG Innotek Co., Ltd.|Camera and method for processing image|
KR101677406B1|2012-11-13|2016-11-29|Intel Corporation|Video codec architecture for next generation video|
US9819965B2|2012-11-13|2017-11-14|Intel Corporation|Content adaptive transform coding for next generation video|
US9743091B2|2012-12-17|2017-08-22|Lg Electronics Inc.|Method for encoding/decoding image, and device using same|
EP2951999A4|2013-01-30|2016-07-20|Intel Corp|Content adaptive parametric transforms for coding for next generation video|
US5453601A|1991-11-15|1995-09-26|Citibank, N.A.|Electronic-monetary system|
US5799087A|1994-04-28|1998-08-25|Citibank, N.A.|Electronic-monetary system|
PT2320659E|2002-08-08|2014-11-13|Panasonic Ip Corp America|Moving picture encoding method and decoding method|
WO2012144876A2|2011-04-21|2012-10-26|Industry-University Cooperation Foundation, Hanyang University|Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering|
US9591302B2|2012-07-02|2017-03-07|Microsoft Technology Licensing, Llc|Use of chroma quantization parameter offsets in deblocking|
US9414054B2|2012-07-02|2016-08-09|Microsoft Technology Licensing, Llc|Control and use of chroma quantization parameter values|
CN102883163B|2012-10-08|2014-05-28|Huawei Technologies Co., Ltd.|Method and device for building motion vector lists for prediction of motion vectors|
US9819965B2|2012-11-13|2017-11-14|Intel Corporation|Content adaptive transform coding for next generation video|
EP2951999A4|2013-01-30|2016-07-20|Intel Corp|Content adaptive parametric transforms for coding for next generation video|
US10057599B2|2014-02-19|2018-08-21|Mediatek Inc.|Method for performing image processing control with aid of predetermined tile packing, associated apparatus and associated non-transitory computer readable medium|
CN103929642B|2014-04-24|2017-04-12|Beihang University|Method for rapidly calculating deviation value of entropy coding context model of HEVC transformation coefficients|
US9723377B2|2014-04-28|2017-08-01|Comcast Cable Communications, Llc|Video management|
US10397574B2|2014-05-12|2019-08-27|Intel Corporation|Video coding quantization parameter determination suitable for video conferencing|
US20170249521A1|2014-05-15|2017-08-31|Arris Enterprises, Inc.|Automatic video comparison of the output of a video decoder|
US11064204B2|2014-05-15|2021-07-13|Arris Enterprises Llc|Automatic video comparison of the output of a video decoder|
TWI689148B|2016-05-17|2020-03-21|英屬開曼群島商鴻騰精密科技股份有限公司|Electrical connector assembly|
US10057593B2|2014-07-08|2018-08-21|Brain Corporation|Apparatus and methods for distance estimation using stereo imagery|
US9699464B2|2014-07-10|2017-07-04|Intel Corporation|Adaptive bitrate streaming for wireless video|
US9549188B2|2014-07-30|2017-01-17|Intel Corporation|Golden frame selection in video coding|
CN104301724B|2014-10-17|2017-12-01|Huawei Technologies Co., Ltd.|Video processing method, encoding device and decoding device|
TWI536799B|2014-10-29|2016-06-01|Avision Inc.|Smart copy apparatus|
WO2016090568A1|2014-12-10|2016-06-16|Mediatek Singapore Pte. Ltd.|Binary tree block partitioning structure|
US10171807B2|2015-01-29|2019-01-01|Arris Enterprises Llc|Picture-level QP rate control for HEVC encoding|
US10410398B2|2015-02-20|2019-09-10|Qualcomm Incorporated|Systems and methods for reducing memory bandwidth using low quality tiles|
CN104780377B|2015-03-18|2017-12-15|同济大学|A kind of parallel HEVC coded systems and method based on Distributed Computer System|
CN108028942B|2015-06-04|2020-06-26|Tsinghua University|Pixel prediction method, encoding method, decoding method, device thereof, and storage medium|
US9942552B2|2015-06-12|2018-04-10|Intel Corporation|Low bitrate video coding|
US10354290B2|2015-06-16|2019-07-16|Adobe, Inc.|Generating a shoppable video|
WO2016206748A1|2015-06-25|2016-12-29|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Refinement of a low-pel resolution motion estimation vector|
US10764574B2|2015-07-01|2020-09-01|Panasonic Intellectual Property Management Co., Ltd.|Encoding method, decoding method, encoding apparatus, decoding apparatus, and encoding and decoding apparatus|
US10419512B2|2015-07-27|2019-09-17|Samsung Display Co., Ltd.|System and method of transmitting display data|
US11076153B2|2015-07-31|2021-07-27|Stc.Unm|System and methods for joint and adaptive control of rate, quality, and computational complexity for video coding and video delivery|
US20170060376A1|2015-08-30|2017-03-02|Gaylord Yu|Displaying HDMI Content in a Tiled Window|
US10430031B2|2015-08-30|2019-10-01|EVA Automation, Inc.|Displaying HDMI content in a tiled window|
FR3040849A1|2015-09-04|2017-03-10|STMicroelectronics SAS|Method for compressing a video data stream|
US9971791B2|2015-09-16|2018-05-15|Adobe Systems Incorporated|Method and apparatus for clustering product media files|
US10511839B2|2015-09-23|2019-12-17|Lg Electronics Inc.|Image encoding/decoding method and device for same|
US10674146B2|2015-09-30|2020-06-02|Lg Electronics Inc.|Method and device for coding residual signal in video coding system|
US10805627B2|2015-10-15|2020-10-13|Cisco Technology, Inc.|Low-complexity method for generating synthetic reference frames in video coding|
US9998745B2|2015-10-29|2018-06-12|Microsoft Technology Licensing, Llc|Transforming video bit streams for parallel processing|
US20180316914A1|2015-10-30|2018-11-01|Sony Corporation|Image processing apparatus and method|
EP3166313A1|2015-11-09|2017-05-10|Thomson Licensing|Encoding and decoding method and corresponding devices|
US10283094B1|2015-11-09|2019-05-07|Marvell International Ltd.|Run-length compression and decompression of media tiles|
US10972731B2|2015-11-10|2021-04-06|Interdigital Madison Patent Holdings, Sas|Systems and methods for coding in super-block based video coding framework|
US20170155905A1|2015-11-30|2017-06-01|Intel Corporation|Efficient intra video/image coding using wavelets and variable size transform coding|
CN108781299A|2015-12-31|2018-11-09|MediaTek Inc.|Method and apparatus of prediction binary tree structure for video and image coding and decoding|
US10212444B2|2016-01-15|2019-02-19|Qualcomm Incorporated|Multi-type-tree framework for video coding|
US10445862B1|2016-01-25|2019-10-15|National Technology & Engineering Solutions Of Sandia, Llc|Efficient track-before detect algorithm with minimal prior knowledge|
US20210195241A1|2016-02-01|2021-06-24|Lg Electronics Inc.|Method and device for performing transform using row-column transforms|
KR20170096088A|2016-02-15|2017-08-23|삼성전자주식회사|Image processing apparatus, image processing method thereof and recording medium|
US11223852B2|2016-03-21|2022-01-11|Qualcomm Incorporated|Coding video data using a two-level multi-type-tree framework|
US20170280139A1|2016-03-22|2017-09-28|Qualcomm Incorporated|Apparatus and methods for adaptive calculation of quantization parameters in display stream compression|
WO2017178782A1|2016-04-15|2017-10-19|Magic Pony Technology Limited|Motion compensation using temporal picture interpolation|
EP3298786A1|2016-04-15|2018-03-28|Magic Pony Technology Limited|In-loop post filtering for video encoding and decoding|
US10574999B2|2016-05-05|2020-02-25|Intel Corporation|Method and system of video coding with a multi-pass prediction mode decision pipeline|
US11019335B2|2016-05-10|2021-05-25|Samsung Electronics Co., Ltd.|Method for encoding/decoding image and device therefor|
KR20190018624A|2016-05-13|2019-02-25|Vid Scale, Inc.|Generalized Multi-Hypothesis Prediction System and Method for Video Coding|
CA3025340A1|2016-05-25|2017-11-30|Arris Enterprises Llc|General block partitioning method|
US10284845B2|2016-05-25|2019-05-07|Arris Enterprises Llc|JVET quadtree plus binary treestructure with multiple asymmetrical partitioning|
ES2877362T3|2016-05-25|2021-11-16|Arris Entpr Llc|Binary, ternary, and quaternary tree partitioning for JVET encoding of video data|
MX2018014492A|2016-05-25|2019-08-12|Arris Entpr Llc|Jvet coding block structure with asymmetrical partitioning.|
WO2017205700A1|2016-05-25|2017-11-30|Arris Enterprises Llc|Binary, ternary and quad tree partitioning for jvet coding of video data|
US10743210B2|2016-06-01|2020-08-11|Intel Corporation|Using uplink buffer status to improve video stream adaptation control|
US10657674B2|2016-06-17|2020-05-19|Immersive Robotics Pty Ltd.|Image compression method and apparatus|
US10169362B2|2016-07-07|2019-01-01|Cross Commerce Media, Inc.|High-density compression method and computing system|
EP3381190B1|2016-08-04|2021-06-02|SZ DJI Technology Co., Ltd.|Parallel video encoding|
WO2018023554A1|2016-08-04|2018-02-08|SZ DJI Technology Co., Ltd.|System and methods for bit rate control|
WO2018030746A1|2016-08-08|2018-02-15|엘지전자|Method for processing image and apparatus therefor|
US10609423B2|2016-09-07|2020-03-31|Qualcomm Incorporated|Tree-type coding for video coding|
CN107872669B|2016-09-27|2019-05-24|Tencent Technology (Shenzhen) Co., Ltd.|Video bit rate processing method and apparatus|
US10567775B2|2016-10-01|2020-02-18|Intel Corporation|Method and system of hardware accelerated video coding with per-frame parameter control|
KR20190053256A|2016-10-04|2019-05-17|Kim Ki Baek|Image data encoding/decoding method and apparatus|
EP3306927A1|2016-10-05|2018-04-11|Thomson Licensing|Encoding and decoding methods and corresponding devices|
EP3306938A1|2016-10-05|2018-04-11|Thomson Licensing|Method and apparatus for binary-tree split mode coding|
KR20180039323A|2016-10-10|2018-04-18|Digital Insights Inc.|Video coding method and apparatus using combination of various block partitioning structures|
US10187178B2|2016-10-11|2019-01-22|Microsoft Technology Licensing, Llc|Dynamically partitioning media streams|
US10779004B2|2016-10-12|2020-09-15|Mediatek Inc.|Methods and apparatuses of constrained multi-type-tree block partition for video coding|
KR20180043151A|2016-10-19|2018-04-27|SK Telecom Co., Ltd.|Apparatus and Method for Video Encoding or Decoding|
WO2018074825A1|2016-10-19|2018-04-26|SK Telecom Co., Ltd.|Device and method for encoding or decoding image|
CN106454355B|2016-10-24|2019-03-01|Southwest University of Science and Technology|Video coding method and device|
EP3528498A4|2016-11-21|2019-08-21|Panasonic Intellectual Property Corporation of America|Coding device, decoding device, coding method, and decoding method|
WO2018097590A1|2016-11-22|2018-05-31|Electronics and Telecommunications Research Institute|Image encoding/decoding method and device, and recording medium having bitstream stored thereon|
US10694202B2|2016-12-01|2020-06-23|Qualcomm Incorporated|Indication of bilateral filter usage in video coding|
US10631012B2|2016-12-02|2020-04-21|Centurylink Intellectual Property Llc|Method and system for implementing detection and visual enhancement of video encoding artifacts|
US10397590B2|2016-12-05|2019-08-27|Nice-Systems Ltd.|System and method for enabling seek in a video recording|
WO2018110600A1|2016-12-16|2018-06-21|Sharp Corporation|Image decoding device and image coding device|
CN113784122A|2016-12-23|2021-12-10|Huawei Technologies Co., Ltd.|Intra-frame prediction device for expanding preset directional intra-frame prediction mode set|
CN108781287A|2016-12-26|2018-11-09|NEC Corporation|Video encoding method, video decoding method, video encoding device, video decoding device, and program|
CN108781286A|2016-12-26|2018-11-09|NEC Corporation|Video encoding method, video decoding method, video encoding device, video decoding device, and program|
TWI617181B|2017-01-04|2018-03-01|MStar Semiconductor, Inc.|Scheduling method for high efficiency video coding apparatus|
CN110692250A|2017-01-05|2020-01-14|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Block-based predictive encoding and decoding of images|
US10848788B2|2017-01-06|2020-11-24|Qualcomm Incorporated|Multi-type-tree framework for video coding|
EP3349454A1|2017-01-11|2018-07-18|Thomson Licensing|Method and device for coding a block of video data, method and device for decoding a block of video data|
KR20180088188A|2017-01-26|2018-08-03|Samsung Electronics Co., Ltd.|Method and apparatus for adaptive image transformation|
WO2018143687A1|2017-02-01|2018-08-09|LG Electronics|Method and apparatus for performing transformation by using row-column transform|
MX2019002383A|2017-02-06|2019-07-08|Huawei Tech Co Ltd|Coding and decoding method and device.|
EP3579940A4|2017-02-08|2020-11-18|Immersive Robotics Pty Ltd|Displaying content to users in a multiplayer venue|
US10362332B2|2017-03-14|2019-07-23|Google Llc|Multi-level compound prediction|
US10820017B2|2017-03-15|2020-10-27|Mediatek Inc.|Method and apparatus of video coding|
US10904531B2|2017-03-23|2021-01-26|Qualcomm Incorporated|Adaptive parameters for coding of 360-degree video|
CN108665410B|2017-03-31|2021-11-26|Hangzhou Hikvision Digital Technology Co., Ltd.|Image super-resolution reconstruction method, device and system|
US11010338B2|2017-04-06|2021-05-18|Shanghai Cambricon Information Technology Co., Ltd|Data screening device and method|
CN110945849B|2017-04-21|2021-06-08|ZeniMax Media Inc.|System and method for encoder hint based rendering and precoding load estimation|
CA3082771A1|2017-04-21|2018-10-25|Zenimax Media Inc.|Systems and methods for encoder-guided adaptive-quality rendering|
WO2018195431A1|2017-04-21|2018-10-25|Zenimax Media Inc.|Systems and methods for deferred post-processes in video encoding|
US10567788B2|2017-04-21|2020-02-18|Zenimax Media Inc.|Systems and methods for game-generated motion vectors|
DE112018002562B3|2017-04-21|2022-01-05|Zenimax Media Inc.|Systems and methods for player input motion compensation in a client-server video game by caching repetitive motion vectors|
US10979728B2|2017-04-24|2021-04-13|Intel Corporation|Intelligent video frame grouping based on predicted performance|
CN108810556B|2017-04-28|2021-12-24|Actions Technology Co., Ltd.|Method, device and chip for compressing reference frame|
US10638127B2|2017-05-15|2020-04-28|Qualcomm Incorporated|Adaptive anchor frame and quantization parameter decision in video coding|
US20200145696A1|2017-06-05|2020-05-07|Immersive Robotics Pty Ltd|Digital Content Stream Compression|
KR102315524B1|2017-06-16|2021-10-21|Hanwha Techwin Co., Ltd.|A method for adjusting bitrate of the image and image capture apparatus|
CN107222751B|2017-06-23|2019-05-10|College of Science and Technology, Ningbo University|3D-HEVC depth video information hiding method based on multi-view video features|
RU2020102878A3|2017-06-26|2021-10-13|
CN112601084A|2017-06-28|2021-04-02|Huawei Technologies Co., Ltd.|Image data encoding and decoding methods and devices|
US10764582B2|2017-06-29|2020-09-01|Qualcomm Incorporated|Reducing seam artifacts in 360-degree video|
CN107360419B|2017-07-18|2019-09-24|Chengdu Tubiyou Technology Co., Ltd.|Forward-view motion video inter-frame prediction coding method based on a perspective model|
US11146608B2|2017-07-20|2021-10-12|Disney Enterprises, Inc.|Frame-accurate video seeking via web browsers|
CN107483949A|2017-07-26|2017-12-15|Qianmu Juyun Digital Technology (Shanghai) Co., Ltd.|Method and system for increasing the practicality of SVAC SVC|
CN107295334B|2017-08-15|2019-12-03|University of Electronic Science and Technology of China|Adaptive reference picture selection method|
CN107483960B|2017-09-15|2020-06-02|Xinyang Normal University|Motion compensation frame rate up-conversion method based on spatial prediction|
US10645408B2|2017-09-17|2020-05-05|Google Llc|Dual deblocking filter thresholds|
EP3457695A1|2017-09-18|2019-03-20|Thomson Licensing|Method and apparatus for motion vector predictor adaptation for omnidirectional video|
US10609384B2|2017-09-21|2020-03-31|Futurewei Technologies, Inc.|Restriction on sub-block size derivation for affine inter prediction|
US10623744B2|2017-10-04|2020-04-14|Apple Inc.|Scene based rate control for video compression and video streaming|
US11153604B2|2017-11-21|2021-10-19|Immersive Robotics Pty Ltd|Image compression for digital reality|
CN110832856A|2017-11-30|2020-02-21|SZ DJI Technology Co., Ltd.|System and method for reducing video coding fluctuations|
CN111164980A|2017-11-30|2020-05-15|SZ DJI Technology Co., Ltd.|System and method for controlling video encoding within image frames|
WO2019104635A1|2017-11-30|2019-06-06|SZ DJI Technology Co., Ltd.|System and method for controlling video coding at frame level|
US11140207B2|2017-12-21|2021-10-05|Google Llc|Network impairment simulation framework for verification of real time interactive media streaming systems|
KR101975404B1|2017-12-27|2019-08-28|Industry-Academia Cooperation Foundation of Sejong University|Apparatus and method for generating procedural content|
TWI680675B|2017-12-29|2019-12-21|MediaTek Inc.|Image processing device and associated image processing method|
EP3744102A1|2018-01-24|2020-12-02|Vid Scale, Inc.|Generalized bi-prediction for video coding with reduced coding complexity|
US11095876B2|2018-01-26|2021-08-17|Samsung Electronics Co., Ltd.|Image processing device|
US11004178B2|2018-03-01|2021-05-11|Nvidia Corporation|Enhancing high-resolution images with data from low-resolution images|
US10432962B1|2018-03-13|2019-10-01|Pixelworks, Inc.|Accuracy and local smoothness of motion vector fields using motion-model fitting|
CN110309328A|2018-03-14|2019-10-08|Shenzhen Intellifusion Technologies Co., Ltd.|Data storage method, device, electronic equipment and storage medium|
CN111417441A|2018-03-22|2020-07-14|Google LLC|Method and system for rendering and encoding content of an online interactive gaming session|
CN108449599B|2018-03-23|2021-05-18|Anhui University|Video coding and decoding method based on surface transmission transformation|
US11077364B2|2018-04-02|2021-08-03|Google Llc|Resolution-based scaling of real-time interactive graphics|
WO2019194953A1|2018-04-02|2019-10-10|Google Llc|Methods, devices, and systems for interactive cloud gaming|
US11110348B2|2018-04-10|2021-09-07|Google Llc|Memory management in gaming rendering|
US10491897B2|2018-04-13|2019-11-26|Google Llc|Spatially adaptive quantization-aware deblocking filter|
US10798382B2|2018-04-26|2020-10-06|Tencent America LLC|Sub-block transform|
CN110620929A|2018-06-19|2019-12-27|Beijing Bytedance Network Technology Co., Ltd.|Selected motion vector difference accuracy without motion vector prediction truncation|
US10602147B2|2018-07-10|2020-03-24|Samsung Display Co., Ltd.|Efficient entropy coding group grouping methodology for transform mode|
US11039315B2|2018-08-01|2021-06-15|At&T Intellectual Property I, L.P.|On-demand super slice instantiation and orchestration|
US10863179B1|2018-09-05|2020-12-08|Amazon Technologies, Inc.|Overlapped rate control for high-quality segmented video encoding|
JP2022500909A|2018-09-19|2022-01-04|Beijing Bytedance Network Technology Co., Ltd.|Use of syntax for affine mode with adaptive motion vector resolution|
US11218694B2|2018-09-24|2022-01-04|Qualcomm Incorporated|Adaptive multiple transform coding|
US10893281B2|2018-10-12|2021-01-12|International Business Machines Corporation|Compression of a video stream having frames with relatively heightened quality parameters on blocks on an identified point of interest |
US10764589B2|2018-10-18|2020-09-01|Trisys Co., Ltd.|Method and module for processing image data|
CN109640082B|2018-10-26|2021-02-12|Zhejiang Dingyue Electronics Co., Ltd.|Audio and video multimedia data processing method and equipment thereof|
US20200137421A1|2018-10-29|2020-04-30|Google Llc|Geometric transforms for image compression|
US10939102B2|2018-11-01|2021-03-02|Mediatek Inc.|Post processing apparatus with super-resolution filter and loop restoration filter in block-level pipeline and associated post processing method|
US10666985B1|2018-11-18|2020-05-26|Sony Corporation|Sub-block based entropy coding for image coding|
US10855988B2|2018-12-19|2020-12-01|Qualcomm Incorporated|Adaptive prediction structures|
US10645386B1|2019-01-03|2020-05-05|Sony Corporation|Embedded codec circuitry for multiple reconstruction points based quantization|
US10778972B1|2019-02-27|2020-09-15|Google Llc|Adaptive filter intra prediction modes in image/video compression|
US10924625B2|2019-03-20|2021-02-16|Xerox Corporation|Dynamic compression acceleration using real-time image data entropy analysis|
CN111901593A|2019-05-04|2020-11-06|Huawei Technologies Co., Ltd.|Image partitioning method, device and equipment|
WO2020242738A1|2019-05-26|2020-12-03|Alibaba Group Holding Limited|Ai-assisted programmable hardware video codec|
US10949982B1|2019-06-21|2021-03-16|Amazon Technologies, Inc.|Moving object recognition, speed estimation, and tagging|
WO2020263499A1|2019-06-24|2020-12-30|Alibaba Group Holding Limited|Adaptive resolution change in video processing|
WO2021026363A1|2019-08-06|2021-02-11|Op Solutions, Llc|Implicit signaling of adaptive resolution management based on frame type|
US20210176467A1|2019-12-06|2021-06-10|Ati Technologies Ulc|Video encode pre-analysis bit budgeting based on context and features|
US20220005200A1|2020-07-02|2022-01-06|Sony Corporation|Machine Learning based Image Segmentation Training with Contour Accuracy Evaluation|
Legal status:
2018-11-21| B06F| Objections, documents and/or translations needed after an examination request according to [chapter 6.6 patent gazette]|
2020-02-11| B15I| Others concerning applications: loss of priority|Free format text: Loss of priority of US 61/758,314 of 2013-01-30, claimed in PCT/US2013/077702, due to non-compliance with the requirement published in RPI 2547 of 2019-10-29 for submission of the correct assignment document|
2020-08-18| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-08-18| B15K| Others concerning applications: alteration of classification|Free format text: The previous classifications were: H04N 19/91 , H04N 19/50 Ipc: H04N 19/117 (2014.01), H04N 19/119 (2014.01), H04N |
2020-12-08| B11B| Dismissal acc. art. 36, par. 1 of IPL - no reply within 90 days to fulfill the necessary requirements|
2021-10-13| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application number | Filing date | Patent title
US201361758314P| true| 2013-01-30|2013-01-30|
PCT/US2013/077702|WO2014120369A1|2013-01-30|2013-12-24|Content adaptive partitioning for prediction and coding for next generation video|