Block-Based Hybrid Video Coding Using Motion-Compensated Long-Term Memory Prediction
Copyright ©1997 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
NOTE: At the time of publication, the author Jane Zhang was not yet affiliated with Cal Poly.
Our new approach extends the spatial displacement vector used in block-based hybrid video coding by a variable time delay, permitting motion compensation from more frames than just the previously decoded one. The long-term memory covers several seconds of decoded frames on both the encoder and the decoder side. This scheme is well suited to repetition in the sequence, e.g., a head rotating back into a previous position, or a shaking camera. However, transmitting the variable time delay requires additional bit-rate, which may become prohibitive as the size of the long-term memory increases. Therefore, we control the bit-rate of the motion information by employing rate-constrained motion estimation. Simulation results are obtained by integrating long-term memory prediction into an H.263 codec. PSNR improvements of up to 2 dB for the Foreman sequence and 1.5 dB for the Mother-Daughter sequence are demonstrated in comparison to the TMN-2.0 H.263 coder.
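The rate-constrained search described above can be illustrated with a minimal sketch: for each block, the encoder searches over spatial displacements *and* a time delay into the long-term frame memory, picking the candidate that minimizes a Lagrangian cost J = SAD + λ·R, where R is the bits spent on the motion information. The rate model `mv_bits` below is a hypothetical stand-in (not the VLC tables an H.263 coder actually uses), and `long_term_search` is an illustrative name, not code from the paper.

```python
import numpy as np

def mv_bits(dx, dy, dt):
    # Hypothetical rate model: roughly 2*floor(log2(|v|+1))+1 bits per
    # component, a stand-in for real variable-length code tables.
    return sum(2 * int(np.log2(abs(v) + 1)) + 1 for v in (dx, dy, dt))

def long_term_search(block, memory, bx, by, search=2, lam=10.0):
    """Rate-constrained search over spatial displacement AND time delay.

    memory: list of previously decoded frames (index 0 = most recent),
            so the index itself is the variable time delay dt.
    Minimizes the Lagrangian cost J = SAD + lam * R(dx, dy, dt).
    Returns (J, dx, dy, dt) for the best candidate.
    """
    bs = block.shape[0]
    best = None
    for dt, ref in enumerate(memory):            # variable time delay
        for dy in range(-search, search + 1):    # spatial displacement
            for dx in range(-search, search + 1):
                x, y = bx + dx, by + dy
                if x < 0 or y < 0 or x + bs > ref.shape[1] or y + bs > ref.shape[0]:
                    continue  # candidate block falls outside the frame
                cand = ref[y:y + bs, x:x + bs]
                sad = np.abs(block.astype(int) - cand.astype(int)).sum()
                cost = sad + lam * mv_bits(dx, dy, dt)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy, dt)
    return best
```

Note how the Lagrange multiplier `lam` trades prediction quality against motion bit-rate: a perfect match found deep in the memory can still lose to a slightly worse match that is cheaper to signal.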
Thomas Wiegand, Xiaozheng Zhang, and Bernd Girod. "Block-Based Hybrid Video Coding Using Motion-Compensated Long-Term Memory Prediction." Picture Coding Symposium Proceedings, Berlin, 1997.
Available at: http://works.bepress.com/jzhang/11