[matroska-devel] Re: How UCI and Matroska could interact
steve.lhomme at free.fr
Thu Jan 23 12:28:51 CET 2003
In reply to Toby Hudon <gldm at mail.com>:
> The problem we were discussing was issues of randomly seeking frames,
> when that frame might be dependent on another frame's data. For me, if I
> have the previous keyframe, I have all the data, because I only store
> data in keyframes. The other frames are either placeholders to know when
> it's time to serve up the next frame worth of data, or don't exist
> assuming a modern API like UCI that can deal with this concept of just
> asking for a frame at the right time with no data.
That's close to reality. The problem is that, from the container's point of view,
we don't know whether the frame to be displayed is the 2nd or the 31st (that's
what we get from using timecodes for references instead of numbers), and I assume
your codec has to know that number to display the correct frame, especially since
you already mentioned that your codec doesn't deal with time (which allows good
variable frame rate support). So when you have the 31st one, you need to know
that it's the 31st and not the 2nd.
For this case, a simple reference number (31) could be used in addition to the
backward timecode reference. But that's not really general, as future codecs like
yours (that don't want to be tied to the old VfW API) might store different
information in each of the 31 frames, and even repeat a key frame at a non-fixed
rate (not every 32 or 64 frames, but whenever the codec decides to, such as on
scene changes). Of course such a codec could only be stored in a modern container
*grin*... A frame that needs information from a key frame 1012 frames earlier and
also from the last 4 frames is a plausible case for future advanced codecs, all
this at a variable frame rate :)
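To make the timecode-vs-number problem concrete, here's a rough C sketch (my own illustration, not Matroska code): given the timecodes of the blocks in decode order, the "31st frame after the key frame" count has to be recovered from a backward timecode reference alone.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: recover the frame count since the referenced
 * key frame from timecodes only, since that's all the container stores. */
static int frame_index_since_key(const long long *timecodes, size_t n,
                                 long long key_tc, long long block_tc)
{
    int i_key = -1, i_block = -1;
    for (size_t i = 0; i < n; i++) {
        if (timecodes[i] == key_tc)   i_key = (int)i;
        if (timecodes[i] == block_tc) i_block = (int)i;
    }
    if (i_key < 0 || i_block < i_key)
        return -1;              /* reference not found in the stream */
    return i_block - i_key;     /* 0 = the key frame itself */
}
```

Note this only works if no block between the key frame and the current one was dropped, which is exactly why the count isn't enough on its own.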
> However, for things to run smoothly I need to be decoding the next
> block's worth of data while the current block is decoded and playing.
> Otherwise at every keyframe/block there'll be a huge pause for decode.
> So it needs at least a double buffer. This means I need TWO keyframes
> all the time.
Well, in BeOS this is handled through a delay that every codec has to define.
IIRC, such a thing already exists in UCI.
Frame-precision seeking matters more at the editing level than at the playback level.
> The interface will likely test for small seeks that are within the
> current block and can deliver those easily. However, for seeks outside
> the current block, there's a more complicated procedure. The interface
> (this is still my code, not UCI's) needs to call the core and flush() to
> clear out the two existing blocks. Then it needs to call
> init_2_buffer(frame A, frame B) with 2 keyframes of data. For stupid
> codecs like VFW, this may involve serving some NULL frames with blank
> data (my interface will generate these as needed) to get the second
> keyframe. Smarter APIs will probably have a way to just directly request
> a future frame like for a b-frame. This is basically the same procedure
> as at the start of a file. Then as the blocks are decoded,
> decompress(frame F) can be called as normal by the interface. How the
> decompress function is triggered, either by dummy frames or an API
> instruction to display the next frame is unimportant, my interface for
> each API will handle these details. However, the dummy frame method may
> waste some bandwidth storing the extra frames in some containers, so the
> call for next frame method is preferred.
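The seek path described above can be sketched like this in C (flush(), init_2_buffer() and decompress() are the hypothetical interface calls named in the mail, not a real UCI API; the fixed key interval is my own simplification):

```c
#include <assert.h>

/* Sketch only: the decoder needs two keyframes primed before it can
 * serve frames, as described above. */
typedef struct {
    int key_a, key_b;   /* keyframe numbers held in the two buffers */
    int primed;         /* nonzero once init_2_buffer() has run     */
} decoder;

static void flush(decoder *d) { d->primed = 0; }

static void init_2_buffer(decoder *d, int key_a, int key_b)
{
    d->key_a = key_a;
    d->key_b = key_b;
    d->primed = 1;
}

/* Returns the keyframe that frame F decodes from, or -1 if the
 * decoder was not primed first (assumes a fixed key interval). */
static int decompress(const decoder *d, int frame, int key_interval)
{
    if (!d->primed)
        return -1;
    return (frame / key_interval) * key_interval;
}
```

A seek outside the current block would then be: flush(), init_2_buffer() with the two keyframes around the target, then decompress() as normal.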
Hmmm... Well, in Matroska you can request frames by timecode but not directly by
a reference count (like granulepos). This is done to make it possible to decode a
P frame even when a frame is missing between that P frame and the reference key
frame. So in your case you would have to deal with that too... That's why it's
best to store the references at the container level, since that's the level that
will deal with errors and missing data.
This way, in Matroska you can be *sure* that when you read a frame you either
have all the references or you don't. The codec doesn't need to care about it;
it should just allow frame gaps in the stream, i.e. be told which frames were/are
the reference ones for a given frame.
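A rough sketch of that container-level check (illustrative only, not the Matroska reading API): before handing a block to the codec, verify that every timecode it references was actually read.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: returns 1 if every referenced timecode is among the frames
 * we actually have, so the codec never has to care about gaps itself. */
static int refs_satisfied(const long long *have, size_t n_have,
                          const long long *refs, size_t n_refs)
{
    for (size_t r = 0; r < n_refs; r++) {
        int found = 0;
        for (size_t i = 0; i < n_have; i++) {
            if (have[i] == refs[r]) {
                found = 1;
                break;
            }
        }
        if (!found)
            return 0;   /* a reference is missing */
    }
    return 1;           /* safe to hand the block to the codec */
}
```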
> From what Pamel said I think when you seek with VFW on a normal mpeg
> file, VFW will give you the previous keyframe and all the frames
> in between so you can compute the p-frame (assuming you seek to a
> p-frame). I'm not sure if this is true. Can anyone clarify it for me?
I'm not sure either, but I think that's how it works. And it's how it shouldn't
be done.