[matroska-devel] Re: Common Opensource codec API
christian at matroska.org
Sat Jun 28 16:35:45 CEST 2003
Ronald Bultje wrote:
>On Fri, 2003-06-27 at 01:47, Guenter Bartsch wrote:
>>so, basically i think it would be interesting to see if it is possible
>>to agree on a common standard for free multimedia plugins, especially
I'd have high hopes for this too, but I am not convinced that a common
denominator can be found easily.
>>let's see, what he says :) one problem with gstreamer imho is their many
>>g'isms, not sure what the current state there is. i think a common api
>should not be dependent on stuff like glib or gobject (though i love
>>glib i don't think it's a good idea to force everyone into using it).
>As said above, I tend to agree that it's not a good dependency for a
>core thing such as a codec API.
I fully agree here, BBB. Such a codec API could perhaps be based on a
small lib, but that's it. Alex Stewart's libuci was more of an example
lib showing how UCI could be used from an app; on the codec side it
wasn't required at all, IIRC:
>What I'd love to see is the codec API not being an actual lib, but just
>a protocol, like X. Each application can then write its own
>implementation code for this, and the codec API can, if wanted, provide
>a header file describing all structs (if any) etc. used in the protocol.
>Tying ourselves to one and the same lib likely won't work, even now
>we're already reproducing so much of the same code... Do people agree
LOL. Maybe OT here, but this reminds me that somebody once compared
EBML, the backbone of Matroska and a kind of binary XML, to a protocol
for LAN printers ... :) ...
>Ok, now back to the actual question, what would it look like. A protocol
>describes structs or data fields, documents what properties mean what,
>etc. If we want a common demuxer API, I'd very much prefer if people
>would use the parent-child object stuffing I described in the beginning
>of this email. Each bytestream can export codec streams, which can be
>anything, including but not limited to audio, video, subtitle, teletext,
>private data or whatever. Types define the actual type of the codec
>stream, and by means of a child class, all specifics are filled in for
>the particular stream.
>For audio, this is rate, channels, maybe (in case of raw audio) the
>bitsize (8, 16) or container type (uint8_t, int16_t, maybe expansions of
>these for future reference), this can also be float data, btw! We just
>need to document what property name means what, and then simply make a
>name/value pair system that describes all these.
>For video, this would be width, height, ... For subtitles, this would be
>nothing at first, I guess.
>The parent object describes things like timestamps, duration, the actual data.
>Codecs: much easier. One input, one output. Properties are the same as
>the properties for the codec stream in the demuxer in case of the
>encoded data. For the decoded data, the same applies, but we'll probably
>want to provide some more properties for 'raw' data. Anyway, this just
>needs specifying. The mimetypes document above is what we currently use,
>other comments are welcome, of course.
>Concerns from my side, for as far as my experience goes with codecs in
>GStreamer: do we want to separate between codec and bytestream in case
>where these are (almost) the same, such as ogg/vorbis, mp3, etc? If so,
>what are the exact tasks of the parser (e.g. defining properties
>required by the codec, maybe metadata) and the codec (decoding), and
>what do we do if these interfere (e.g. I'm being told that not ogg, but
>vorbis contains the metadata of a stream!).
BBB, can you invest a couple of hours and come up with a small doc
describing such a protocol, so it can be discussed on the lists that
have been involved and have expressed interest in such a solution?
>And just to state clearly: our final goal is to propose a standardized
>API or interface of how codecs, muxer/demuxer libraries etc. should look
>to be usable by our applications. It is not to define how a bytestream
>should look. ;). Just so I (and you) know what we're actually talking about.
Alex had the following plans for UCI :
UCI : codec API
UFI : filter API
UMI : muxing API, so that various containers could be used from
Each of these required a different level of complexity, with UCI being
the lowest. UCI itself could not handle streams with more than one
substream in them (like DV type 1), but that was to be supported by UMI.
Very unfortunately, not even UCI got anywhere close to completion, and
we have had no sign of life from Alex for a couple of months now, but
maybe somebody will find the time to look at what he did and documented
so far? http://uci.sf.net . BBB, is Alex's approach similar to the
protocol approach you suggest?
>Enough typing for now, I'd better get back to actual work here. :-o.
>Comments are very much appreciated. ;).
>Thanks for reading, Ronald
Great ideas, BBB; of course I only understand half of it ;-) ... Any
other devs care to comment in more detail? Could such a 'protocol'
solution be something you would want to support?