Description
There is a long-standing issue with large messages in the existing `RTCDataChannel` API. If I want to send a single message containing 1 GiB of data (for example a large file), I have to hold that entire message in memory at the time of sending. If I receive a 1 GiB message, it is slowly reassembled until it is completely in memory and only then handed to the application. That creates memory pressure and backpressure problems.
My idea is to resolve this by extending the `RTCDataChannel` API in the following way:
Sending
Add a `.createWritableStream` method which returns a `WritableStream` instance. With the associated `WritableStreamWriter` instance, one can add chunks by calling `.write` on the writer. Once the writer is closed, the message is considered complete.
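A minimal sketch of how sending could look under this proposal; `createWritableStream` is the proposed method (it does not exist in any browser today), and the `sendLargeBlob` helper and 64 KiB chunk size are purely illustrative:

```ts
// Sketch only: .createWritableStream() is the method proposed in this issue;
// the cast is needed because it is not part of the current RTCDataChannel type.
async function sendLargeBlob(channel: RTCDataChannel, blob: Blob): Promise<void> {
  const writable: WritableStream<Uint8Array> = (channel as any).createWritableStream();
  const writer = writable.getWriter();

  // Write the payload in small chunks so the full message never sits in memory;
  // awaiting write() lets the channel exert backpressure on the sender.
  const chunkSize = 64 * 1024;
  for (let offset = 0; offset < blob.size; offset += chunkSize) {
    const slice = blob.slice(offset, offset + chunkSize);
    await writer.write(new Uint8Array(await slice.arrayBuffer()));
  }

  // Per the proposal, closing the writer marks the end of this single message.
  await writer.close();
}
```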
Receiving
If `.binaryType` is set to `stream`, the event raised for `onmessage` contains a `ReadableStream` instance that is created when the first chunk is received. Once the whole message has been read, the reader will signal end of stream (EOF) on a `.read` call, as specified by the Streams API.
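A minimal sketch of the receiving side, assuming the proposed `binaryType` value `"stream"` and a per-message `ReadableStream`; nothing here reflects current browser behaviour:

```ts
// Sketch only: binaryType === "stream" and a ReadableStream per message are
// what this issue proposes; today only "blob" and "arraybuffer" exist.
(channel as any).binaryType = "stream"; // hypothetical value from this proposal

channel.onmessage = async (event: MessageEvent) => {
  const stream = event.data as ReadableStream<Uint8Array>;
  const reader = stream.getReader();

  let received = 0;
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;               // end of this one message ("EOF")
    received += value.byteLength;  // process each chunk as it arrives
  }
  console.log(`message fully received: ${received} bytes`);
};
```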
Edit: What should happen when a string is received will need to be discussed.
In the meeting, I think there was some confusion about which streaming API I meant. Basically, I propose two streaming APIs that use WHATWG streams:
- WHATWG streams for the QuicStream API, which is being discussed here: https://github.com/w3c/webrtc-quic/issues/2
- WHATWG streams for the existing RTCDataChannel API. This is what this issue is about. With the above description, it should be clear that for `RTCDataChannel` there would have to be a stream for each message, since data channels transfer datagrams (discrete messages) and not a sequence of bytes. This is the point I wanted to make during the meeting. I hope this clarifies it a bit. :)