Closed Bug 822956 Opened 12 years ago Closed 12 years ago

Paused audio streams captured from getUserMedia buffer audio while paused

Categories

(Core :: WebRTC: Audio/Video, defect)

Priority: Not set
Severity: normal

Tracking


VERIFIED FIXED
mozilla20

People

(Reporter: jesup, Assigned: jesup)

References

Details

(Whiteboard: [getUserMedia][blocking-gum+])

Attachments

(9 files, 9 obsolete files)

2.50 KB, text/html; Details
9.18 KB, patch; jesup: review+; Details | Diff | Splinter Review
2.25 KB, text/html; Details
15.10 KB, patch; robert: review+; Details | Diff | Splinter Review
1.08 KB, patch; jesup: review+; Details | Diff | Splinter Review
6.78 KB, patch; mreavy: review+, jesup: checkin+; Details | Diff | Splinter Review
4.65 KB, patch; Details | Diff | Splinter Review
2.24 KB, patch; jesup: review-; Details | Diff | Splinter Review
1.15 KB, text/plain; ekr: review+; Details
Paused realtime streams should throw away all audio while the output is paused, and resume with current output when the destination starts to play again.  Note: the data queued in the graph at the time of pausing should also be dropped, and I'm not sure the method here does that.

Spun off from bug 806426  (comment 7 there:)

 Robert O'Callahan (:roc) (Mozilla Corporation) 2012-11-01 02:15:50 PDT

A relatively easy way to solve this is to never allow the SourceMediaStream to block. Wrap it in a TrackUnionStream with FLAG_BLOCK_OUTPUT but not FLAG_BLOCK_INPUT, and give that stream out instead. (Sorry the documentation for these flags in MediaStreamGraph.h is wrong!)

The downside of that approach is that it'll make it harder to improve the latency for WebRTC (though it won't make latency worse than it is now). But I still think it's the right thing to do.
Attachment #693756 - Flags: review?(roc)
Comment on attachment 693756 [details] [diff] [review]
convert to TrackUnionStreams for getUserMedia (and add hint support)

Review of attachment 693756 [details] [diff] [review]:
-----------------------------------------------------------------

r+ with that fixed.

::: dom/media/MediaManager.cpp
@@ +278,5 @@
> +    // connect the source stream to the track union stream to avoid us blocking
> +    nsRefPtr<MediaInputPort> port = trackunion->GetStream()->AsProcessedStream()->
> +      AllocateInputPort(stream->GetStream()->AsSourceStream(),
> +                        MediaInputPort::FLAG_BLOCK_INPUT |
> +                        MediaInputPort::FLAG_BLOCK_OUTPUT);

You don't want the BLOCK_INPUT flag.
Attachment #693756 - Flags: review?(roc) → review+
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #2)

> > +    // connect the source stream to the track union stream to avoid us blocking
> > +    nsRefPtr<MediaInputPort> port = trackunion->GetStream()->AsProcessedStream()->
> > +      AllocateInputPort(stream->GetStream()->AsSourceStream(),
> > +                        MediaInputPort::FLAG_BLOCK_INPUT |
> > +                        MediaInputPort::FLAG_BLOCK_OUTPUT);
> 
> You don't want the BLOCK_INPUT flag.

Right; I made the mod on the other machine.

I found a worse problem in testing: Calling Finish() on the TrackUnion stream off of MainThread is verboten, while it's fine on a SourceStream.

Alternatives:
1) Send the TrackUnion stream to mainthread to call Finish() (ironic, since I'm calling it *from* the MediaStreamGraph thread that processes Finish messages).
2) Modify Finish() to allow being called off MainThread (at least from the MediaStreamGraph thread).
3) Don't call Finish() (probably not an option).
4) Call it somewhere else, perhaps from the code that removes listeners, if it's on MainThread.
5) Modify TrackUnion to inherit the Finished state from its input(s) if they're all Finished.
use SetAutofinish() on TrackUnionStream; Finish() isn't allowed off-main-thread for TrackUnion streams
Attachment #693756 - Attachment is obsolete: true
Comment on attachment 693767 [details] [diff] [review]
convert to TrackUnionStreams for getUserMedia (and add hint support)

Turns out the last option was already in there. :-)

Pretty obvious update - use SetAutofinish(), and we no longer need the port in the runnable (which was calling Finish()).
Attachment #693767 - Flags: review?(roc)
https://hg.mozilla.org/mozilla-central/rev/3dead2094684
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Reopened because
1) we get non-framesize grouping of samples (eg 177)
2) Samples get converted to floats

https://hg.mozilla.org/integration/mozilla-inbound/rev/95bb46813e58
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
(In reply to Randell Jesup [:jesup] from comment #8)
> Reopened because
> 1) we get non-framesize grouping of samples (eg 177)
> 2) Samples get converted to floats
> 
> https://hg.mozilla.org/integration/mozilla-inbound/rev/95bb46813e58

Ultimately we'll need to be able to handle getting data in arbitrary size-chunks.

In this case, we seem to be getting synthetic silence. I don't know why that would happen. If the input stream isn't producing anything, the output stream should be blocked. There's probably a bug here.

mBufferFormat is irrelevant. When mBuffer is null, we know the data is simply silence of duration mDuration.
Comment on attachment 696141 [details] [diff] [review]
Handle audio chunks that aren't 10ms in duration

Review of attachment 696141 [details] [diff] [review]:
-----------------------------------------------------------------

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +696,5 @@
>      }
>    }
>  
> +  // This code sends audio in exact 10ms chunks
> +  chunk_remaining = chunk.mDuration;

chunk_remaining and mDuration appear to be of different types:

chunk_remaining is uint32_t and mDuration is TrackTicks (int64_t).

This isn't a safe conversion for a number of reasons.

@@ +700,5 @@
> +  chunk_remaining = chunk.mDuration;
> +  while (chunk_remaining && (buffer_remaining = (samplenum_10ms_ - buffer_current_)) != 0) {
> +    if (chunk_remaining < buffer_remaining) {
> +      // Underfull, not enough to send, add samples to buffer, then leave function
> +      memcpy(&samples_10ms_buffer_[buffer_current_], samples.get(), chunk_remaining);

This memcpy seems to be copying chunk_remaining bytes when it should be copying chunk_remaining * sizeof(int16_t) bytes

@@ +705,5 @@
> +      buffer_current_ += chunk_remaining;
> +      return;
> +    } else {
> +      // Either overfull or exactly equal; so send what we have, save any remainder for next loop
> +      memcpy(&samples_10ms_buffer_[buffer_current_], samples.get(), buffer_remaining);

This doesn't seem to me to advance samples every time through the loop.

Say I pass in a buffer of 50 ms, aren't you just passing out the same leading 10ms?

@@ +712,5 @@
> +      // Reset buffer_current_ since we just sent a full buffer of data and need to start a new buffer
> +      buffer_current_ = 0;
> +      chunk_remaining -= buffer_remaining;
> +    }
> +  }

This loop seems pretty hard to follow because you are copying 10ms at
at a time rather than just trying to be multiples of 10ms. I would suggest
something that (a) didn't loop and (b) didn't try to mix the cases as
much.

chunk_remaining = chunk.mDuration;
int16_t *samples_tmp = samples.get();

if (buffer_current_) {
  tocpy = std::min(chunk_remaining, samplenum_10ms_ - buffer_current_);
  memcpy(&samples_10ms_buffer_[buffer_current_], samples_tmp, tocpy * sizeof(int16_t));
  buffer_current_ += tocpy;
  samples_tmp += tocpy;
  chunk_remaining -= tocpy;

  if (buffer_current_ == samplenum_10ms_) {
     conduit->SendAudioFrame(...);
     buffer_current_ = 0;
  } else {
     return;
  }
}

// Now send out as much as we can in whole 10ms frames.
tocpy = (chunk_remaining / samplenum_10ms_) * samplenum_10ms_;
conduit->SendAudioFrame(samples_tmp, tocpy...);
samples_tmp += tocpy;
chunk_remaining -= tocpy;

if (chunk_remaining) {
   memcpy(&samples_10ms_buffer_[0], samples_tmp, chunk_remaining * sizeof(int16_t));
   buffer_current_ = chunk_remaining;
}

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.h
@@ +279,5 @@
>      volatile bool active_;
> +
> +    nsAutoArrayPtr<int16_t> samples_10ms_buffer_;
> +    uint32_t buffer_current_;
> +    uint32_t samplenum_10ms_;

Can you add comments to describe what these do?
Attachment #696141 - Flags: review?(ekr) → review-
Attachment #696141 - Attachment is obsolete: true
In the process of verifying that my patch worked, I noticed there was significant static in the audio as well as a varying, noticeable delay (varied between 1 and 3 seconds) with a simple, local PeerConnection Audio test running on a Mac.  I was running with the TrackUnion patch and my patch.  I'll upload the test case I was using so others can try it.  I also verified that the audio coming into my code already had static and that the contents of the audio buffers were unchanged by my code.

In short, I'm concerned that we may be wallpapering over a fundamental problem in MediaStreamGraph or related code, at least on Mac.  The problem/crash my patch is working around (sending non-10ms audio chunks to the GIPS audio code) did not happen for me on Linux.  I have not tried listening to the audio on a Linux or Windows box to see if it has the same problems I'm hearing on Mac (or if it's clean).  That is probably a good next step here.
Flags: in-testsuite?
I hear a lot of noise too, fwiw.
un-bitrot trackunion patch, carry-forward r=roc
Attachment #693767 - Attachment is obsolete: true
Attachment #696661 - Flags: review+
Sorry, forgot to save a buffer after resolving the MediaManager.h conflict
Attachment #696661 - Attachment is obsolete: true
Attachment #696662 - Flags: review+
I get 161-sample chunks on Windows as well.
I think 161 is just an off-by-one error for 160 (10ms of audio).
And that's just because we track time in units of 2^-20 and 10ms isn't exactly representable in those units, so we can't always hit exactly 160 samples after converting to the 16000 sample rate.
When we're not going through PeerConnection, things work fine and there's no static.
I think the problem is that sometimes we ask the MediaPipelineReceiveAudio for 161 samples. Then we hit this code:

  // Number of 10 ms samples we need
  //int num_samples = ceil(time_s / .01f);

  // Doesn't matter what was asked for, always give 160 samples per 10 ms.
  int num_samples = 1;

  MOZ_MTLOG(PR_LOG_DEBUG, "Asking for " << num_samples << "sample from Audio Conduit");

So, we underrun :-(.

I'm not really sure how this is supposed to work. It looks like this code is assuming NotifyPull runs *exactly* every 10ms?
I think (and I'll check) the GIPS code will only provide multiples (or single instances of) 10ms of audio samples.  

Also, that's on the reception side - data coming out of RTP and codec decode being fed into a mediaStream.  The audio quality issue we're hitting was in Transmit code fed from a MediaStream via NotifyQueuedTrackChanges(), where the data in the MediaStream originally came from getUserMedia.

The GetUserMedia input code gets called on a callback from the GIPS capture code every 10ms, and it calls AppendFrames with that.  See content/media/webrtc/MediaEngineWebRTCAudio.cpp in ::Process().

The receive-side issue may need some resolution as well (and from looking at that it might) - but that's a different bug.
OK. So is testcase in comment #22 (gUM only, no PeerConnection) full of static for you? Because it works fine for me.

I wasn't aware of this GIPS 10ms requirement until recently. It's kind of a problem given we have to work with processing periods (for native audio callbacks and Web Audio) that aren't 10ms.
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #25)
> OK. So is testcase in comment #22 (gUM only, no PeerConnection) full of
> static for you? Because it works fine for me.

gUM only is clean for me too on a Mac.  The problem occurs when PeerConnection is invoked.  Specifically, I hear static on the input to MediaPipelineTransmit::PipelineListener::ProcessAudioChunk, which gets called from NotifyQueuedTrackChanges.

FYI: We're not planning to pref on PeerConnection at Firefox 20 uplift, but we are planning to pref on gUM at or before uplift -- and we're hoping to pref on PeerConnection during Aurora 20.
(In reply to Maire Reavy [:mreavy] from comment #26)
> Specifically, I hear static on the input to
> MediaPipelineTransmit::PipelineListener::ProcessAudioChunk, which gets
> called from NotifyQueuedTrackChanges.

How did you verify that?

Running your test, I see null AudioChunks (mBuffer == 0) being received by MediaPipelineTransmit::PipelineListener::ProcessAudioChunk only as we transmit the "fake" MediaStream added to pc2. I do not see any such AudioChunks being received as we transmit the actual microphone input.
Again, this is on Windows, so maybe there is a platform-specific transmit-side issue.

(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #25)
> I wasn't aware of this GIPS 10ms requirement until recently. It's kind of a
> problem given we have to work with processing periods (for native audio
> callbacks and Web Audio) that aren't 10ms.

We may be able to deal with this by having each iteration of the MediaStreamGraph process exactly 10ms (which will require a change in units so we can represent 10ms exactly). If the OS audio callback doesn't request an exact multiple of 10ms we'd basically round it up to the nearest 10ms and store the leftover samples for the next callback. Web Audio currently requires processing 128-sample blocks, which means the Web Audio sample rates would be restricted to multiples of 128*100 = 12800, which isn't very good, but maybe that requirement isn't real and we can use blocks of 32, say, in which case we could use sample rates of 16KHz, 32KHz and 48KHz.

I hope there aren't any other unknown requirements because things are getting kinda tight :-).
I know that on Linux, at least in classic embedded systems, the buffer sizes for audio tended to be values that were painful for other code, such as 16ms.  The reason for this was using power-of-two sized buffers, combined with sample rates that tended to be things like 8000, 16000 and for fun 44100 and 48000.  None of those end up with 10ms or something that cleanly multiplies to 10ms.  For example, 8000Hz 16bit PCM into a 256 byte buffer = 16ms per buffer.

Since the 10ms thing is a GIPS API requirement, it can be enforced at the GIPS API entrances (like in Maire's patch), and leave the core to work in a cleaner/more-efficient manner.
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #27)
> (In reply to Maire Reavy [:mreavy] from comment #26)
> > Specifically, I hear static on the input to
> > MediaPipelineTransmit::PipelineListener::ProcessAudioChunk, which gets
> > called from NotifyQueuedTrackChanges.
> 
> How did you verify that?

I wrote the audio input coming into my code (before I did anything with it) to a file.  I also wrote the output of my code (right before I sent the 10ms audio buffers to GIPS) to a second file.  I compared the two files, and they were identical.  I then loaded the input file into Audacity and listened to it.  I heard the static very clearly.

I then backed out *just* the TrackUnion patch (kept my code in) and re-did the exact same test.  The input file was clean (no static).

Randell's last comment (Comment 29) speaks to where we deal with enforcing the 10ms buffer size that the GIPS code requires.  That is easy to do, and I'm not worried about that, per se (regardless of where we decide to enforce the buffer size).

The big question IMO is: where is the static coming from and how do we get rid of it?

NOTE: All of these comments above apply to the input of ProcessAudioChunk.  This does not speak to the output side of PeerConnection.
(In reply to Maire Reavy [:mreavy] from comment #30)
> I wrote the audio input coming into my code (before I did anything with it)
> to a file.  I also wrote the output of my code (right before I sent the 10ms
> audio buffers to GIPS) to a second file.  I compared the two files, and they
> were identical.  I then loaded the input file into Audacity and listened to
> it.  I heard the static very clearly.

OK. Do you see MediaPipelineTransmit::PipelineListener::ProcessAudioChunk being passed AudioChunks with null mBuffer --- for the stream that's coming from the microphone?
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #31)
> OK. Do you see MediaPipelineTransmit::PipelineListener::ProcessAudioChunk
> being passed AudioChunks with null mBuffer --- for the stream that's coming
> from the microphone?

Yes, I do, and they are all non-10 ms sizes (ranging from 120-ish to 190-ish samples).
OK, then I think I need to go into the office and try this on Mac.
Note - the "convert to trackunionstream patch" here appears to be causing mochitest failures per https://bugzilla.mozilla.org/show_bug.cgi?id=814718#c14.
(In reply to Maire Reavy [:mreavy] from comment #32)
> (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #31)
> > OK. Do you see MediaPipelineTransmit::PipelineListener::ProcessAudioChunk
> > being passed AudioChunks with null mBuffer --- for the stream that's coming
> > from the microphone?
> 
> Yes, I do, and they are all non-10 ms sizes (ranging from 120-ish to 190-ish
> samples).

OK, I just did this experiment on the office Mac Mini with an old USB camera+microphone combo, and I do *not* see null mBuffers in the microphone stream.

Next I'll try running the gUM mochitests and see if I can reproduce problems there.
BTW here's exactly what I did:
1) Opened and started the testcase in comment #15
2) Attached gdb and set a breakpoint in MediaPipeline.cpp on the line after "if (chunk.mBuffer)". This breakpoint gets hit when we get a non-null audio chunk.
3) When that breakpoint gets hit, record the value of 'this' (let's call it T); it's the PipelineListener for the microphone stream. Disable the breakpoint.
4) Set a breakpoint in MediaPipeline.cpp on the line after "// This means silence." This breakpoint gets hit when we get a null audio chunk. Make this breakpoint conditional on this==T.
5) That second breakpoint never fires. It does fire if made unconditional (for the PipelineListener for the fake media stream).
Running the gUM mochitests produces memory leaks. Digging into that, I noticed a couple of things:
1) Nothing is calling Destroy on the MediaInputPort. Fixing that should fix leaks.
2) We create an nsDOMLocalMediaStream wrapper for the SourceMediaStream that receives the actual data. This is unnecessary. Eliminating that wrapper simplifies code, because in a couple of places we can use nsRefPtr<SourceMediaStream> instead of nsRefPtr<nsDOMLocalMediaStream>; the former is thread-safe whereas the latter is not.
I'm working up a patch for these issues.
With that patch, I run the gUM tests without leaking.
Blocks: 812648
Whiteboard: [getUserMedia], [blocking-gum+] → [getUserMedia][blocking-gum+][automation-blocked]
Comment on attachment 696983 [details] [diff] [review]
Part 2: Don't wrap getUserMedia's SourceMediaStream in a DOM object wrapper. Create an nsDOMUserMediaStream wrapper specifically to clean up the SourceMediaStream and the MediaInputPort

Review of attachment 696983 [details] [diff] [review]:
-----------------------------------------------------------------

r+ with removal per this comment

::: dom/media/MediaManager.cpp
@@ +344,5 @@
>  
>      // Dispatch to the media thread to ask it to start the sources,
>      // because that can take a while
>      nsRefPtr<MediaOperationRunnable> runnable(
> +      new MediaOperationRunnable(MEDIA_START, listener->Stream(),

Let's just pass listener (which MediaOperation already knows how to reference for the source), and remove the variant in MediaOperation that gets an explicit stream.
Attachment #696983 - Flags: review?(rjesup) → review+
This bug in itself isn't causing leaks; the leaks came from the original patches here when they were tested. I think they will be fixed with this patch. If this bug leads to other bugs being filed that end up leaking because of this patch, flag those.
No longer blocks: 812648
Keywords: mlk
Whiteboard: [getUserMedia][blocking-gum+][automation-blocked] → [getUserMedia][blocking-gum+]
Attached patch part 2 v2Splinter Review
With requested changes
Attachment #696983 - Attachment is obsolete: true
Attachment #697258 - Flags: review+
I found a bug in ProcessAudioChunk. Does this fix help?
Assignee: rjesup → roc
Attachment #697323 - Flags: review?(rjesup)
Comment on attachment 697323 [details] [diff] [review]
ProcessAudioChunk needs to take account of AudioChunk ::mOffset

Good catch.  Does mOffset come (more?) into play with TrackUnion/Processed Audio streams?
Attachment #697323 - Flags: review?(rjesup) → review+
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #43)
> Created attachment 697323 [details] [diff] [review]
> ProcessAudioChunk needs to take account of AudioChunk ::mOffset
> 
> I found a bug in ProcessAudioChunk. Does this fix help?

You solved my problem!  It sounds perfect now on the input!  Looking at the fix, it makes sense that an offset would cause the "static" I was hearing. 

The remaining audio degradation I'm hearing (mostly pops and clicks) is now on the output -- almost certainly due to Bug 825611.

Thanks, Rob!!
Blocks: 825594
(In reply to Randell Jesup [:jesup] from comment #44)
> Good catch.  Does mOffset come (more?) into play with TrackUnion/Processed
> Audio streams?

I guess we split buffers in situations where we didn't before, although honestly I'm surprised this wasn't hurting us before.
Simplified while loop
Attachment #696497 - Attachment is obsolete: true
Attachment #696497 - Flags: review?(ekr)
Alternate version.  Also tested.
Attachment #697627 - Flags: review?(ekr)
Attachment #697627 - Attachment is patch: true
Comment on attachment 697627 [details] [diff] [review]
Guarantee audio buffers sent to GIPS are 10ms - version 2

Review of attachment 697627 [details] [diff] [review]:
-----------------------------------------------------------------

I'm r-ing this because I think I've found a bug and I'm concerned about the integer size issues.

The rest of my comments are trivial.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +685,5 @@
> +  // we got are not multiples of 10ms OR there were samples left over 
> +  // from the last run)
> +  int32_t chunk_remaining;
> +  int32_t tocpy;
> +  int16_t *samples_tmp = samples.get();

Can we move these down towards the site of use?

@@ +692,5 @@
> +  // I realize it may be overkill to check if the rate has changed, but
> +  // I believe it is possible (e.g. if we change sources) and it costs us 
> +  // very little to handle this case
> +  if (samplenum_10ms_ !=  rate/100) {
> +    MOZ_ASSERT(!(rate%100)); // rate should be a multiple of 100

Move this assert out of the block. That way we catch (say) a change from rate 100 to rate 101.

@@ +701,5 @@
> +    samples_10ms_buffer_ = new int16_t[samplenum_10ms_];
> +    buffer_current_ = 0;
> +  }
> +
> +  chunk_remaining = chunk.mDuration;

There is a type conversion error here from int64_t to int32_t. I think there needs to be a check or alternately change
all the values to int64_t.

@@ +708,5 @@
> +    tocpy = std::min(chunk_remaining, samplenum_10ms_ - buffer_current_);
> +    memcpy(&samples_10ms_buffer_[buffer_current_], samples_tmp, tocpy * sizeof(int16_t));
> +    buffer_current_ += tocpy;
> +    samples_tmp += tocpy;
> +    chunk_remaining = chunk.mDuration - tocpy;

how about chunk_remaining -= tocpy.

That way we don't need to worry about the type conversion issue above.

@@ +712,5 @@
> +    chunk_remaining = chunk.mDuration - tocpy;
> +    
> +    if (buffer_current_ == samplenum_10ms_) {
> +      // Send out the audio buffer we just finished filling
> +      conduit->SendAudioFrame(samples_10ms_buffer_, samplenum_10ms_, rate, 0);

You need to set buffer_current_ =  0 here, I believe. Otherwise if we have only enough to fill the buffer, we leave it
set to samplenum_10ms_

@@ +713,5 @@
> +    
> +    if (buffer_current_ == samplenum_10ms_) {
> +      // Send out the audio buffer we just finished filling
> +      conduit->SendAudioFrame(samples_10ms_buffer_, samplenum_10ms_, rate, 0);
> +      // We'll fall out of the 2 if statements and finish sending what we can 

This comment seems unclear.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.h
@@ +252,5 @@
>    class PipelineListener : public MediaStreamListener {
>     public:
>      PipelineListener(const RefPtr<MediaSessionConduit>& conduit)
> +        : conduit_(conduit), active_(false), buffer_current_(0),
> +          samplenum_10ms_(0) {}

The convention in this code is to explicitly initialize everything. please add samples_10ms_buffer_()

@@ +279,5 @@
>      volatile bool active_;
> +
> +    // These vars handle breaking audio samples into exact 10ms chunks:
> +    // The buffer of 10ms audio samples that we will send once full 
> +    // (can be carried over from one call to another):

The convention in this  code is for comments to end with "."

Also, heads-up on trailing whitespace throughout this code.

@@ +282,5 @@
> +    // The buffer of 10ms audio samples that we will send once full 
> +    // (can be carried over from one call to another):
> +    nsAutoArrayPtr<int16_t> samples_10ms_buffer_;
> +    // The location of the pointer within that buffer (in units of samples):
> +    int32_t buffer_current_;

Should these be uint32_t? Is there any way for them to be negative?
Attachment #697627 - Flags: review?(ekr) → review-
Comments addressed.
Attachment #697590 - Attachment is obsolete: true
Attachment #697627 - Attachment is obsolete: true
Attachment #697689 - Flags: review?(ekr)
Attachment #697689 - Attachment is patch: true
Comment on attachment 697689 [details] [diff] [review]
Guarantee audio buffers sent to GIPS are 10ms - version 3

Review of attachment 697689 [details] [diff] [review]:
-----------------------------------------------------------------

lgtm with the issues below fixed.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +703,5 @@
> +  int64_t chunk_remaining;
> +  int64_t tocpy;
> +  int16_t *samples_tmp = samples.get();
> +
> +  chunk_remaining = chunk.mDuration;

I think we should test for chunk_remaining < 0? This code is going to behave very badly in that case.

@@ +729,5 @@
> +    conduit->SendAudioFrame(samples_tmp, tocpy, rate, 0);
> +    samples_tmp += tocpy;
> +    chunk_remaining = chunk.mDuration - tocpy;
> +  }
> +  // Copy what remains for the next run

Let's MOZ_ASSERT(chunk_remaining < samplenum_10ms_);

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.h
@@ +252,5 @@
>    class PipelineListener : public MediaStreamListener {
>     public:
>      PipelineListener(const RefPtr<MediaSessionConduit>& conduit)
> +        : conduit_(conduit), active_(false), buffer_current_(0),
> +          samplenum_10ms_(0), samples_10ms_buffer_(nullptr) {}

Not having initialization order in same order as variable order generates errors on some compilers. Can you fix?
Attachment #697689 - Flags: review?(ekr) → review+
Changes made per last review.  Carrying forward r=ekr
Attachment #697689 - Attachment is obsolete: true
Attachment #697699 - Flags: review+
Attachment #697699 - Flags: checkin?(rjesup)
Comment on attachment 697699 [details] [diff] [review]
Guarantee audio buffers sent to GIPS are 10ms - version 4

Review of attachment 697699 [details] [diff] [review]:
-----------------------------------------------------------------

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +687,5 @@
> +  // I realize it may be overkill to check if the rate has changed, but
> +  // I believe it is possible (e.g. if we change sources) and it costs us
> +  // very little to handle this case
> +
> +  if (samplenum_10ms_ !=  rate/100) {

I think that may cause us to always enter the if block. samplenum_10ms_ is int64 and converted to a float to compare it with the result of the division which is a floating point. For the division we can have really minor differences. See http://floating-point-gui.de/errors/comparison/

Shouldn't we convert the result of the division back to an int64 before the comparison? Also the division should be done before the if block so we wouldn't have to do the same operation twice in following lines (dependent on how smart the compiler is).

@@ +732,5 @@
> +    samples_tmp += tocpy;
> +    chunk_remaining = chunk.mDuration - tocpy;
> +  }
> +  // Copy what remains for the next run
> +  

nit: there is a white-space issue.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.h
@@ +251,5 @@
>    // Separate class to allow ref counting
>    class PipelineListener : public MediaStreamListener {
>     public:
>      PipelineListener(const RefPtr<MediaSessionConduit>& conduit)
> +      : conduit_(conduit), active_(false), samples_10ms_buffer_(nullptr),  

nit: white-space issue too.
Comment on attachment 697699 [details] [diff] [review]
Guarantee audio buffers sent to GIPS are 10ms - version 4

Review of attachment 697699 [details] [diff] [review]:
-----------------------------------------------------------------

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +687,5 @@
> +  // I realize it may be overkill to check if the rate has changed, but
> +  // I believe it is possible (e.g. if we change sources) and it costs us
> +  // very little to handle this case
> +
> +  if (samplenum_10ms_ !=  rate/100) {

Trackrate is int32_t. In C, division of two integers produces an integral result.

WRT moving the division, I don't see a lot of benefit here. Rate changes are going to happen very rarely (indeed, I suggested that Maire just put an assert here), so even removing the computation isn't going to help much, and I would expect the compiler to be able to optimize the computation in any case.
Comment on attachment 697963 [details] [diff] [review]
Have MediaPipeline deliver as much media as requested

Review of attachment 697963 [details] [diff] [review]:
-----------------------------------------------------------------

This should address roc's comment #23
Attachment #697963 - Flags: review?(rjesup)
Attachment #698019 - Flags: review?(ekr)
Comment on attachment 697963 [details] [diff] [review]
Have MediaPipeline deliver as much media as requested

Review of attachment 697963 [details] [diff] [review]:
-----------------------------------------------------------------

The initial value of played_ needs to be dealt with, and probably the slip as well

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +824,2 @@
>  void MediaPipelineReceiveAudio::PipelineListener::
>  NotifyPull(MediaStreamGraph* graph, StreamTime total) {

"desired_time" or some such would be more clear than 'total'

@@ +828,5 @@
>      MOZ_MTLOG(PR_LOG_ERROR, "NotifyPull() called from a non-SourceMediaStream");
>      return;
>    }
>  
> +  while (played_ < total) {

How is played_ initialized?  What is the initial value of 'total' we see here?  If it's not roughly 0 + enough data to start, then this is wrong.  Since this is a realtime stream, we probably want to set the initial value of played_ to total (or total - 10ms)

@@ +853,5 @@
>  
>      source_->AppendToTrack(1,  // TODO(ekr@rtfm.com): Track ID
>                             &segment);
> +
> +    played_ += MillisecondsToMediaTime(10);

If there's a non-exactness to 10ms in StreamTime (and per roc I think there is) this may cause a slip over time, since we'll always be telling it slightly more or less than 10ms.

The "canonically correct" way to do it would be to keep a time count, convert that to StreamTime, and then compute the difference from the last stream time; roughly:
played_ += MillisecondsToMediaTime(played_ms + 10) - MillisecondsToMediaTime(played_ms)
played_ms += 10;

If we verify there is no inexactness, this is moot.  I think it is inexact currently, though roc would like to fix that (I think)
Attachment #697963 - Flags: review?(rjesup) → review-
FYI, I think we should open a new bug for the NotifyPull stuff.  When you have a new patch, open a bug or clone from this one.  (Probably just open new, quote or link a few relevant comments)
Actually, the bug is already open: bug 825611
Comment on attachment 698019 [details]
Fix for chunk_remaining

lgtm
Attachment #698019 - Flags: review?(ekr) → review+
Attachment #697699 - Flags: checkin?(rjesup) → checkin+
Blocks: 827007
Blocks: 826576
Keywords: verifyme
Blocks: 827203
Verified on 1/29 on Win 7 64-bit: generated sound while audio was paused, then unpaused; the sound was not heard, which implies the buffering is not happening.
Status: RESOLVED → VERIFIED
Keywords: verifyme
What can be automated here was covered in bug 822109.
Flags: in-testsuite? → in-testsuite+