Well Behaved Filters

Destroy Buckets, Reuse Brigades

Don't get fooled by the signs

Ownership of buckets is easy. Once you take them out of a brigade, you are responsible for either destroying them or passing them on. Brigades are different. If you call ap_pass_brigade on a brigade, the brigade's buckets are moved into a downstream brigade, but your filter still owns the brigade itself. You can either destroy it or call apr_brigade_cleanup (just to be sure) and reuse it.

As for the incoming brigade passed into your filter, you should call apr_brigade_cleanup on it when you are done with it. This way, the upstream filter gets it back emptied, as it expects.
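
In code, the end of a typical filter invocation might look like this (a minimal sketch; the names bb_out for the filter's own brigade and bb for the incoming one are made up for illustration):

  /* bb_out: a brigade this filter created and filled itself;
   * bb: the brigade that was passed into the filter. */
  rv = ap_pass_brigade(f->next, bb_out);

  apr_brigade_cleanup(bb_out);  /* we still own it; empty it for reuse */
  apr_brigade_cleanup(bb);      /* hand the incoming brigade back emptied */

  return rv;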

If you want your filter to be capable of streaming or of handling very large amounts of data, you actually must reuse the brigades you create. The source code of apr_brigade_create tells you that, from a memory point of view, a brigade is indestructible: it is created from pool memory, which cannot be freed before the pool itself is destroyed. So creating a new brigade for every chunk of a stream instead of reusing one is a memory leak.
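
Here is a sketch of a streaming-capable output filter that creates its brigade exactly once and reuses it on every invocation (the context struct and filter name are hypothetical):

  typedef struct {
      apr_bucket_brigade *bb_out;
  } my_filter_ctx;

  static apr_status_t my_output_filter(ap_filter_t *f, apr_bucket_brigade *bb)
  {
      my_filter_ctx *ctx = f->ctx;
      apr_status_t rv;

      if (ctx == NULL) {
          /* First invocation: create the context and the brigade once. */
          ctx = f->ctx = apr_pcalloc(f->r->pool, sizeof(*ctx));
          ctx->bb_out = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
      }

      /* ... move or transform buckets from bb into ctx->bb_out ... */

      rv = ap_pass_brigade(f->next, ctx->bb_out);

      /* Emptying the brigade is cheap; creating a fresh one per call
       * would grow the request pool until the request ends. */
      apr_brigade_cleanup(ctx->bb_out);
      apr_brigade_cleanup(bb);
      return rv;
  }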

When to Return

What you should return from a filter processing a brigade is pretty straightforward. If you encounter an error, you return a value != APR_SUCCESS. If you call ap_pass_brigade and it does not return APR_SUCCESS, you return that value from your filter.

But unfortunately that is not enough. There is at least one case where passing the brigade returns a questionable APR_SUCCESS. Questionable means you have to ask further questions to check whether you can continue. The core network filter considers it success when the client closes the connection. So if you want to avoid unnecessary and possibly expensive processing, your code has to do the following:

  rv = ap_pass_brigade(f->next, bb);
  if (rv != APR_SUCCESS || f->c->aborted) {
      return rv;
  }

Morphing Makes a Moving Target

The usual filter loop
  for (b = APR_BRIGADE_FIRST(bb);
       b != APR_BRIGADE_SENTINEL(bb);
       b = APR_BUCKET_NEXT(b)) {
looks pretty straightforward. If you want to process large files, however, it is not. When you start, your filter holds a single file bucket containing a file handle. You call apr_bucket_read, and the bucket you are processing morphs into a heap bucket holding the first block of data, while a file bucket for the rest of the file is inserted after it. If you want to be able to process large files, you can't just keep the buckets you have read in the brigade; you would end up with the whole file in memory. So you need to delete each one once you are done with it. Doing this is not that easy, however, because that bucket is the only one that knows which bucket comes next. The code for this is left as an exercise for the reader, but one possible solution is sketched below.
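
A sketch of one such solution (assuming a filter that merely consumes the data; a real filter would typically move the processed data into an output brigade and pass it downstream in chunks): fetch the next bucket only after apr_bucket_read has morphed the current one, then delete it.

  apr_bucket *b, *next;

  for (b = APR_BRIGADE_FIRST(bb);
       b != APR_BRIGADE_SENTINEL(bb);
       b = next)
  {
      const char *data;
      apr_size_t len;
      apr_status_t rv;

      if (APR_BUCKET_IS_EOS(b)) {
          break;  /* end of stream; nothing left to read */
      }

      /* Reading may morph b into a heap bucket and insert a new
       * file bucket for the rest of the file right after it. */
      rv = apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
      if (rv != APR_SUCCESS) {
          return rv;
      }

      /* ... process data[0..len) ... */

      /* Fetch the successor only now, after the read, so the newly
       * inserted file bucket is not skipped; then drop the bucket
       * we are done with. */
      next = APR_BUCKET_NEXT(b);
      apr_bucket_delete(b);
  }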