
void* av_malloc ( unsigned int  size  ) 

Memory allocation of size bytes with alignment suitable for all memory accesses (including vectors if available on the CPU). av_malloc(0) must return a non-NULL pointer.

Definition at line 45 of file mem.c.

References av_malloc().

Referenced by alloc_bitplane(), av_find_stream_info(), av_malloc(), av_new_packet(), av_parser_change(), av_realloc(), avcodec_alloc_context(), avcodec_alloc_frame(), avpicture_alloc(), ff_fft_init(), ff_mdct_init(), main(), MPV_common_init(), and vc9_decode_init().

    void* av_malloc(unsigned int size)
    {
        void *ptr;
    #ifdef MEMALIGN_HACK
        long diff;
    #endif

        /* let's disallow possible ambiguous cases */
        if(size > INT_MAX)
            return NULL;

    #ifdef MEMALIGN_HACK
        ptr = malloc(size+16+1);
        diff= ((-(long)ptr - 1)&15) + 1;
        ptr = (char*)ptr + diff;
        ((char*)ptr)[-1]= diff;
    #elif defined (HAVE_MEMALIGN)
        ptr = memalign(16,size);
        /* Why 64?
           Indeed, we should align it:
             on  4 for 386
             on 16 for 486
             on 32 for 586, PPro - K6-III
             on 64 for K7 (maybe for P3 too).
           Because L1 and L2 caches are aligned on those values.
           But I don't want to code such logic here!
         */
        /* Why 16?
           Because some CPUs need alignment, for example SSE2 on P4, and most RISC CPUs:
           either the unaligned access triggers an exception and the load is done in the
           exception handler, or it just segfaults (SSE2 on P4).
           Why not larger? Because I didn't see a difference in benchmarks ...
         */
        /* benchmarks with P3
           memalign(64)+1          3071,3051,3032
           memalign(64)+2          3051,3032,3041
           memalign(64)+4          2911,2896,2915
           memalign(64)+8          2545,2554,2550
           memalign(64)+16         2543,2572,2563
           memalign(64)+32         2546,2545,2571
           memalign(64)+64         2570,2533,2558

           btw, malloc seems to do 8 byte alignment by default here
         */
    #else
        ptr = malloc(size);
    #endif
        return ptr;
    }

Generated by Doxygen 1.6.0