Writing buffer of audio samples to aac file using ExtAudioFileWrite for iOS
Update: I've figured this out and have posted my solution as an answer to my own question (below).
I'm trying to write a simple buffer of audio samples to a file in AAC format using ExtAudioFileWrite.
I've achieved this with the code below for writing a mono buffer to a .wav file - however, I can't do it for stereo or for AAC files, which is what I actually want to do.
Here's what I have so far...
CFStringRef fPath;
fPath = CFStringCreateWithCString(kCFAllocatorDefault,
"/path/to/my/audiofile/audiofile.wav",
kCFStringEncodingMacRoman);
OSStatus err;
int mChannels = 1;
UInt32 totalFramesInFile = 100000;
Float32 *outputBuffer = (Float32 *)malloc(sizeof(Float32) * (totalFramesInFile*mChannels));
////////////// Set up Audio Buffer List ////////////
AudioBufferList outputData;
outputData.mNumberBuffers = 1;
outputData.mBuffers[0].mNumberChannels = mChannels;
outputData.mBuffers[0].mDataByteSize = 4 * totalFramesInFile * mChannels;
outputData.mBuffers[0].mData = outputBuffer;
Float32 audioFile[totalFramesInFile*mChannels];
for (int i = 0;i < totalFramesInFile*mChannels;i++)
{
audioFile[i] = ((Float32)(rand() % 100))/100.0;
audioFile[i] = audioFile[i]*0.2;
}
outputData.mBuffers[0].mData = &audioFile;
CFURLRef fileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,fPath,kCFURLPOSIXPathStyle,false);
ExtAudioFileRef audiofileRef;
// WAVE FILES
AudioFileTypeID fileType = kAudioFileWAVEType;
AudioStreamBasicDescription clientFormat;
clientFormat.mSampleRate = 44100.0;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mFormatFlags = 12;
clientFormat.mBitsPerChannel = 16;
clientFormat.mChannelsPerFrame = mChannels;
clientFormat.mBytesPerFrame = 2*clientFormat.mChannelsPerFrame;
clientFormat.mFramesPerPacket = 1;
clientFormat.mBytesPerPacket = 2*clientFormat.mChannelsPerFrame;
// open the file for writing
err = ExtAudioFileCreateWithURL((CFURLRef)fileURL, fileType, &clientFormat, NULL, kAudioFileFlags_EraseFile, &audiofileRef);
if (err != noErr)
{
cout << "Problem when creating audio file: " << err << "\n";
}
// tell the ExtAudioFile API what format we'll be sending samples in
err = ExtAudioFileSetProperty(audiofileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
if (err != noErr)
{
cout << "Problem setting audio format: " << err << "\n";
}
UInt32 rFrames = (UInt32)totalFramesInFile;
// write the data
err = ExtAudioFileWrite(audiofileRef, rFrames, &outputData);
if (err != noErr)
{
cout << "Problem writing audio file: " << err << "\n";
}
// close the file
ExtAudioFileDispose(audiofileRef);
NSLog(@"Done!");
My specific questions are:
- How do I set up an AudioStreamBasicDescription for AAC?
- Why can't I get stereo to work here? If I set the number of channels (mChannels) to 2, I get the left channel correctly and distortion in the right channel.
I'd really appreciate any help - I think I've read just about every page I could find on this and am none the wiser: while there are similar questions, they usually derive the AudioStreamBasicDescription parameters from some input audio file, and I can't see the resulting values. Apple's documentation hasn't helped either.
Thanks in advance,
Adam
OK, after some exploration I've figured it out. I've wrapped it up as a function that writes random noise to a file. Specifically, it can:
- write .wav or .m4a files
- write mono or stereo in either format
- write the file to a specified path
The function arguments are:
- the path of the audio file to be created
- number of channels (max 2)
- boolean: compress with m4a (if false, pcm is used)
For a stereo M4A file, the function should be called as:
writeNoiseToAudioFile("/path/to/my/audiofile.m4a",2,true);
The source of the function follows. I've tried to comment it as much as possible - I hope it's correct; it certainly works for me, but please say "Adam, you've done this a bit wrong" if I've missed something. Good luck! Here is the code:
void writeNoiseToAudioFile(char *fName,int mChannels,bool compress_with_m4a)
{
OSStatus err; // to record errors from ExtAudioFile API functions
// create file path as CStringRef
CFStringRef fPath;
fPath = CFStringCreateWithCString(kCFAllocatorDefault,
fName,
kCFStringEncodingMacRoman);
// specify total number of samples per channel
UInt32 totalFramesInFile = 100000;
/////////////////////////////////////////////////////////////////////////////
////////////// Set up Audio Buffer List For Interleaved Audio ///////////////
/////////////////////////////////////////////////////////////////////////////
AudioBufferList outputData;
outputData.mNumberBuffers = 1;
outputData.mBuffers[0].mNumberChannels = mChannels;
outputData.mBuffers[0].mDataByteSize = sizeof(AudioUnitSampleType)*totalFramesInFile*mChannels;
/////////////////////////////////////////////////////////////////////////////
//////// Synthesise Noise and Put It In The AudioBufferList /////////////////
/////////////////////////////////////////////////////////////////////////////
// create an array to hold our audio
AudioUnitSampleType audioFile[totalFramesInFile*mChannels];
// fill the array with random numbers (white noise)
for (int i = 0;i < totalFramesInFile*mChannels;i++)
{
audioFile[i] = ((AudioUnitSampleType)(rand() % 100))/100.0;
audioFile[i] = audioFile[i]*0.2;
// (yes, I know this noise has a DC offset, bad)
}
// set the AudioBuffer to point to the array containing the noise
outputData.mBuffers[0].mData = &audioFile;
/////////////////////////////////////////////////////////////////////////////
////////////////// Specify The Output Audio File Format /////////////////////
/////////////////////////////////////////////////////////////////////////////
// the client format will describe the output audio file
AudioStreamBasicDescription clientFormat;
// the file type identifier tells the ExtAudioFile API what kind of file we want created
AudioFileTypeID fileType;
// if compress_with_m4a is true then set up for the m4a file format
if (compress_with_m4a)
{
// this creates an m4a file type
fileType = kAudioFileM4AType;
// Here we specify the M4A format
clientFormat.mSampleRate = 44100.0;
clientFormat.mFormatID = kAudioFormatMPEG4AAC;
clientFormat.mFormatFlags = kMPEG4Object_AAC_Main;
clientFormat.mChannelsPerFrame = mChannels;
clientFormat.mBytesPerPacket = 0;
clientFormat.mBytesPerFrame = 0;
clientFormat.mFramesPerPacket = 1024;
clientFormat.mBitsPerChannel = 0;
clientFormat.mReserved = 0;
}
else // else encode as PCM
{
// this creates a wav file type
fileType = kAudioFileWAVEType;
// This function automatically generates the audio format according to its arguments
FillOutASBDForLPCM(clientFormat,44100.0,mChannels,32,32,true,false,false);
}
/////////////////////////////////////////////////////////////////////////////
///////////////// Specify The Format of Our Audio Samples ///////////////////
/////////////////////////////////////////////////////////////////////////////
// the local format describes the format the samples we will give to the ExtAudioFile API
AudioStreamBasicDescription localFormat;
FillOutASBDForLPCM (localFormat,44100.0,mChannels,32,32,true,false,false);
/////////////////////////////////////////////////////////////////////////////
///////////////// Create the Audio File and Open It /////////////////////////
/////////////////////////////////////////////////////////////////////////////
// create the audio file reference
ExtAudioFileRef audiofileRef;
// create a fileURL from our path
CFURLRef fileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,fPath,kCFURLPOSIXPathStyle,false);
// open the file for writing
err = ExtAudioFileCreateWithURL((CFURLRef)fileURL, fileType, &clientFormat, NULL, kAudioFileFlags_EraseFile, &audiofileRef);
if (err != noErr)
{
cout << "Problem when creating audio file: " << err << "\n";
}
/////////////////////////////////////////////////////////////////////////////
///// Tell the ExtAudioFile API what format we'll be sending samples in /////
/////////////////////////////////////////////////////////////////////////////
// Tell the ExtAudioFile API what format we'll be sending samples in
err = ExtAudioFileSetProperty(audiofileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(localFormat), &localFormat);
if (err != noErr)
{
cout << "Problem setting audio format: " << err << "\n";
}
/////////////////////////////////////////////////////////////////////////////
///////// Write the Contents of the AudioBufferList to the AudioFile ////////
/////////////////////////////////////////////////////////////////////////////
UInt32 rFrames = (UInt32)totalFramesInFile;
// write the data
err = ExtAudioFileWrite(audiofileRef, rFrames, &outputData);
if (err != noErr)
{
cout << "Problem writing audio file: " << err << "\n";
}
/////////////////////////////////////////////////////////////////////////////
////////////// Close the Audio File and Get Rid Of The Reference ////////////
/////////////////////////////////////////////////////////////////////////////
// close the file
ExtAudioFileDispose(audiofileRef);
NSLog(@"Done!");
}
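On the stereo question above: ExtAudioFileWrite expects a single buffer in interleaved layout (L,R,L,R,...), so each frame must carry one sample per channel. Feeding mono-ordered data while declaring 2 channels is what produces a clean left channel and garbage on the right. A minimal platform-independent sketch, using a hypothetical interleaveStereo helper (not part of the code above), shows the layout:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Interleave separate left/right channel buffers into the L,R,L,R,...
// layout expected for a single interleaved stereo buffer.
// (Hypothetical helper for illustration only.)
std::vector<float> interleaveStereo(const std::vector<float>& left,
                                    const std::vector<float>& right)
{
    std::vector<float> out(left.size() + right.size());
    for (std::size_t i = 0; i < left.size(); ++i) {
        out[2 * i]     = left[i];   // even indices: left channel
        out[2 * i + 1] = right[i];  // odd indices:  right channel
    }
    return out;
}
```

With a buffer built this way, mBuffers[0].mNumberChannels is set to 2 and the frame count passed to ExtAudioFileWrite is the per-channel sample count, not the total sample count.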
Don't forget to import the AudioToolbox framework and include the header file:
#import <AudioToolbox/AudioToolbox.h>
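As a side note on the PCM branch: for interleaved PCM the ASBD byte fields follow directly from the bit depth and channel count (this is the arithmetic FillOutASBDForLPCM performs, and why the question's 16-bit version used 2 * mChannels). A minimal sketch, using a hypothetical pcmLayout helper with field names mirroring AudioStreamBasicDescription:

```cpp
#include <cassert>

// Derived layout fields for interleaved linear PCM (sketch only;
// the real struct is Core Audio's AudioStreamBasicDescription).
struct PCMLayout {
    unsigned bitsPerChannel;
    unsigned channelsPerFrame;
    unsigned bytesPerFrame;
    unsigned framesPerPacket;
    unsigned bytesPerPacket;
};

PCMLayout pcmLayout(unsigned bitsPerChannel, unsigned channels)
{
    PCMLayout p;
    p.bitsPerChannel   = bitsPerChannel;
    p.channelsPerFrame = channels;
    p.bytesPerFrame    = (bitsPerChannel / 8) * channels; // interleaved frame
    p.framesPerPacket  = 1;                               // PCM: 1 frame/packet
    p.bytesPerPacket   = p.bytesPerFrame * p.framesPerPacket;
    return p;
}
```

For compressed formats like AAC the rule breaks down, which is why the M4A branch above sets mBytesPerFrame and mBytesPerPacket to 0 and mFramesPerPacket to 1024: packet sizes vary and the encoder fills them in.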