Basic Audio/Video Capture Workflow on Android

围巾🧣 · June 26, 2024

Capturing audio and video on Android generally means using the device's camera and microphone to collect video and audio data. The process can be broken into the following steps:

1. Initialization and Permission Management

Before capturing anything, make sure the app holds the appropriate permissions: camera and audio recording. Declare them in AndroidManifest.xml and request them from the user at runtime. (WRITE_EXTERNAL_STORAGE is declared below because the example later writes to public storage; note that scoped storage restricts it on Android 10 and later.)

Declare the permissions in AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

Request the permissions in code (Android 6.0 / API 23 and above):

if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED ||
    ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this, new String[]{
        Manifest.permission.CAMERA,
        Manifest.permission.RECORD_AUDIO
    }, REQUEST_CODE_PERMISSIONS);
}
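
The result of the request arrives in onRequestPermissionsResult. A minimal sketch of handling it (reusing REQUEST_CODE_PERMISSIONS from the snippet above; startCapture is a hypothetical entry point into the capture setup described below):

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == REQUEST_CODE_PERMISSIONS) {
        boolean allGranted = grantResults.length > 0;
        for (int result : grantResults) {
            allGranted &= (result == PackageManager.PERMISSION_GRANTED);
        }
        if (allGranted) {
            // Camera and microphone are both available; capture can start.
            startCapture(); // hypothetical entry point
        } else {
            // Degrade gracefully when the user declines.
            Toast.makeText(this, "Camera and microphone permissions are required", Toast.LENGTH_LONG).show();
            finish();
        }
    }
}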

2. Setting Up the Camera (Camera2) and Microphone (AudioRecord)

Android provides two main APIs for video capture: the legacy Camera API and the newer Camera2 API. The basic video-capture flow with Camera2 is shown below.

Initializing the Camera2 API:

private void initializeCamera() {
    CameraManager cameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    try {
        String cameraId = cameraManager.getCameraIdList()[0]; // first camera ID (usually the back camera; see the selection sketch below)
        CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
        StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
        previewSize = map.getOutputSizes(SurfaceTexture.class)[0]; // a field (Size previewSize), reused in createCameraPreviewSession()

        // Open the camera; the capture session is configured in onOpened()
        cameraManager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(@NonNull CameraDevice camera) {
                cameraDevice = camera;
                createCameraPreviewSession();
            }

            @Override
            public void onDisconnected(@NonNull CameraDevice camera) {
                camera.close();
                cameraDevice = null;
            }

            @Override
            public void onError(@NonNull CameraDevice camera, int error) {
                camera.close();
                cameraDevice = null;
            }
        }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}

private void createCameraPreviewSession() {
    SurfaceTexture texture = textureView.getSurfaceTexture();
    texture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());
    Surface surface = new Surface(texture);

    try {
        captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        captureRequestBuilder.addTarget(surface);

        cameraDevice.createCaptureSession(Arrays.asList(surface), new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession session) {
                cameraCaptureSession = session;
                try {
                    cameraCaptureSession.setRepeatingRequest(captureRequestBuilder.build(), null, null);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                // Handle failure
            }
        }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
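
Note that the snippet above simply takes the first ID returned by getCameraIdList(), which is usually the back camera. To pick a specific lens, filter by LENS_FACING; a small sketch:

// Returns the ID of the first camera facing the given direction
// (e.g. CameraCharacteristics.LENS_FACING_FRONT or LENS_FACING_BACK), or null if none matches.
private String findCameraId(CameraManager manager, int facing) throws CameraAccessException {
    for (String id : manager.getCameraIdList()) {
        CameraCharacteristics characteristics = manager.getCameraCharacteristics(id);
        Integer lensFacing = characteristics.get(CameraCharacteristics.LENS_FACING);
        if (lensFacing != null && lensFacing == facing) {
            return id;
        }
    }
    return null;
}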

Initializing audio recording:

// Audio parameters (fields): 44.1 kHz, mono, 16-bit PCM
private static final int SAMPLE_RATE_IN_HZ = 44100;
private static final int CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO;
private static final int AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT;

private void initializeAudioRecord() {
    int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE_IN_HZ, CHANNEL_CONFIG, AUDIO_FORMAT);
    audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE_IN_HZ, CHANNEL_CONFIG, AUDIO_FORMAT, bufferSize);
}

private void startAudioRecording() {
    audioRecord.startRecording();
    isRecording = true;
    new Thread(new AudioRecordRunnable()).start();
}

private void stopAudioRecording() {
    if (audioRecord != null) {
        isRecording = false; // signals the recording thread's loop to exit
        audioRecord.stop();
        audioRecord.release();
        audioRecord = null;
    }
}

private class AudioRecordRunnable implements Runnable {
    @Override
    public void run() {
        byte[] buffer = new byte[BUFFER_SIZE];
        while (isRecording) {
            int read = audioRecord.read(buffer, 0, buffer.length);
            if (read > 0) {
                // Process the PCM data (encode it, save it, or stream it; see the sketch below)
            }
        }
    }
}
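
As one concrete reading of "process the PCM data", here is a variant of the runnable that writes the raw PCM to an app-private file (a sketch; the file name capture.pcm is arbitrary):

private class AudioRecordRunnable implements Runnable {
    @Override
    public void run() {
        byte[] buffer = new byte[BUFFER_SIZE];
        // The app-private files directory needs no storage permission
        try (FileOutputStream pcmOut = new FileOutputStream(new File(getFilesDir(), "capture.pcm"))) {
            while (isRecording) {
                int read = audioRecord.read(buffer, 0, buffer.length);
                if (read > 0) {
                    pcmOut.write(buffer, 0, read); // raw PCM; wrap in a WAV header to make it playable
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}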

3. Processing the Audio and Video Data

Once audio and video data have been captured, they can be processed further: encoded, stored, or transmitted in real time. Common approaches include encoding with MediaCodec, packaging with MediaMuxer, and pushing the stream to a server for live broadcast. A sketch of MediaCodec's asynchronous mode follows; section 5 shows the synchronous style in full.
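
As a sketch of this step (not part of the original example): MediaCodec can also run in asynchronous mode via setCallback, which avoids the polling loops of the synchronous style used in section 5. The fillWithYuvFrame helper below is hypothetical, standing in for whatever supplies raw frames:

// A minimal asynchronous H.264 encoder feeding a MediaMuxer (error handling trimmed).
private MediaCodec startAsyncEncoder(MediaMuxer muxer) throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    encoder.setCallback(new MediaCodec.Callback() {
        private int track = -1;

        @Override
        public void onInputBufferAvailable(MediaCodec codec, int index) {
            ByteBuffer input = codec.getInputBuffer(index);
            int size = fillWithYuvFrame(input); // hypothetical helper: copies one YUV frame, returns its byte count
            codec.queueInputBuffer(index, 0, size, System.nanoTime() / 1000, 0);
        }

        @Override
        public void onOutputBufferAvailable(MediaCodec codec, int index, MediaCodec.BufferInfo info) {
            ByteBuffer output = codec.getOutputBuffer(index);
            // Codec config (SPS/PPS) is delivered through onOutputFormatChanged; skip it here
            if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0 && info.size > 0 && track >= 0) {
                muxer.writeSampleData(track, output, info);
            }
            codec.releaseOutputBuffer(index, false);
        }

        @Override
        public void onOutputFormatChanged(MediaCodec codec, MediaFormat newFormat) {
            track = muxer.addTrack(newFormat); // register the track once the real format is known
            muxer.start();
        }

        @Override
        public void onError(MediaCodec codec, MediaCodec.CodecException e) {
            Log.e("AsyncEncoder", "Encoder error", e);
        }
    });
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    encoder.start();
    return encoder;
}

In this mode the track registration happens in onOutputFormatChanged, which sidesteps the manual codec-config bookkeeping needed in the synchronous example.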

4. Releasing Resources

When capture is finished, release all resources to avoid leaks and to free the camera and microphone for other apps.

private void closeCamera() {
    if (cameraCaptureSession != null) {
        cameraCaptureSession.close();
        cameraCaptureSession = null;
    }
    if (cameraDevice != null) {
        cameraDevice.close();
        cameraDevice = null;
    }
}

@Override
protected void onPause() {
    super.onPause();
    closeCamera();
    stopAudioRecording();
}

5. Putting the Example Together

Below is a complete example showing how to use ImageReader to capture each video frame and MediaCodec to encode the raw YUV420 data to H.264.

1. Capturing video frames with ImageReader

Create an ImageReader with an OnImageAvailableListener to receive each frame, and add its Surface as a capture target in createCameraPreviewSession.

2. Encoding YUV420 to H.264 with MediaCodec

Initialize a MediaCodec encoder and feed it the YUV420 data obtained from the ImageReader.

The complete example code:

AVActivity.java

public class AVActivity extends AppCompatActivity {
    private static final String TAG = "AVActivity";

    private TextureView textureView;
    private CameraDevice cameraDevice;
    private CameraCaptureSession cameraCaptureSession;
    private CaptureRequest.Builder captureRequestBuilder;
    private ImageReader imageReader;

    private AudioRecord audioRecord;
    private boolean isAudioRecording;
    private boolean isVideoRecording;

    private MediaCodec mediaCodec;
    private MediaCodec.BufferInfo bufferInfo;
    private FileOutputStream fileOutputStream;

    HandlerThread h264EncodeThread = new HandlerThread("H264EncodeThread");
    Handler h264Handler;
    HandlerThread aacEncodeThread = new HandlerThread("AACEncodeThread");
    Handler aacHandler;

    private static final int SAMPLE_RATE_IN_HZ = 44100;
    private static final int CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO;
    private static final int AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
    private static final int BUFFER_SIZE = AudioRecord.getMinBufferSize(SAMPLE_RATE_IN_HZ, CHANNEL_CONFIG, AUDIO_FORMAT);
    private static final int VIDEO_WIDTH = 1280;
    private static final int VIDEO_HEIGHT = 720;
    private Size previewSize;
    private H264ToMp4Converter converter;

    // Audio encoder state
    private MediaCodec audioCodec;
    private MediaCodec.BufferInfo audioBufferInfo;
    private MediaFormat audioFormat;


    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_av);
        textureView = findViewById(R.id.texture_view);

        h264EncodeThread.start();
        h264Handler = new Handler(h264EncodeThread.getLooper());
        aacEncodeThread.start();
        aacHandler = new Handler(aacEncodeThread.getLooper());

        converter = new H264ToMp4Converter();

        // Check and request permissions
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED ||
                ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, new String[]{
                    Manifest.permission.CAMERA,
                    Manifest.permission.RECORD_AUDIO
            }, 1);
        } else {
            initializeAudioRecord();
        }
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (textureView.isAvailable()) {
            initializeCamera();
        } else {
            textureView.setSurfaceTextureListener(surfaceTextureListener);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        closeCamera();
        stopVideoRecording();
        stopAudioRecording();
        releaseMediaCodec();

        if (converter != null) {
            converter.stopMuxer();
        }
    }

    private TextureView.SurfaceTextureListener surfaceTextureListener = new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
            initializeCamera();
            initializeMediaCodec();
            initAudioEncoder();
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
        }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
            return false;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture surface) {
        }
    };

    @SuppressLint("MissingPermission")
    private void initializeCamera() {
        CameraManager cameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
        try {
            String cameraId = cameraManager.getCameraIdList()[0];
            CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
            StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            // Query the sizes this camera supports (the example then uses fixed VIDEO_WIDTH x VIDEO_HEIGHT)
            previewSize = map.getOutputSizes(SurfaceTexture.class)[0];

            // ImageReader used to capture video frames
            imageReader = ImageReader.newInstance(VIDEO_WIDTH, VIDEO_HEIGHT, ImageFormat.YUV_420_888, 2);
            // Deliver frames on the H.264 encode thread
            imageReader.setOnImageAvailableListener(onImageAvailableListener, h264Handler);

            cameraManager.openCamera(cameraId, new CameraDevice.StateCallback() {
                @Override
                public void onOpened(@NonNull CameraDevice camera) {
                    cameraDevice = camera;
                    createCameraPreviewSession();
                }

                @Override
                public void onDisconnected(@NonNull CameraDevice camera) {
                    camera.close();
                    cameraDevice = null;
                }

                @Override
                public void onError(@NonNull CameraDevice camera, int error) {
                    camera.close();
                    cameraDevice = null;
                }
            }, null);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private void createCameraPreviewSession() {
        SurfaceTexture texture = textureView.getSurfaceTexture();
        texture.setDefaultBufferSize(VIDEO_WIDTH, VIDEO_HEIGHT);
        Surface surface = new Surface(texture);

        Surface imageReaderSurface = imageReader.getSurface();

        try {
            captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            captureRequestBuilder.addTarget(surface);
            captureRequestBuilder.addTarget(imageReaderSurface);

            cameraDevice.createCaptureSession(Arrays.asList(surface, imageReaderSurface), new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(@NonNull CameraCaptureSession session) {
                    cameraCaptureSession = session;
                    try {
                        cameraCaptureSession.setRepeatingRequest(captureRequestBuilder.build(), null, null);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                }
            }, null);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private ImageReader.OnImageAvailableListener onImageAvailableListener = reader -> {
        Image image = null;
        try {
            image = reader.acquireLatestImage();
            if (image != null) {
                // Encode the YUV data to H.264
                if (isVideoRecording) {
                    encodeImageToH264(image);
                }
            }
        } finally {
            if (image != null) {
                image.close();
            }
        }
    };

    @SuppressLint("MissingPermission")
    private void initializeAudioRecord() {
        int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE_IN_HZ, CHANNEL_CONFIG, AUDIO_FORMAT);
        audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE_IN_HZ, CHANNEL_CONFIG, AUDIO_FORMAT, bufferSize);
    }

    private void startVideoRecording() {
        isVideoRecording = true;
    }

    private void stopVideoRecording() {
        isVideoRecording = false;
    }

    private void startAudioRecording() {
        audioRecord.startRecording();
        isAudioRecording = true;
        aacHandler.post(new AudioRecordRunnable());
    }

    private void stopAudioRecording() {
        if (audioRecord != null) {
            isAudioRecording = false; // lets AudioRecordRunnable's loop exit
            audioRecord.stop();
            audioRecord.release();
            audioRecord = null;
        }
    }

    private class AudioRecordRunnable implements Runnable {
        @Override
        public void run() {
            byte[] buffer = new byte[BUFFER_SIZE];
            while (isAudioRecording) {
                int read = audioRecord.read(buffer, 0, buffer.length);
                if (read > 0) {
                    encodeAudio(buffer, read);
                }
            }
        }
    }


    private void closeCamera() {
        if (cameraCaptureSession != null) {
            cameraCaptureSession.close();
            cameraCaptureSession = null;
        }
        if (cameraDevice != null) {
            cameraDevice.close();
            cameraDevice = null;
        }
    }

    private void initializeMediaCodec() {
        try {
            mediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
            MediaFormat mediaFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, VIDEO_WIDTH, VIDEO_HEIGHT);
            mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 18000000);
            mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 60);
            mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
            mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
            mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            mediaCodec.start();

            bufferInfo = new MediaCodec.BufferInfo();
            File directory = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS);
//            fileOutputStream = new FileOutputStream(new File(directory, "output.h264"));
        } catch (IOException e) {
            e.printStackTrace();
        }

        startVideoRecording();
    }

    private void initAudioEncoder() {
        try {
            audioBufferInfo = new MediaCodec.BufferInfo();
            audioFormat = new MediaFormat();
            audioFormat.setString(MediaFormat.KEY_MIME, MediaFormat.MIMETYPE_AUDIO_AAC);
            audioFormat.setInteger(MediaFormat.KEY_SAMPLE_RATE, SAMPLE_RATE_IN_HZ);
            audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
            audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 96000);
            audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
            audioFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 1024 * 100); // sets the size of the codec's input buffers

            audioCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
            audioCodec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            audioCodec.start();
        } catch (IOException e) {
            e.printStackTrace();
        }

        startAudioRecording();
    }


    private void releaseMediaCodec() {
        if (mediaCodec != null) {
            mediaCodec.stop();
            mediaCodec.release();
            mediaCodec = null;
        }
        if (audioCodec != null) {
            audioCodec.stop();
            audioCodec.release();
            audioCodec = null;
        }
        if (fileOutputStream != null) {
            try {
                fileOutputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            fileOutputStream = null;
        }
    }

    private void encodeImageToH264(Image image) {
        Log.i(TAG, "encodeImageToH264: thread: " + Thread.currentThread().getName());

        try {
            ByteBuffer[] inputBuffers = mediaCodec.getInputBuffers();
            ByteBuffer[] outputBuffers = mediaCodec.getOutputBuffers();

            int inputBufferIndex = mediaCodec.dequeueInputBuffer(10000); // wait up to 10 ms
            if (inputBufferIndex >= 0) {
                Log.i(TAG, "Got video input buffer index: " + inputBufferIndex);
                ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
                inputBuffer.clear();

                // Copy the YUV data into the input buffer
                fillInputBufferWithImageData(inputBuffer, image);

                long presentationTimeUs = System.nanoTime() / 1000;
                // position() is the number of bytes actually written (limit() would overstate it)
                mediaCodec.queueInputBuffer(inputBufferIndex, 0, inputBuffer.position(), presentationTimeUs, 0);
            } else {
                Log.w(TAG, "No video input buffer available");
            }

            int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 10000); // wait up to 10 ms
            while (outputBufferIndex >= 0 && isVideoRecording) {
                Log.i(TAG, "Got video output buffer index: " + outputBufferIndex);

                // Codec config (SPS/PPS) is already carried by the track's MediaFormat;
                // muxing it as a sample would corrupt the MP4, so skip it
                if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                    mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
                    outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
                    continue;
                }

                ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
                byte[] outData = new byte[bufferInfo.size];
                outputBuffer.get(outData);
                outputBuffer.clear();

                // Alternatively, write the raw H.264 stream straight to a file:
                // fileOutputStream.write(outData);

                // Register the video track on the first output, then hand the data to the converter
                if (!converter.isAddVideoTrack()) {
                    MediaFormat videoFormat = mediaCodec.getOutputFormat();
                    converter.addTrack(videoFormat, true);
                    converter.start();
                }
                converter.encodeToMp4(outData, bufferInfo, true);
                mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
                outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
            }
        } catch (Exception e) {
            Log.e(TAG, "Exception while encoding video", e);
        }
    }

    private void encodeAudio(byte[] buffer, int length) {
        Log.i(TAG, "encodeAudio: thread: " + Thread.currentThread().getName());

        try {
            ByteBuffer[] inputBuffers = audioCodec.getInputBuffers();
            ByteBuffer[] outputBuffers = audioCodec.getOutputBuffers();

            // Try to obtain an input buffer, waiting up to 10 ms
            int inputBufferIndex = audioCodec.dequeueInputBuffer(10000);
            if (inputBufferIndex >= 0) {
                Log.i(TAG, "Got audio input buffer index: " + inputBufferIndex);
                ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
                inputBuffer.clear(); // prepare the buffer for new data

                // Clamp the write length to the buffer's capacity
                if (length > inputBuffer.capacity()) {
                    length = inputBuffer.capacity();
                }

                // Copy the PCM data and queue it for encoding
                inputBuffer.put(buffer, 0, length);
                long presentationTimeUs = System.nanoTime() / 1000;
                audioCodec.queueInputBuffer(inputBufferIndex, 0, length, presentationTimeUs, 0);
            } else {
                Log.w(TAG, "No audio input buffer available");
            }

            // Drain output buffers until none are ready or recording stops
            int outputBufferIndex = audioCodec.dequeueOutputBuffer(audioBufferInfo, 10000);
            while (outputBufferIndex >= 0 && isAudioRecording) {
                Log.i(TAG, "Got audio output buffer index: " + outputBufferIndex);
                ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
                // Skip codec config data; the muxer gets it from the track's MediaFormat
                boolean isCodecConfig = (audioBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0;
                if (audioBufferInfo.size > 0 && !isCodecConfig) {
                    // Register the audio track on the first output
                    if (!converter.isAddAudioTrack()) {
                        MediaFormat audioFormat = audioCodec.getOutputFormat();
                        converter.addTrack(audioFormat, false);
                        converter.start();
                    }
                    // Read out the encoded AAC data
                    outputBuffer.position(audioBufferInfo.offset);
                    outputBuffer.limit(audioBufferInfo.offset + audioBufferInfo.size);
                    byte[] outData = new byte[audioBufferInfo.size];
                    outputBuffer.get(outData);
                    // Hand the encoded data to the MP4 muxer
                    converter.encodeToMp4(outData, audioBufferInfo, false);
                }
                // Release the output buffer and try the next one
                audioCodec.releaseOutputBuffer(outputBufferIndex, false);
                outputBufferIndex = audioCodec.dequeueOutputBuffer(audioBufferInfo, 0);
            }
        } catch (Exception e) {
            Log.e(TAG, "Exception while encoding audio", e);
        }
    }


    private void fillInputBufferWithImageData(ByteBuffer inputBuffer, Image image) {
        // Note: this naive copy ignores each plane's rowStride/pixelStride, so it is only
        // correct when the planes are tightly packed; see the stride-aware sketch below
        Image.Plane[] planes = image.getPlanes();
        for (Image.Plane plane : planes) {
            ByteBuffer buffer = plane.getBuffer();
            byte[] bytes = new byte[buffer.remaining()];
            buffer.get(bytes);
            inputBuffer.put(bytes);
        }
    }
}
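
fillInputBufferWithImageData above assumes each plane is tightly packed. On many devices, YUV_420_888 buffers carry row padding (rowStride > width) or interleaved chroma (pixelStride == 2), so a stride-aware copy is needed. A sketch (an illustration, not part of the original example) that packs an Image into tightly packed I420:

// Copies a YUV_420_888 Image into `out` as I420 (all of Y, then U, then V),
// honoring each plane's rowStride and pixelStride.
private static void imageToI420(Image image, ByteBuffer out) {
    int width = image.getWidth();
    int height = image.getHeight();
    Image.Plane[] planes = image.getPlanes();
    for (int i = 0; i < 3; i++) {
        ByteBuffer src = planes[i].getBuffer();
        int rowStride = planes[i].getRowStride();
        int pixelStride = planes[i].getPixelStride();
        int planeWidth = (i == 0) ? width : width / 2;   // chroma planes are half size
        int planeHeight = (i == 0) ? height : height / 2;
        byte[] row = new byte[rowStride];
        for (int r = 0; r < planeHeight; r++) {
            int length = Math.min(rowStride, src.remaining()); // the last row may be short
            src.get(row, 0, length);
            if (pixelStride == 1) {
                out.put(row, 0, planeWidth);           // packed row: copy directly
            } else {
                for (int c = 0; c < planeWidth; c++) { // interleaved row: take every pixelStride-th byte
                    out.put(row[c * pixelStride]);
                }
            }
        }
    }
}

When queuing such a buffer, pass out.position() as the size. And since COLOR_FormatYUV420Flexible leaves the exact input layout to the codec, fully robust code would instead fill the writable Image returned by MediaCodec.getInputImage().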

H264ToMp4Converter.java

public class H264ToMp4Converter {

    private static final String TAG = "H264ToMp4Converter";
    static File directory = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS);
    private static final String OUTPUT_FILE_PATH = new File(directory, "output.mp4").toString();

    private MediaMuxer mediaMuxer;
    private int videoTrackIndex = -1;
    private int audioTrackIndex = -1;
    private boolean isMuxerStarted = false;

    public H264ToMp4Converter() {
        try {
            mediaMuxer = new MediaMuxer(OUTPUT_FILE_PATH, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        } catch (IOException e) {
            Log.e(TAG, "Failed to initialize MediaMuxer", e);
        }
    }

    // synchronized: the video and audio encoders call in from different handler threads
    public synchronized void stopMuxer() {
        if (isMuxerStarted) {
            mediaMuxer.stop();
            mediaMuxer.release();
            isMuxerStarted = false;
        }
    }

    public synchronized void addTrack(MediaFormat format, boolean isVideo) {
        if (isVideo) {
            videoTrackIndex = mediaMuxer.addTrack(format);
            Log.i(TAG, "addTrack: videoTrackIndex:" + videoTrackIndex);
        } else {
            audioTrackIndex = mediaMuxer.addTrack(format);
            Log.i(TAG, "addTrack: audioTrackIndex:" + audioTrackIndex);
        }
    }

    public synchronized void start() {
        // Start only once, and only after both tracks have been registered
        if (!isMuxerStarted && isAddAudioTrack() && isAddVideoTrack()) {
            mediaMuxer.start();
            isMuxerStarted = true;
        }
    }

    public synchronized void encodeToMp4(byte[] data, MediaCodec.BufferInfo info, boolean isVideo) {
        if (!isMuxerStarted) {
            return;
        }

        // Log the presentation time so it can be verified to increase monotonically
        Date date = new Date(info.presentationTimeUs / 1000);
        String time = new SimpleDateFormat("HH:mm:ss SSSS", Locale.CHINA).format(date);
        Log.w(TAG, "presentation time: isVideo: " + isVideo + " time: " + time);

        ByteBuffer byteBuffer = ByteBuffer.wrap(data);
        int trackIndex = isVideo ? videoTrackIndex : audioTrackIndex;
        mediaMuxer.writeSampleData(trackIndex, byteBuffer, info);
    }

    public boolean isMuxerStarted() {
        return isMuxerStarted;
    }

    public boolean isAddVideoTrack() {
        return videoTrackIndex != -1;
    }

    public boolean isAddAudioTrack() {
        return audioTrackIndex != -1;
    }
}

Explanation

  1. Initialize the ImageReader and set its listener
    The ImageReader is created in initializeCamera, and its Surface is added as a capture target in createCameraPreviewSession.

  2. Capture each video frame
    onImageAvailableListener receives each frame as an Image object and passes it to encodeImageToH264.

  3. Initialize MediaCodec
    Create the MediaCodec encoder, set its encoding parameters, and start it.

  4. Encode the YUV data to H.264
    encodeImageToH264 copies the Image's YUV data into MediaCodec's input buffer, pulls the encoded H.264 data from the output buffers, and hands it to the muxer (or optionally writes it to a file).

With these steps, we capture video frames with ImageReader and encode the raw YUV420 data to H.264 with MediaCodec on Android, allowing the camera's video stream to be processed and stored in real time.