If you need access to the raw, platform-specific YUV data, you can get each image "plane" with the method CameraImage.GetPlane. Most video formats use a YUV encoding variant, where Y is the luminance plane and the UV plane(s) contain chromaticity information. U and V may be interleaved or separate planes, and there may be additional padding per pixel or per row.

Note: An image "plane" in this context refers to a channel used in the video format (not a planar surface and not related to an ARPlane).

A typical pattern is to get the latest image and iterate over its planes:

if (!cameraManager.TryGetLatestImage(out CameraImage image))
    return;

for (int planeIndex = 0; planeIndex < image.planeCount; planeIndex++)
{
    CameraImagePlane plane = image.GetPlane(planeIndex);
    // plane.data is a view into the native image memory
}

You should consider the memory read-only. Each plane's data represents a "view" into the native memory: you do not need to dispose the NativeArray, and the data is only valid until the CameraImage is disposed.

Synchronously Convert to Grayscale and Color

To obtain grayscale or color versions of the camera image, the raw plane data needs to be converted. CameraImage provides both synchronous and asynchronous conversion methods; this section covers the synchronous version:

public void Convert(CameraImageConversionParams conversionParams, IntPtr destinationBuffer, int bufferLength)

This method converts the CameraImage into the TextureFormat specified by conversionParams and writes the data to the buffer at destinationBuffer. Grayscale conversions (TextureFormat.Alpha8 and TextureFormat.R8) are typically very fast, while color conversions require CPU-intensive computations.

Let's look at CameraImageConversionParams in more detail:

public struct CameraImageConversionParams
{
    public RectInt inputRect;
    public Vector2Int outputDimensions;
    public TextureFormat outputFormat;
    public CameraImageTransformation transformation;
}

inputRect: the portion of the CameraImage to convert. This can be the full image or some sub-rectangle of the image. The inputRect must fit completely inside the original image. It can be significantly faster to convert a sub-rectangle of the original image if you know which part of the image you need.

outputDimensions: the CameraImage converter supports downsampling (using nearest neighbor), allowing you to specify an output image smaller than inputRect.width and inputRect.height. For example, you could supply (inputRect.width / 2, inputRect.height / 2) to get a half-resolution image. This can decrease the time it takes to perform a color conversion. The outputDimensions must be less than or equal to the inputRect's dimensions (no upsampling).

outputFormat: the TextureFormat to convert to. Only a limited set of formats is supported; you can use CameraImage.FormatSupported to test a texture format before calling one of the conversion methods.

transformation: an optional transformation (such as mirroring) to apply during the conversion.
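As a sketch of how these pieces fit together, the following example grabs the latest image and synchronously converts it to a half-resolution grayscale buffer. It is a hedged sketch, not a definitive implementation: it assumes an assigned ARCameraManager (here `cameraManager`) and the CameraImage API described above, and exact type names and namespaces vary between AR Foundation versions.

```csharp
using System;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class GrayscaleGrabber : MonoBehaviour
{
    public ARCameraManager cameraManager; // assumed to be assigned in the Inspector

    unsafe void Update()
    {
        if (!cameraManager.TryGetLatestImage(out CameraImage image))
            return;

        using (image) // converted buffers must be filled before the image is disposed
        {
            var conversionParams = new CameraImageConversionParams
            {
                // Convert the whole image...
                inputRect = new RectInt(0, 0, image.width, image.height),
                // ...downsampled to half resolution (nearest neighbor)
                outputDimensions = new Vector2Int(image.width / 2, image.height / 2),
                // Single-channel grayscale, typically the fastest conversion
                outputFormat = TextureFormat.R8,
                transformation = CameraImageTransformation.None
            };

            int size = image.GetConvertedDataSize(conversionParams);
            var buffer = new NativeArray<byte>(size, Allocator.Temp);

            image.Convert(conversionParams,
                          new IntPtr(buffer.GetUnsafePtr()),
                          buffer.Length);

            // ...use the grayscale bytes in `buffer` here...

            buffer.Dispose();
        }
    }
}
```

Using TextureFormat.R8 with half-resolution outputDimensions keeps this per-frame conversion cheap; swap in a color format such as RGBA32 (after checking CameraImage.FormatSupported) if you need color.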
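To make the downsampling behavior concrete, here is a small engine-independent sketch of nearest-neighbor downsampling on a single-channel (grayscale) buffer. DownsampleNearest is a hypothetical helper written for illustration, not part of the CameraImage API; it only mirrors the behavior the converter describes (each output pixel samples one proportionally scaled input pixel, and upsampling is rejected).

```csharp
using System;

static class NearestNeighborDemo
{
    // Hypothetical helper: nearest-neighbor downsampling of a single-channel image.
    public static byte[] DownsampleNearest(byte[] input, int inWidth, int inHeight,
                                           int outWidth, int outHeight)
    {
        if (outWidth > inWidth || outHeight > inHeight)
            throw new ArgumentException("No upsampling: output must not exceed the input dimensions.");

        var output = new byte[outWidth * outHeight];
        for (int y = 0; y < outHeight; y++)
        {
            int srcY = y * inHeight / outHeight; // nearest source row
            for (int x = 0; x < outWidth; x++)
            {
                int srcX = x * inWidth / outWidth; // nearest source column
                output[y * outWidth + x] = input[srcY * inWidth + srcX];
            }
        }
        return output;
    }

    public static void Main()
    {
        // A 4x4 gradient downsampled to 2x2 keeps every other row and column.
        var input = new byte[16];
        for (int i = 0; i < 16; i++) input[i] = (byte)i;

        var half = DownsampleNearest(input, 4, 4, 2, 2);
        Console.WriteLine(string.Join(", ", half)); // 0, 2, 8, 10
    }
}
```

Because each output pixel is a single read from the input, halving the output dimensions roughly quarters the work, which is why supplying smaller outputDimensions speeds up a conversion.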