Transform output
The output of a CameraX use case is twofold: the buffer and the transformation info. The buffer is a byte array, and the transformation info describes how the buffer should be cropped and rotated before being shown to end users. How you apply the transformation depends on the format of the buffer.
ImageCapture
For the ImageCapture use case, the crop rect is applied to the buffer before it is saved to disk, and the rotation is saved in the Exif data. No additional action is required from the app.
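As an illustration, here is a minimal sketch of saving a capture to disk, assuming an ImageCapture use case (imageCapture) is already bound to a lifecycle and that the output location is an arbitrary cache file chosen for this example; the saved JPEG already reflects the crop rect, and its Exif data carries the rotation:
val outputOptions = ImageCapture.OutputFileOptions.Builder(
    File(context.cacheDir, "capture.jpg") // hypothetical output location
).build()

imageCapture.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(results: ImageCapture.OutputFileResults) {
            // The file can be displayed as-is: the crop rect is already applied
            // and image viewers honor the Exif rotation.
        }
        override fun onError(exception: ImageCaptureException) {
            // Handle the failure.
        }
    }
)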
Preview
For the Preview use case, you can get the transformation info by calling SurfaceRequest.setTransformationInfoListener(). Every time the transformation is updated, the caller receives a new SurfaceRequest.TransformationInfo object.
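For example, here is a minimal sketch of registering the listener, assuming you supply your own Surface (for instance for an OpenGL pipeline) instead of using PreviewView; surface is assumed to be a Surface created by your renderer, and applyTransformation() is a hypothetical helper that your renderer would implement:
preview.setSurfaceProvider { request: SurfaceRequest ->
    request.setTransformationInfoListener(
        ContextCompat.getMainExecutor(context)
    ) { info: SurfaceRequest.TransformationInfo ->
        // info.cropRect is the portion of the buffer to show;
        // info.rotationDegrees is the clockwise rotation to apply before display.
        applyTransformation(info.cropRect, info.rotationDegrees) // hypothetical helper
    }
    // Hand the Surface to CameraX once your renderer has created it.
    request.provideSurface(surface, ContextCompat.getMainExecutor(context)) { result ->
        // Release renderer resources tied to the Surface here if needed.
    }
}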
How to apply the transformation info depends on the source of the Surface and is usually non-trivial. If the goal is simply to display the preview, use PreviewView. PreviewView is a custom view that handles the transformation automatically. For advanced uses, when you need to edit the preview stream, such as with OpenGL, see the code sample in the CameraX core test app.
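For the simple display case, a minimal sketch of wiring Preview to a PreviewView, assuming previewView comes from your layout and that cameraProvider and lifecycleOwner are already available:
val preview = Preview.Builder().build()
// PreviewView's surface provider applies the crop and rotation for you.
preview.setSurfaceProvider(previewView.surfaceProvider)
cameraProvider.bindToLifecycle(
    lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview
)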
Transform coordinates
Another common task is to work with coordinates instead of the buffer, such as drawing a box around a detected face in the preview. In cases like this, you need to transform the coordinates of the detected face from image analysis coordinates to preview coordinates.
The following code snippet creates a matrix that maps image analysis coordinates to PreviewView coordinates. To transform (x, y) coordinates with a Matrix, see Matrix.mapPoints().
Kotlin
fun getCorrectionMatrix(imageProxy: ImageProxy, previewView: PreviewView) : Matrix {
    val cropRect = imageProxy.cropRect
    val rotationDegrees = imageProxy.imageInfo.rotationDegrees
    val matrix = Matrix()

    // A float array of the source vertices (crop rect) in clockwise order.
    val source = floatArrayOf(
        cropRect.left.toFloat(),
        cropRect.top.toFloat(),
        cropRect.right.toFloat(),
        cropRect.top.toFloat(),
        cropRect.right.toFloat(),
        cropRect.bottom.toFloat(),
        cropRect.left.toFloat(),
        cropRect.bottom.toFloat()
    )

    // A float array of the destination vertices in clockwise order.
    val destination = floatArrayOf(
        0f,
        0f,
        previewView.width.toFloat(),
        0f,
        previewView.width.toFloat(),
        previewView.height.toFloat(),
        0f,
        previewView.height.toFloat()
    )

    // The destination vertices need to be shifted based on rotation degrees. The
    // rotation degree represents the clockwise rotation needed to correct the image.

    // Each vertex is represented by 2 float numbers in the vertices array.
    val vertexSize = 2
    // The destination needs to be shifted 1 vertex for every 90° rotation.
    val shiftOffset = rotationDegrees / 90 * vertexSize
    val tempArray = destination.clone()
    for (toIndex in source.indices) {
        val fromIndex = (toIndex + shiftOffset) % source.size
        destination[toIndex] = tempArray[fromIndex]
    }
    matrix.setPolyToPoly(source, 0, destination, 0, 4)
    return matrix
}
Java
Matrix getMappingMatrix(ImageProxy imageProxy, PreviewView previewView) {
    Rect cropRect = imageProxy.getCropRect();
    int rotationDegrees = imageProxy.getImageInfo().getRotationDegrees();
    Matrix matrix = new Matrix();

    // A float array of the source vertices (crop rect) in clockwise order.
    float[] source = {
        cropRect.left,
        cropRect.top,
        cropRect.right,
        cropRect.top,
        cropRect.right,
        cropRect.bottom,
        cropRect.left,
        cropRect.bottom
    };

    // A float array of the destination vertices in clockwise order.
    float[] destination = {
        0f,
        0f,
        previewView.getWidth(),
        0f,
        previewView.getWidth(),
        previewView.getHeight(),
        0f,
        previewView.getHeight()
    };

    // The destination vertices need to be shifted based on rotation degrees.
    // The rotation degree represents the clockwise rotation needed to correct
    // the image.

    // Each vertex is represented by 2 float numbers in the vertices array.
    int vertexSize = 2;
    // The destination needs to be shifted 1 vertex for every 90° rotation.
    int shiftOffset = rotationDegrees / 90 * vertexSize;
    float[] tempArray = destination.clone();
    for (int toIndex = 0; toIndex < source.length; toIndex++) {
        int fromIndex = (toIndex + shiftOffset) % source.length;
        destination[toIndex] = tempArray[fromIndex];
    }
    matrix.setPolyToPoly(source, 0, destination, 0, 4);
    return matrix;
}
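As a usage illustration, here is a minimal sketch that maps a detection result, assuming faceRect is a bounding box in image analysis coordinates (for example from a face detector) and that getCorrectionMatrix() above is called with the same ImageProxy and PreviewView:
val correctionMatrix = getCorrectionMatrix(imageProxy, previewView)
val points = floatArrayOf(
    faceRect.left.toFloat(), faceRect.top.toFloat(),
    faceRect.right.toFloat(), faceRect.bottom.toFloat()
)
// Matrix.mapPoints() transforms the (x, y) pairs in place.
correctionMatrix.mapPoints(points)
// points now holds the box corners in PreviewView coordinates, ready to draw
// on an overlay view.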