React Native

AI Scanning APIs

The Vision SDK supports OCR via cloud-based APIs to extract structured data from documents such as shipping labels, Bills of Lading (BOLs), and item labels. This mode uses RESTful APIs and requires a valid API key and environment to be configured before making any calls.


πŸ” SDK Configuration for Cloud OCR

Before initiating an OCR scan, you must set the following props:

  • mode
  • ocrMode
  • ocrType
  • onOCRScan
  • onImageCaptured
  • apiKey
  • environment

import VisionSdkView from 'react-native-vision-sdk';

const SampleComponent = () => {
  const handleImageCaptured = (event) => {
    // event.image holds the captured image path;
    // set any loading or meta state here.
  };

  const handleOcrScan = (scanResult) => {
    // Fired after capture once the document is detected;
    // scanResult contains the structured data extracted from the document.
  };

  return (
    <VisionSdkView
      mode="ocr"
      ocrMode="cloud"
      ocrType="shipping_label"
      onOCRScan={handleOcrScan}
      onImageCaptured={handleImageCaptured}
      environment="prod"
      apiKey="your_generated_api_key"
    />
  );
};

📌 You can obtain your API key and environment details from cloud.packagex.io.
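The snippets in this guide hardcode the key inline for brevity. In a real app, a small config module keeps the key and environment in one place so they are not scattered across components. The module below is a hypothetical helper (not part of the SDK), and the 'sandbox' environment name is an assumption; use the environment values listed for your account on cloud.packagex.io.

```typescript
// visionSdkConfig.ts -- hypothetical helper module, not part of the SDK.
export interface VisionSdkConfig {
  apiKey: string;
  environment: string;
}

export function getVisionSdkConfig(isProduction: boolean): VisionSdkConfig {
  return {
    // In a real app, load keys from secure storage or a build-time env var
    // rather than committing them to source control.
    apiKey: isProduction ? 'your_prod_api_key' : 'your_sandbox_api_key',
    // 'sandbox' is an assumed non-production environment name.
    environment: isProduction ? 'prod' : 'sandbox',
  };
}
```

The component can then spread these values into its props: `<VisionSdkView {...getVisionSdkConfig(true)} … />`.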


🧾 Document Types Supported

The Vision SDK cloud OCR supports the following document types:

📦 Shipping Labels

Extracts structured data such as tracking numbers, courier names, and addresses.
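For illustration, the sketch below pulls a few common fields out of a shipping-label scan result. The field names here are assumptions for this example, not the SDK's documented schema; check the response shape returned by your environment before relying on them.

```typescript
// Hypothetical shape: the real scanResult schema may differ. Check the
// cloud.packagex.io API reference for the authoritative field names.
interface ShippingLabelScan {
  data?: {
    tracking_number?: string;
    courier?: string;
    recipient_address?: string;
  };
}

// Build a compact summary from a scan result, tolerating missing fields.
export function summarizeShippingLabel(result: ShippingLabelScan): string {
  const { tracking_number, courier, recipient_address } = result.data ?? {};
  return [
    `tracking: ${tracking_number ?? 'unknown'}`,
    `courier: ${courier ?? 'unknown'}`,
    `to: ${recipient_address ?? 'unknown'}`,
  ].join(' | ');
}
```

A handler like `handleOcrScan` could call this to show a one-line confirmation to the user after each scan.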



📦 Example Component with required props for Shipping Label

import VisionSdkView from 'react-native-vision-sdk';

const ShippingLabelExample = () => {
  const handleImageCaptured = (event) => {
    // event.image holds the captured image path;
    // set any loading or meta state here.
  };

  const handleOcrScan = (scanResult) => {
    // Fired after capture once the document is detected;
    // scanResult contains the structured data extracted from the shipping label.
  };

  return (
    <VisionSdkView
      mode="ocr"
      ocrMode="cloud"
      ocrType="shipping_label" // defines the document type
      onOCRScan={handleOcrScan}
      onImageCaptured={handleImageCaptured}
      environment="prod"
      apiKey="your_generated_api_key"
      shouldResizeImage={true} // compress the image for faster processing
    />
  );
};

📄 Bills of Lading (BOL)

Detects and extracts BOL-specific fields such as carrier information, consignee details, and reference numbers.

📄 Example Component with required props for Bill of Lading (BOL)

import VisionSdkView from 'react-native-vision-sdk';

const BOLExample = () => {
  const handleImageCaptured = (event) => {
    // event.image holds the captured image path;
    // set any loading or meta state here.
  };

  const handleOcrScan = (scanResult) => {
    // Fired after capture once the document is detected;
    // scanResult contains the structured data extracted from the BOL.
  };

  return (
    <VisionSdkView
      mode="ocr"
      ocrMode="cloud"
      ocrType="bill_of_lading" // defines the document type
      onOCRScan={handleOcrScan}
      onImageCaptured={handleImageCaptured}
      environment="prod"
      apiKey="your_generated_api_key"
      shouldResizeImage={true} // compress the image for faster processing
    />
  );
};

🏷️ Item Labels

Recognizes SKU, GTIN, price, brand, and other item-level information printed on labels.

🏷️ Example Component with required props for Item Labels

import VisionSdkView from 'react-native-vision-sdk';

const ItemLabelExample = () => {
  const handleImageCaptured = (event) => {
    // event.image holds the captured image path;
    // set any loading or meta state here.
  };

  const handleOcrScan = (scanResult) => {
    // Fired after capture once the document is detected;
    // scanResult contains the structured data extracted from the item label.
  };

  return (
    <VisionSdkView
      mode="ocr"
      ocrMode="cloud"
      ocrType="item_label" // defines the document type
      onOCRScan={handleOcrScan}
      onImageCaptured={handleImageCaptured}
      environment="prod"
      apiKey="your_generated_api_key"
      shouldResizeImage={true} // compress the image for faster processing
    />
  );
};

Camera Guidelines (Image Sharpness Score)

When the camera is operating in OCR or Photo mode, you receive continuous feedback on the sharpness score of each frame.

The sharpness score is a floating-point value ranging from 0.0 to 1.0, where:

  • 1.0 indicates a perfectly sharp image suitable for OCR processing.
  • 0.0 indicates a completely blurry image.

Use this score to provide live feedback to users or decide whether to proceed with image capture.
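As a minimal sketch of that decision, the helper below gates capture on the score. The 0.5 threshold is an arbitrary assumption for this example, not an SDK recommendation; tune it for your documents and lighting conditions.

```typescript
// Decide whether a frame is sharp enough to submit for OCR.
// The sharpness score is a float in [0.0, 1.0] per the SDK docs;
// the default threshold is an assumed starting point, not an SDK value.
export function isSharpEnough(sharpnessScore: number, threshold = 0.5): boolean {
  if (Number.isNaN(sharpnessScore)) return false;
  // Clamp defensively in case a frame reports a value slightly out of range.
  const score = Math.min(1, Math.max(0, sharpnessScore));
  return score >= threshold;
}
```

In an `onSharpnessScoreUpdate` handler, this can drive UI state (e.g., enable the capture button only while `isSharpEnough(sharpnessScore)` is true).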

Event

  • Event name: onSharpnessScoreUpdate
  • Description: Triggered continuously while the camera preview is active.
  • Payload: { sharpnessScore: number }

Example Usage

Below is a sample React Native component demonstrating how to listen for sharpness score updates and capture images with their corresponding sharpness values.

import React, { useState, useCallback, useRef } from 'react';
import { View, Text, StyleSheet, Button } from 'react-native';
import { VisionCamera } from 'react-native-vision-sdk';

export default function SharpnessScoreDemo() {
  // Capture is triggered on the camera instance, so hold a ref to it.
  const cameraRef = useRef<any>(null);
  const [sharpness, setSharpness] = useState<number | null>(null);
  const [lastCapture, setLastCapture] = useState<{ uri?: string; sharpness?: number } | null>(null);

  // Called continuously while the preview is active
  const handleSharpnessUpdate = useCallback(({ sharpnessScore }: { sharpnessScore: number }) => {
    setSharpness(sharpnessScore);
  }, []);

  // Called after a capture completes
  const handleCapture = useCallback((event: any) => {
    setLastCapture({
      uri: event.image?.path,
      sharpness: event.sharpnessScore,
    });
  }, []);

  return (
    <View style={styles.container}>
      <Text style={styles.title}>Sharpness Score Demo</Text>
      <VisionCamera
        ref={cameraRef}
        style={styles.camera}
        onSharpnessScoreUpdate={handleSharpnessUpdate}
        onCapture={handleCapture}
      />
      <View style={styles.infoBox}>
        <Text style={styles.label}>
          Live Sharpness: {sharpness !== null ? sharpness.toFixed(3) : '--'}
        </Text>
        <Button title="Capture" onPress={() => cameraRef.current?.capture()} />
        {lastCapture && (
          <View style={styles.result}>
            <Text>Last Image: {lastCapture.uri || '(no URI)'}</Text>
            <Text>Sharpness Score: {lastCapture.sharpness?.toFixed(3)}</Text>
          </View>
        )}
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, backgroundColor: '#000' },
  camera: { flex: 1 },
  title: { color: '#fff', fontSize: 20, textAlign: 'center', marginTop: 40 },
  infoBox: { padding: 16, backgroundColor: '#111' },
  label: { color: '#0f0', fontSize: 18, marginBottom: 8 },
  result: { marginTop: 12, backgroundColor: '#222', padding: 8, borderRadius: 4 },
});