In this guide, we will explore the Document Scanner features of the Dynamsoft Capture Vision SDK.
Run the following commands in the root directory of your React Native project to add `dynamsoft-capture-vision-react-native` to your dependencies:
```bash
# using npm
npm install dynamsoft-capture-vision-react-native

# OR using Yarn
yarn add dynamsoft-capture-vision-react-native
```
Then run the following command to install all dependencies:
```bash
# using npm
npm install

# OR using Yarn
yarn install
```
For iOS, you must install the necessary native frameworks from CocoaPods by running the `pod install` command as shown below:
```bash
cd ios
pod install
```
The Dynamsoft Capture Vision SDK needs camera permission to access the camera device and capture frames from the video stream.
For Android, the camera permission is already declared within the SDK, so you don't need to do anything.
For iOS, you need to include the camera permission in `ios/your-project-name/Info.plist` inside the `<dict>` element:
```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan documents.</string>
```

The usage description string above is an example; replace it with wording appropriate for your app, since the App Store requires a non-empty description.
Now that the package is added, it’s time to start building the document scanner component using the SDK.
The first step in code configuration is to initialize a valid license via `LicenseManager.initLicense`.
```typescript jsx
import {LicenseManager} from 'dynamsoft-capture-vision-react-native';

LicenseManager.initLicense("your-license-key")
  .then(() => {/*Init license successfully.*/})
  .catch(error => console.error("Init License failed.", error));
```
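Because a trial license is verified over the network, initialization can fail transiently (e.g. on a flaky connection). One way to handle this is to retry with exponential backoff before surfacing the error. Below is a minimal sketch of the pure backoff logic; `backoffDelays` is a hypothetical helper, not part of the SDK, and the base delay and retry count are illustrative values.

```typescript
// Compute exponential-backoff delays (in ms) for retrying license
// initialization: the delay doubles on each attempt.
// Hypothetical helper -- not part of the Dynamsoft SDK.
function backoffDelays(baseMs: number, maxRetries: number): number[] {
  return Array.from({length: maxRetries}, (_, i) => baseMs * 2 ** i);
}
```

For example, `backoffDelays(500, 3)` yields `[500, 1000, 2000]`; each value can feed a `setTimeout` between successive `initLicense` attempts.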
> [!NOTE]
>
>- The license string here grants a time-limited free trial which requires network connection to work.
>- You can request a 30-day trial license via the [Request a Trial License](https://www.dynamsoft.com/customer/license/trialLicense?product=dcv&utm_source=guide&package=mobile) link.
## Request Camera Permission
Before opening the camera to start document scanning, you need to request camera permission from the system.
```typescript jsx
import {CameraEnhancer} from 'dynamsoft-capture-vision-react-native';
CameraEnhancer.requestCameraPermission();
```
The basic workflow of scanning a document from the video stream is as follows:

- Initialize a `CameraEnhancer` object and a `CaptureVisionRouter` object
- Bind the `CameraEnhancer` object to the `CaptureVisionRouter` object
- Add a `CapturedResultReceiver` object to listen for scanned documents via the callback function `onProcessedDocumentResultReceived`
- Open the camera and start scanning via `startCapturing`

```typescript jsx
import React, {useEffect, useRef} from 'react';
import {
  CameraEnhancer,
  CameraView,
  CaptureVisionRouter,
  EnumPresetTemplate,
  ProcessedDocumentResult,
  imageDataToBase64,
} from 'dynamsoft-capture-vision-react-native';
export function Scanner() {
const cameraView = useRef<CameraView>(null);
const camera = CameraEnhancer.getInstance();
const router = CaptureVisionRouter.getInstance();

useEffect(() => {
router.setInput(camera); //Bind the CaptureVisionRouter and ImageSourceAdapter before router.startCapturing()
camera.setCameraView(cameraView.current!); //Bind the CameraEnhancer and CameraView before camera.open()
/**
* Adds a CapturedResultReceiver object to listen for the captured result.
* In this sample, we only listen for onProcessedDocumentResultReceived, generated by the Dynamsoft Document Normalizer module.
* */
let resultReceiver = router.addResultReceiver({
//If start capturing with EnumPresetTemplate.PT_DETECT_AND_NORMALIZE_DOCUMENT,
//ProcessedDocumentResult will be received on this callback.
onProcessedDocumentResultReceived: (result: ProcessedDocumentResult) => {
//Handle the `result`.
if (result.deskewedImageResultItems && result.deskewedImageResultItems.length > 0) {
let deskewedImageBase64 = imageDataToBase64(result.deskewedImageResultItems[0].imageData)
//...
}
},
});
/**
* Open the camera when the component is mounted.
* Please remember to request camera permission beforehand.
* */
camera.open();
/**
* Start capturing when the component is mounted.
* In this sample code, we start capturing using the EnumPresetTemplate.PT_DETECT_AND_NORMALIZE_DOCUMENT template.
* */
router.startCapturing(EnumPresetTemplate.PT_DETECT_AND_NORMALIZE_DOCUMENT);
return () => {
//Remove the receiver when the component is unmounted.
router.removeResultReceiver(resultReceiver);
//Close the camera when the component is unmounted.
camera.close();
//Stop capturing when the component is unmounted.
router.stopCapturing();
}
}, [camera, router, cameraView]);

return (
<CameraView style={{flex: 1}} ref={cameraView}>
{/* you can add your own view here as the children view of CameraView */}
</CameraView>
);
}
```
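To display the deskewed result in your UI, the base64 string returned by `imageDataToBase64` can be wrapped in a data URI and passed to a React Native `<Image source={{uri}}/>`. A minimal sketch follows; `toDataUri` is a hypothetical helper, and the `image/png` MIME type is an assumption you should adjust to the format your pipeline actually produces.

```typescript
// Build a data URI from a base64-encoded image so it can be rendered by
// React Native's <Image> component. Hypothetical helper; the default MIME
// type "image/png" is an assumption, not an SDK guarantee.
function toDataUri(base64: string, mime: string = 'image/png'): string {
  return `data:${mime};base64,${base64}`;
}
```

Usage: `<Image source={{uri: toDataUri(deskewedImageBase64)}} style={{width: 200, height: 280}}/>`.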
## Customize the Document Scanner
If you want to detect the document boundary and adjust it manually, you can call `startCapturing` with the `EnumPresetTemplate.PT_DETECT_DOCUMENT_BOUNDARIES` template.
The `ProcessedDocumentResult` will then be received through the `onProcessedDocumentResultReceived` callback.
You can use the [Editor component](/capture-vision-react-native-samples/ScanDocument/src/Editor.tsx) to learn how to draw `ProcessedDocumentResult.detectedQuadResultItems` on the original image and interactively edit the quads.
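As a sketch of the interactive editing step, the editor needs to decide which corner of a detected quad the user is dragging. Below is a small pure helper for that hit test; it is hypothetical (not part of the SDK), and the simplified `Point` shape stands in for the corner points exposed by `detectedQuadResultItems`.

```typescript
type Point = {x: number; y: number};

// Return the index of the quad corner within `radius` pixels of the touch
// point, or -1 if no corner is close enough. Hypothetical editor helper,
// not part of the Dynamsoft SDK.
function hitCorner(quad: Point[], touch: Point, radius: number): number {
  for (let i = 0; i < quad.length; i++) {
    const dx = quad[i].x - touch.x;
    const dy = quad[i].y - touch.y;
    // Compare squared distances to avoid a square root.
    if (dx * dx + dy * dy <= radius * radius) {
      return i;
    }
  }
  return -1;
}
```

On a touch-down event, the returned index tells the editor which corner to move as the gesture continues; `-1` means the touch should be ignored (or pan the view instead).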
## Run the Project
Go to your project folder, open a _new_ terminal and run the following command:
### For Android
```bash
# using npm
npm run android
# OR using Yarn
yarn android
```
### For iOS

If you run the project from Xcode, remember to open `*.xcworkspace` (not `.xcodeproj`) from the `ios` directory.

```bash
# using npm
npm run ios

# OR using Yarn
yarn ios
```
If everything is set up correctly, you should see your new app running on your device. This is one way to run your app — you can also run it directly from within Android Studio and Xcode respectively.
> [!NOTE]
>
> If you build the Android project on Windows, you may encounter build errors due to the Windows Maximum Path Length Limitation. We therefore recommend moving the project to a directory with a shorter path.
The full sample code is available here.
- How to enable the new architecture in Android
- How to enable the new architecture in iOS

If you have questions, contact us at https://www.dynamsoft.com/company/contact/.