Camera, Barcodes, and On-Device ML with VisionCamera
John Hambardzumian · Full Stack & Mobile Developer | Node.js, React Native, PHP, Laravel | 7+ Years Building Scalable Web & Mobile Apps
Apr 11, 2026 · 5 min read

Camera-based workflows—QR login, retail scanning, and augmented overlays—require low-latency preview, configurable resolution profiles, and optional frame processors that analyze buffers before display. The legacy react-native-camera is deprecated, and community effort has largely shifted to VisionCamera (react-native-vision-camera), which offers modern APIs and an extensible frame-processor architecture.
Permissions and UX copy
Camera access is sensitive—show a pre-permission education screen that explains why the app needs it before triggering the system prompt. Handle the denied and restricted states gracefully, with a deep link to Settings. On Android, scoped storage and manufacturer-specific camera HALs introduce fragmentation—test on Samsung and Xiaomi devices at minimum.
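VisionCamera exposes `Camera.getCameraPermissionStatus()` and `Camera.requestCameraPermission()`; the UX decision around those calls can live in a small pure function that is easy to unit-test. A minimal sketch, assuming the status strings VisionCamera documents; the `PermissionAction` names are illustrative, not a library API:

```typescript
// Status strings as documented by VisionCamera's permission API.
type CameraPermissionStatus = 'granted' | 'not-determined' | 'denied' | 'restricted';

// What the UI should render next (names are mine, not the library's).
type PermissionAction =
  | 'show-camera'     // permission held: render the <Camera> view
  | 'show-education'  // not asked yet: show the pre-permission screen first
  | 'open-settings';  // denied/restricted: the OS dialog will not reappear

function nextAction(status: CameraPermissionStatus): PermissionAction {
  switch (status) {
    case 'granted':
      return 'show-camera';
    case 'not-determined':
      // Explain the value before triggering the system prompt.
      return 'show-education';
    case 'denied':
    case 'restricted':
      // Deep-link to Settings, e.g. Linking.openSettings() in React Native.
      return 'open-settings';
  }
}
```

In the component, call `Camera.requestCameraPermission()` only after the education screen, and re-check the status on `AppState` changes so users returning from Settings see the camera immediately.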
Frame processors and ML
Frame processors run worklets on every frame; integrate ML Kit barcode or text recognition with an eye on thermal budgets, since sustained inference throttles the device. Downscale frames before inference when accuracy allows.
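The downscaling advice can be made concrete: compute a target size that fits the frame within a model-friendly bound while preserving aspect ratio. A pure sketch (the `fitWithin` helper is mine; inside a VisionCamera frame processor you would apply the result with a resize plugin before handing pixels to ML Kit):

```typescript
// Scale (w, h) down so the longer side is at most maxSide,
// preserving aspect ratio. Never upscales.
function fitWithin(w: number, h: number, maxSide: number): { width: number; height: number } {
  const longest = Math.max(w, h);
  if (longest <= maxSide) return { width: w, height: h };
  const scale = maxSide / longest;
  return {
    width: Math.round(w * scale),
    height: Math.round(h * scale),
  };
}

// Example: a 4032x3024 sensor frame fed to a 640-pixel detector input.
// fitWithin(4032, 3024, 640) -> { width: 640, height: 480 }
```

Throttling helps too: running the detector on every second or third frame is usually invisible to the user but roughly halves the thermal load.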
Orientation and preview sizing
A mismatch between sensor orientation and UI layout causes stretched or rotated previews. Lock orientation explicitly for scanning flows when the product allows it.
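The correction a preview layer applies can be reasoned about as modular arithmetic over the sensor's mounting orientation and the current interface orientation, both in degrees clockwise. A sketch of that arithmetic, useful when debugging a rotated preview (the helper name is mine; front cameras mirror and use the opposite sign, per the Android Camera2 documentation):

```typescript
// Degrees the preview buffer must be rotated clockwise so it appears
// upright in the UI, given the sensor's mounting orientation and the
// current interface orientation (both in degrees clockwise).
// Matches the back-camera formula from the Android Camera2 docs.
function previewRotation(sensorOrientation: number, interfaceOrientation: number): number {
  return (((sensorOrientation - interfaceOrientation) % 360) + 360) % 360;
}

// Typical Android back camera: sensor mounted at 90 degrees.
// Portrait UI (0) needs a 90-degree rotation; landscape (270) needs 180.
```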
Performance
High-resolution capture increases memory bandwidth and power draw. Choose photo versus video priorities per screen—don't leave a 4K preview enabled unnecessarily.
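VisionCamera's `useCameraFormat` hook picks a device format from a prioritized filter list; the idea can be illustrated with a pure picker that chooses the cheapest format meeting a screen's needs (the `Format` shape and `pickFormat` are simplified stand-ins, not the library's types):

```typescript
interface Format {
  width: number;   // video/preview width in pixels
  height: number;
  maxFps: number;
}

// Pick the lowest-resolution format that still meets the target,
// so a barcode screen never pays for a 4K pipeline it does not need.
function pickFormat(formats: Format[], minWidth: number, minFps: number): Format | undefined {
  return formats
    .filter((f) => f.width >= minWidth && f.maxFps >= minFps)
    .sort((a, b) => a.width * a.height - b.width * b.height)[0];
}

const formats: Format[] = [
  { width: 3840, height: 2160, maxFps: 30 }, // 4K: overkill for scanning
  { width: 1920, height: 1080, maxFps: 60 },
  { width: 1280, height: 720, maxFps: 60 },
];

// A scan screen needs roughly 720p at 30 fps; pickFormat returns the 1280x720 entry.
```

With the real library you would express the same intent roughly as `useCameraFormat(device, [{ videoResolution: { width: 1280, height: 720 } }, { fps: 30 }])` and pass the result to the Camera's `format` prop.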
Privacy
If frames leave the device for cloud inference, disclose it in your privacy policy and minimize retention. Prefer on-device models when feasible.
Takeaways
Camera features are hardware-dependent; maintain a device lab matrix and track crash reports segmented by manufacturer.
