Google’s on-device ML Kit introduced another useful API, Entity Extraction, which extracts entities from static text or as the user types. Once an entity is extracted, you can easily enable different actions for the user based on the entity type.
The Entity Extraction API supports multiple languages, including English, Arabic, French, and German; you can check the rest of the list here.
The supported entities include the following:
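For example, an address or date-time entity in a message could trigger a map or calendar action. A minimal Kotlin sketch of extracting entities from a string (assuming the Entity Extraction dependency is already added to your Gradle file; the sample sentence is just an illustration) might look like this:

```kotlin
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractionParams
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

val extractor = EntityExtraction.getClient(
    EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
)

// The language model is downloaded on first use.
extractor.downloadModelIfNeeded()
    .addOnSuccessListener {
        val params = EntityExtractionParams
            .Builder("Meet me at 1600 Amphitheatre Parkway tomorrow at 6pm")
            .build()
        extractor.annotate(params).addOnSuccessListener { annotations ->
            for (annotation in annotations) {
                for (entity in annotation.entities) {
                    // entity.type distinguishes addresses, date-times, etc.
                    println("${annotation.annotatedText} -> ${entity.type}")
                }
            }
        }
    }
```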
Google’s on-device ML Kit has introduced another API, on-device Translation, with which you can dynamically translate text into different languages. The API has broad language support, covering more than 50 languages. Check the complete list of supported languages here.
Another benefit of this API is that once you have downloaded a translation model, you can use it in offline mode.
In this article, we’ll do just that on Android with the help of Google ML Kit’s Translation API.
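As a preview, here is a minimal Kotlin sketch of the flow (the language pair and sample string are just illustrations): download the model once, then translate offline.

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

val translator = Translation.getClient(
    TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.GERMAN)
        .build()
)

// Download the model once (e.g. on Wi-Fi); later calls work offline.
val conditions = DownloadConditions.Builder().requireWifi().build()
translator.downloadModelIfNeeded(conditions)
    .addOnSuccessListener {
        translator.translate("Hello, world!")
            .addOnSuccessListener { translated -> println(translated) }
    }
```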
Google’s on-device ML Kit introduced another API for Image Labeling, with which you can extract and detect information from an image across a broad group of categories. The default image labeling model identifies objects, places, activities, animal species, products, and more.
The API’s default base model recognizes more than 400 labels that cover the most commonly found concepts in photos. In this article, we’ll build an image labeling model on Android with the help of Google ML Kit’s Image Labeling API.
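The core of the flow can be sketched in a few lines of Kotlin (assuming the Image Labeling dependency is in your Gradle file; `bitmap` is a placeholder for an image you have already loaded):

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Wrap an already-loaded Bitmap; 0 means no extra rotation.
val image = InputImage.fromBitmap(bitmap, 0)
val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

labeler.process(image)
    .addOnSuccessListener { labels ->
        for (label in labels) {
            // Each label comes with a confidence score between 0 and 1.
            println("${label.text}: ${label.confidence}")
        }
    }
```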
Google’s on-device ML Kit recently introduced another API for Pose Detection, which is currently in beta. It is a lightweight API that can detect a subject’s pose in static images or in continuous video in real time.
A pose describes the person's body position with the help of pose landmarks. The landmarks are body parts such as shoulders, nose, etc.
In this article, we’ll test out pose detection on Android with the help of Google ML Kit’s Pose Detection API.
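In outline, the Kotlin usage looks like the sketch below (assuming the Pose Detection dependency is added; `bitmap` is a placeholder for a photo of a person):

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

val detector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        // Use STREAM_MODE instead for continuous video.
        .setDetectorMode(PoseDetectorOptions.SINGLE_IMAGE_MODE)
        .build()
)

detector.process(InputImage.fromBitmap(bitmap, 0))
    .addOnSuccessListener { pose ->
        // Each landmark (nose, shoulders, ...) carries a 2D position.
        pose.getPoseLandmark(PoseLandmark.NOSE)
            ?.let { println("Nose at ${it.position}") }
    }
```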
ML Kit is a cross-platform mobile SDK (Android and iOS) developed by Google that allows developers to easily access on-device…
Google’s on-device ML Kit recently introduced another API for barcode scanning. It can read and scan almost a dozen different types of barcodes including Codabar, Code 39, Code 93, EAN-8, EAN-13, QR code, PDF417, and more.
The barcode scanning API automatically detects the format of a scanned barcode and reports it back to you. The other main feature of this API is that you can use it in your application without a network connection, and it supports barcodes in any orientation.
In this article, we’ll do just that on Android with the help of Google ML Kit’s Barcode Scanning API.
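The essential Kotlin flow can be sketched as follows (assuming the Barcode Scanning dependency is added and `bitmap` is a placeholder image; note that the `Barcode` import path varies slightly between SDK versions):

```kotlin
import com.google.mlkit.vision.barcode.Barcode
import com.google.mlkit.vision.barcode.BarcodeScannerOptions
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

// Restricting formats is optional but speeds up detection.
val scanner = BarcodeScanning.getClient(
    BarcodeScannerOptions.Builder()
        .setBarcodeFormats(Barcode.FORMAT_QR_CODE, Barcode.FORMAT_EAN_13)
        .build()
)

scanner.process(InputImage.fromBitmap(bitmap, 0))
    .addOnSuccessListener { barcodes ->
        for (barcode in barcodes) {
            // `format` reports which of the supported formats was detected.
            println("${barcode.rawValue} (format ${barcode.format})")
        }
    }
```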
By now, you are probably familiar with the concept of “smart reply” features. In many messaging applications nowadays, you’ll see a list of suggested replies when you receive a message. And this makes it easier to respond without thinking about what you’re going to say—especially when the reply is obvious.
You can easily add Smart Reply to your mobile apps, such as chat or review apps, with the help of Google’s on-device machine learning framework, ML Kit, which makes adding this capability to your projects straightforward.
In this article, we’ll do just that on Android with the help…
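The basic Kotlin usage can be sketched like this (assuming the Smart Reply dependency is added; the message text and user ID are illustrative placeholders):

```kotlin
import com.google.mlkit.nl.smartreply.SmartReply
import com.google.mlkit.nl.smartreply.SmartReplySuggestionResult
import com.google.mlkit.nl.smartreply.TextMessage

// A short conversation history; the last message is from the remote user.
val conversation = listOf(
    TextMessage.createForRemoteUser(
        "Are we still on for lunch today?",
        System.currentTimeMillis(),
        "remote-user-id"
    )
)

SmartReply.getClient().suggestReplies(conversation)
    .addOnSuccessListener { result ->
        if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
            // Typically up to three suggested replies.
            result.suggestions.forEach { println(it.text) }
        }
    }
```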
Creating accurate machine learning models capable of identifying multiple faces (or other target objects) in a single image remains a core challenge in computer vision. But with advances in deep learning and on-device computer vision models, you can now detect faces in an image much more easily.
In this article, we’ll do just that on Android with the help of Google ML Kit’s Face Detection API.
Face detection is a computer vision task with which you can detect faces in an image; it also detects various parts of the face, known as landmarks. …
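In Kotlin, the detection step can be sketched as follows (assuming the Face Detection dependency is added; `bitmap` is a placeholder for an image containing faces):

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions
import com.google.mlkit.vision.face.FaceLandmark

val detector = FaceDetection.getClient(
    FaceDetectorOptions.Builder()
        // Ask the detector to also locate landmarks (eyes, nose, ...).
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
        .build()
)

detector.process(InputImage.fromBitmap(bitmap, 0))
    .addOnSuccessListener { faces ->
        for (face in faces) {
            println("Face bounds: ${face.boundingBox}")
            face.getLandmark(FaceLandmark.LEFT_EYE)
                ?.let { println("Left eye at ${it.position}") }
        }
    }
```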
Shake is a bug reporting SDK for iOS and Android apps. It’s a powerful tool that compares favorably to other bug reporting tools like Instabug or Crashlytics. In this article, I’ll explore some of the main features of the Shake SDK, its benefits, and how you can integrate it into your Android project.
With the Shake SDK added to your app, your testers can report any bug in under 5 seconds. It captures all the extra info you need as a developer and sends it to the Shake web dashboard:
Shake SDK automatically captures the following information:
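Getting started is typically just a call in your `Application` class. The sketch below is an assumption based on Shake’s documented setup; the credentials are placeholders from your Shake dashboard, and the exact `start()` signature may differ between SDK versions, so check Shake’s docs for your version:

```kotlin
import android.app.Application
import com.shakebugs.shake.Shake

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // Placeholder credentials; copy yours from the Shake dashboard.
        Shake.start(this, "your-client-id", "your-client-secret")
    }
}
```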
Nowadays, language detection is increasingly common (especially with machine learning), and mobile apps are used in every part of the world, with different users speaking different languages. Language identification can easily help you understand your users’ languages and personalize your app accordingly.
Language detection is essentially a technique that allows us to automatically identify the language of a given text, be it English, Chinese, or many others. We can use machine learning for this kind of identification, and we can even do it inside mobile apps!
In this article, we’ll do just that, with…
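With ML Kit’s Language Identification API, this boils down to very little Kotlin (assuming the language ID dependency is added; the sample string is just an illustration):

```kotlin
import com.google.mlkit.nl.languageid.LanguageIdentification

LanguageIdentification.getClient()
    .identifyLanguage("Bonjour tout le monde")
    .addOnSuccessListener { languageCode ->
        // "und" means the language could not be determined.
        if (languageCode == "und") println("Language not identified")
        else println("Language: $languageCode")
    }
```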
Firebase Crashlytics is a real-time crash reporting tool. It helps by automatically collecting, analyzing, and organizing your crash reports.
It also helps you understand which issues are most important, so you can prioritize those first and keep your users happy. It combines a real-time dashboard and alerts to keep you aware of your newest issues and any sudden changes in stability.
The first step we need to take is to create a Firebase project and add it to our Android app. At this point, I’ve already created a Firebase project, added it to a new project, and also included the Google…
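Once Crashlytics is wired up, you can enrich your reports from Kotlin. The sketch below shows attaching context and logging a non-fatal exception; the user ID, custom key, and `riskyOperation()` are illustrative placeholders:

```kotlin
import com.google.firebase.crashlytics.FirebaseCrashlytics

val crashlytics = FirebaseCrashlytics.getInstance()

// Attach context that will appear alongside any crash report.
crashlytics.setUserId("user-123")
crashlytics.setCustomKey("last_screen", "checkout")
crashlytics.log("Checkout flow started")

// Record a caught exception as a non-fatal issue.
try {
    riskyOperation() // placeholder for your own code
} catch (e: Exception) {
    crashlytics.recordException(e)
}
```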