A UNICEF report estimates that 50% of low birth weight (LBW) babies fall through the cracks and are never identified: about 10 million babies per year. LBW babies are prone to growing up with developmental disorders, but if these babies can be identified in time, their quality of life can be improved through timely care.
At Wadhwani Institute for Artificial Intelligence (Wadhwani AI), we believe in empowering the ASHA (Accredited Social Health Activist) workers who care for these babies and their mothers. But first, the ASHAs need to find the babies and weigh them accurately. Our answer is an AI-based anthropometry tool that uses state-of-the-art reconstruction methods to recover the 3D pose and shape of a baby. This helps the healthcare worker measure medically important criteria such as weight, height, chest circumference, head circumference and arm circumference to identify at-risk neonates.
Our approach recovers a 3D mesh of the baby, parameterized by pose and shape, from a monocular RGB video. We built a custom deformable neonate model from 3D scans we collected; it compresses the representation of the mesh to just 92 dimensions. Once we recover the 3D mesh, calculating measurements is straightforward: in particular, since babies have approximately constant density, weight can be estimated from the mesh volume. To train our reconstruction algorithms, we use a combination of 2D and 3D data, both synthetic and real, together with the deformable neonate model that enables this compact representation.
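To make the weight-from-volume step concrete, here is a minimal sketch, assuming a watertight (closed, consistently oriented) output mesh, of how weight can be read off a reconstruction. The density constant is an illustrative placeholder, not a calibrated value from our pipeline.

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Volume of a closed triangle mesh via signed tetrahedra.

    vertices: (V, 3) float array of 3D positions, in metres.
    faces:    (F, 3) int array of triangle indices with consistent
              (outward-facing) winding.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Each triangle plus the origin forms a tetrahedron; the signed
    # volumes sum to the volume enclosed by the surface.
    signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0
    return float(abs(signed.sum()))

# Placeholder density, roughly that of water; a real system would
# calibrate this against ground-truth weights.
DENSITY_KG_PER_M3 = 1000.0

def estimate_weight_kg(vertices: np.ndarray, faces: np.ndarray) -> float:
    return mesh_volume(vertices, faces) * DENSITY_KG_PER_M3
```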
How do we do this?
The first question we tried to answer: can we collect large amounts of 3D data? Collecting accurate 3D data is hard and expensive. It typically involves a large, immobile multi-camera system and a trained technician to operate it, so collecting large datasets with 3D ground truth is not feasible. Instead, we took the route of creating a large 2D dataset and using it as a proxy to help our models learn 3D reconstruction. Recent research on 3D reconstruction of adults has shown that fairly accurate reconstructions can be obtained from large amounts of 2D data in tandem with small amounts of 3D data.
But even large 2D video datasets pertinent to our problem are difficult to come by and, to our knowledge, no such dataset exists. So we started a large-scale data collection exercise spanning four states of India, with appropriate ethics committee approvals. Our data collectors are a mix of ASHAs, ANMs (Auxiliary Nurse Midwives) and nurses, collecting videos from homes, primary healthcare centres and hospitals respectively, across around 50 locations in the country. They undergo appropriate training and periodic retraining to ensure a minimum bar on data quality.
Collecting data
We collect 2D data in the form of a video, using a generic low-cost smartphone accessible to our data collectors. For each baby, we also record height, weight, chest circumference, head circumference and arm circumference; these are the target variables we wish to estimate at evaluation time.
A baby is placed on a flat surface with a reference object next to it. We capture different profiles of the baby (including extreme side angles) by moving the camera in an arc above the baby. This entire process is captured in a 10-15 second video.
We then obtain manual keypoint and segmentation mask annotations for the videos; these serve as proxies for 3D pose and shape. Our researchers train a team of annotators who label frames sampled from each video, and annotations for the remaining frames are filled in by interpolation, as sketched below.
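The interpolation scheme is not pinned down above, so here is a minimal sketch assuming simple linear interpolation of keypoint coordinates between annotated frames; the function name and data layout are illustrative, and propagating segmentation masks would need a different scheme.

```python
import numpy as np

def interpolate_keypoints(annotated: dict[int, np.ndarray],
                          num_frames: int) -> np.ndarray:
    """Propagate sparse keypoint annotations to every frame.

    annotated:  maps frame index -> (K, 2) array of 2D keypoints
                for the frames a human actually labelled.
    num_frames: total frame count of the video.
    Returns a (num_frames, K, 2) array, linearly interpolated
    (and clamped to the nearest annotation outside the labelled range).
    """
    idx = np.array(sorted(annotated))            # labelled frame indices
    pts = np.stack([annotated[i] for i in idx])  # (A, K, 2)
    frames = np.arange(num_frames)
    out = np.empty((num_frames, pts.shape[1], 2))
    for k in range(pts.shape[1]):
        for d in range(2):  # x and y independently
            out[:, k, d] = np.interp(frames, idx, pts[:, k, d])
    return out
```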
We use the keypoints and segmentation masks through a re-projection loss: we project the 3D predictions back onto 2D, then devise loss functions that compare the re-projected keypoints and silhouettes against the ground-truth annotations. This re-projection loss is what allows us to build our algorithm with limited 3D scans while using the 2D videos as a proxy.
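As an illustration, below is a minimal sketch of the keypoint half of such a loss. The weak-perspective camera is an assumption borrowed from common adult-reconstruction pipelines, not a statement of which projection our system uses, and the mask term (e.g. comparing a rendered silhouette against the annotated mask) is omitted for brevity.

```python
import torch

def reproject(joints_3d: torch.Tensor, scale: torch.Tensor,
              trans: torch.Tensor) -> torch.Tensor:
    """Weak-perspective projection of 3D joints into the image.

    joints_3d: (B, K, 3) joints from the predicted mesh.
    scale:     (B, 1) predicted camera scale.
    trans:     (B, 2) predicted in-plane camera translation.
    """
    return scale.unsqueeze(-1) * joints_3d[..., :2] + trans.unsqueeze(1)

def keypoint_reprojection_loss(joints_3d, scale, trans,
                               kp_gt, visibility):
    """Visibility-masked L1 loss between projected and annotated keypoints.

    kp_gt:      (B, K, 2) ground-truth 2D keypoints.
    visibility: (B, K) mask, 1 where a keypoint was annotated.
    """
    kp_pred = reproject(joints_3d, scale, trans)
    err = (kp_pred - kp_gt).abs().sum(dim=-1)  # (B, K) per-joint error
    return (visibility * err).sum() / visibility.sum().clamp(min=1)
```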
This dataset has now become the bedrock for all our research in anthropometry.