For new parents, few things are more frustrating than trying to figure out why a baby is crying, and what to do about it.
While it’s tempting to treat that frustration as an eternal rite of passage, a group of U.S. researchers has devised a new artificial intelligence method that can identify and distinguish between normal cry signals and abnormal ones, such as those resulting from an underlying illness. The method, based on a cry language recognition algorithm, could be useful to parents at home as well as in healthcare settings, where doctors could use it to help interpret the cries of sick children.
The research was published in the May issue of IEEE/CAA Journal of Automatica Sinica (JAS), a joint publication of the IEEE and the Chinese Association of Automation. As the report sums up the project’s goals, “analyzing infant cries provides a non-invasive diagnostic of the condition of the infant without using invasive tests. Using an infant's cry as a diagnostic tool plays an important role in various situations: tackling medical problems in which there is currently no diagnostic tool available (e.g. sudden infant death syndrome (SIDS), problems in developmental outcome and colic), tackling medical problems in which early detection is possible only by invasive procedures (e.g. chromosomal abnormalities), and finally tackling medical problems which may be readily identified but would benefit from an improved ability to define prognosis, (e.g. prognosis of long term developmental outcome in cases of prematurity and drug exposure).”
The new research uses an algorithm based on automatic speech recognition to detect and recognize the features of infant cries, enabling users to distinguish the meanings of both normal and abnormal cry signals even in a noisy environment. The algorithm is independent of the individual crier, so it can be applied broadly in practical settings to recognize and classify cry features and to better understand why babies are crying and how urgent the cries are.
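The article does not spell out which acoustic features or models the team used, but the general pipeline it describes can be sketched. The short Python example below, built on the librosa and NumPy libraries, shows one common way to turn a cry recording into a compact, speaker-independent feature vector using MFCC statistics; the file name, sample rate, and parameter choices are illustrative assumptions, not details taken from the study.

# A hedged sketch of cry-feature extraction; MFCC features and the 16 kHz
# sample rate are illustrative assumptions, not the paper's exact recipe.
import librosa
import numpy as np

def extract_cry_features(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Summarize a cry recording as a fixed-length feature vector."""
    signal, sr = librosa.load(wav_path, sr=16000)  # load and resample the audio
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Averaging over time makes recordings of different lengths comparable
    # and discards much of the variation tied to the individual crier.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical usage:
# features = extract_cry_features("cry_sample.wav")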
"Like a special language, there is lots of health-related information in various cry sounds,” explained Lichuan Liu, corresponding author and Associate Professor of Electrical Engineering and the Director of Digital Signal Processing Laboratory, whose group conducted the research. “The differences between sound signals actually carry the information. These differences are represented by different features of the cry signals. To recognize and leverage the information, we have to extract the features and then obtain the information in it.”
The researchers hope that the findings of their study will also prove applicable in other medical-care settings where decision making relies heavily on experience.