Takeaway: While AI may seem expensive to implement, the money it saves and the improvement in patient care can more than make up for the cost.
Pattern matching and predicting an exigent need in hospitals is a difficult task for skilled medical staff, but not for AI and machine learning. Medical staff do not have the luxury of observing each of their patients full time. Although incredibly good at identifying the immediate needs of patients in obvious circumstances, nurses and medical staff cannot discern the future from a complex array of patient symptoms exhibited over a period of time. Machine learning has the luxury of not only observing and analyzing patient data 24/7, but also combining information collected from multiple sources: historical records, daily evaluations by medical staff, and real-time measurements of vitals such as heart rate, oxygen saturation and blood pressure. The application of AI in the assessment and prediction of imminent heart attacks, falls, strokes, sepsis and complications is currently underway all over the world.
A real-world example is El Camino Hospital, which linked EHR (electronic health record), bed alarm and nurse call light data to analytics to identify patients at high risk of falling. El Camino Hospital reduced falls, a major cost to hospitals, by 39%.
The machine learning methodologies used by El Camino are the tip of the iceberg, but they represent the future of health care built on action-focused insights, or prescriptive analytics. The hospital is using only a small subset of the potential information available: the physical actions taken by the patient, such as exiting the bed and pushing the help button, in conjunction with health records, which amount to a periodic measurement by hospital staff. Hospital machinery is not yet feeding significant data from cardiac monitors, respiration monitors, oxygen saturation monitors, ECGs and cameras into big data storage with event identification.
Integrating AI solutions with current hospital systems is an economic, political and technical problem. The purpose of the remainder of this article is to discuss the technical problems, which can be broken down into the following functions (a schematic sketch of the pipeline follows the list):
Get the data
Clean the data
Transport the data
Analyze the data
Notify the stakeholders
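To make the pipeline concrete, here is a minimal schematic in Python. Every name in it is hypothetical; it is a sketch of how the five functions hang together, not a reference implementation.

```python
# Schematic of the five pipeline stages (all names hypothetical).

def get_data(source):
    """Pull raw readings from an EHR, monitor or nurse call system."""
    return source.read()

def clean_data(raw_readings):
    """Drop or smooth bad sensor readings before transport."""
    return [r for r in raw_readings if r is not None]

def transport_data(records, queue):
    """Publish cleaned records to a message queue for async storage."""
    for record in records:
        queue.publish(record)

def analyze_data(records, model):
    """Score records with a trained model and return risk predictions."""
    return model.predict(records)

def notify_stakeholders(predictions, pager, threshold=0.8):
    """Page staff when a predicted risk crosses the threshold."""
    for patient_id, risk in predictions:
        if risk >= threshold:
            pager.alert(patient_id, risk)
```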
Getting and cleaning data is a challenging aspect of all AI implementations. A good starting point for understanding the resources needed to access a typical EHR like Epic is the article How To Integrate With Epic.
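Epic, like most modern EHRs, exposes patient data through a FHIR REST API. The snippet below is a minimal sketch assuming a hypothetical FHIR R4 base URL and an already-obtained OAuth2 token; the real authorization flow is covered in Epic's own developer documentation.

```python
import requests

# Hypothetical FHIR R4 endpoint and token; replace with values
# from your own Epic sandbox registration.
FHIR_BASE = "https://fhir.example-hospital.org/api/FHIR/R4"
TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"

def fetch_vital_signs(patient_id):
    """Fetch vital-sign Observations for one patient via FHIR search."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "vital-signs"},
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    # A FHIR search returns a Bundle; each entry holds one Observation.
    return [entry["resource"] for entry in bundle.get("entry", [])]
```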
Feed Data in Real Time to Big Data
We are doing predictive analytics, not real-time alarming; these are distinctly different problems. Real-time predictive analytics can afford to drop streaming data, but not event data. Event data are the identifier tags that bookend an event, such as heart rate over a period of time or oxygen saturation over a specific interval. Streaming data are the individual readings, such as each heartbeat or each pulse-oximeter sample. This distinction is very important because a delivery guarantee is expensive in terms of performance: we must guarantee events, of which there are a limited number, and we must not guarantee the raw stream.
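To make the distinction concrete, here is a minimal sketch of the two record types, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class StreamReading:
    """One raw sensor sample: high volume, safe to drop."""
    patient_uuid: str
    sensor: str          # e.g. "heart_rate", "spo2"
    value: float
    timestamp: float     # epoch seconds

@dataclass
class EventMarker:
    """Bookends a labeled event: low volume, must be delivered."""
    patient_uuid: str
    event_type: str      # e.g. "fall_risk", "sepsis_alert"
    start: float         # epoch seconds of the opening bookend
    end: float           # epoch seconds of the closing bookend
```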
EHR, nurse call and patient monitoring data all need to be associated with a patient at every point in time. This requires a unique identifier that is shared between all systems and easily implemented, such as a UUID (universally unique identifier). From an implementation perspective, cameras with built-in bar code readers that scan the environment cover many of the functional requirements of a comprehensive implementation: a well-implemented system can scan bed bar codes, patient wristband bar codes, prescription bar codes and intravenous bar codes while assigning a new UUID on every patient bed change. Current hospital technology already includes nurse scanners for patient wristband bar codes.
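The UUID assignment itself is simple; Python's standard uuid module does the heavy lifting. A minimal sketch, assuming hypothetical scanner inputs:

```python
import uuid
import time

def new_bed_session(bed_barcode, wristband_barcode):
    """Assign a fresh UUID when a patient is placed in a bed.

    All subsequent sensor and EHR records for this stay reference
    the session UUID, tying every system to one patient/bed pairing.
    """
    return {
        "session_uuid": str(uuid.uuid4()),
        "bed": bed_barcode,
        "wristband": wristband_barcode,
        "started": time.time(),
    }

# Example: scanning bed "B-301" and wristband "W-84712" opens a session.
session = new_bed_session("B-301", "W-84712")
```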
Our goal is to write geospatial time series data in real time to big data storage. The most significant lag is the write to the database, so we must queue data asynchronously somewhere, and the best method of doing that is a messaging platform such as RabbitMQ or Kafka. RabbitMQ can handle on the order of 1 million messages per second, and Kafka up to 60 million per second. Out of the box, RabbitMQ emphasizes per-message delivery guarantees, while Kafka is tuned for throughput, though it can be configured for strong delivery guarantees as well. The basic strategy becomes publishing data to exchanges or topics that have the characteristics your data needs. (Amazon is trying to use big data to lower health care costs. Learn more in Amazon Health Care Plans - A True Market Revolution?)
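The guarantee trade-off shows up directly in producer configuration. Below is a minimal sketch using the kafka-python client, assuming a hypothetical broker address and topic names: events are sent with full acknowledgment, while the raw stream is fire-and-forget.

```python
import json
from kafka import KafkaProducer

# Hypothetical broker address.
BROKER = "localhost:9092"

# Event producer: wait for all in-sync replicas (guaranteed delivery).
event_producer = KafkaProducer(
    bootstrap_servers=BROKER,
    acks="all",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Stream producer: no acknowledgment (maximum throughput, may drop).
stream_producer = KafkaProducer(
    bootstrap_servers=BROKER,
    acks=0,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Events must arrive; raw readings may be dropped under load.
event_producer.send("patient-events",
                    {"event_type": "fall_risk", "patient_uuid": "example-uuid"})
stream_producer.send("vitals-stream",
                     {"sensor": "heart_rate", "value": 72.0})
```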
Labeling Events for Better Machine Learning
The most efficient machine learning algorithms are those with clearly defined data sets and labels. Excellent, well-known algorithms are already used to identify cancer and read X-rays. As Alexander Gelfand points out in his article Deep Learning and the Future of Biomedical Image Analysis, data labeling is critical to the success of machine learning. Beyond the labeling itself, it is very important to bookend the geospatial time series data in well-defined, consistent chunks referencing the labeled event, because well-defined, consistent labels serve as the selection criteria for training.
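As an illustration of the bookending idea, the sketch below cuts one consistently sized, labeled chunk out of a vitals time series. The parameter names and padding value are assumptions for the example:

```python
import pandas as pd

def chunk_around_event(vitals: pd.DataFrame, event_start, event_end,
                       label: str, pad_seconds: int = 60):
    """Extract one labeled, consistently bookended training chunk.

    `vitals` is expected to have a DatetimeIndex; the pad captures
    context just before and after the event bookends.
    """
    pad = pd.Timedelta(seconds=pad_seconds)
    chunk = vitals.loc[event_start - pad : event_end + pad].copy()
    chunk["label"] = label
    return chunk
```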
Clean Data Before Shipping (Ship Gold, Not Dirt)
All data going forward should be treated as geospatial datetime data. Clean the data before publishing it to a queue and writing it to a database. For raw sensor data, the most efficient method is to apply an exponential moving average to smooth the readings prior to shipment. Our saying is to ship the best gold you can, not the dirt. Over the long haul, shipping and storing data is expensive, so make sure the data is as clean as possible prior to shipment and storage.
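In pandas, the exponential moving average is essentially a one-liner. A minimal sketch over an assumed raw heart-rate series:

```python
import pandas as pd

# Hypothetical raw heart-rate samples (beats per minute), one per second.
raw = pd.Series([72, 71, 140, 73, 72, 74, 73],  # 140 is a sensor glitch
                index=pd.date_range("2018-09-17", periods=7, freq="s"))

# Exponential moving average: recent readings weigh more, spikes damp out.
smoothed = raw.ewm(alpha=0.3).mean()

print(smoothed.round(1))
```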
CNN for Solid Identification of Labeled Sensory Data
For the purposes described in this article, there are well-defined public data sets and machine learning libraries to use as templates for your implementations. Good analysts and solid programmers can implement solid AI in less than six months of effort, given dedicated time to learn and practice with the available repositories. An excellent image recognition repository for understanding CNNs (convolutional neural networks), with 87 percent accuracy on melanoma recognition, is the Skin Cancer Detection Project. An excellent library for understanding how to combine sensors for event recognition is the LSTMs for Human Activity Recognition project by Guillaume Chevalier, which combines sensor input to discriminate between different activities. In a hospital setting, this same methodology works for an array of medical conditions. (For more examples of recent AI breakthroughs in health, check out The 5 Most Amazing AI Advances in Health Care.)
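To show the shape of that methodology, here is a minimal Keras sketch of an LSTM classifier over windows of multichannel vitals. The window length, channel count, layer sizes and class count are illustrative assumptions, not values taken from the projects above.

```python
import tensorflow as tf

# Illustrative dimensions: 128 time steps per window, 3 sensor channels
# (e.g. heart rate, SpO2, respiration), 4 hypothetical event classes.
TIMESTEPS, CHANNELS, N_CLASSES = 128, 3, 4

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, CHANNELS)),
    tf.keras.layers.LSTM(64),                 # summarize the window
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```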
The Future
The application of AI in hospital and health care settings is happening now. Improving the accuracy of health care delivery by recognizing critical events, through the integration of patient monitoring equipment, wearable sensors and health records, has known solutions that are already being implemented. The extent of AI's impact on our health and financial futures is incalculable. The barriers to entry are low. Grab your boards and paddle for this wave. You can impact the future of medical costs worldwide.