Under the Hood


Lean in a little closer…closer…whoa, not that close!

I’m speaking (writing) in hushed tones because in this special sort-of-but-not-really-post-post Halloween episode we’re discussing the dark side of AI.

Imagine this nightmare scenario: One day in the not-too-far-off future, you're on the way to work. It's a beautiful day. The sun is shining, the birds are chirping, your self-driving car is doing all the work while you relax and sing along with your favorite rap-opera-country song.

Suddenly and for no apparent reason, your car violently lurches to the left and drives straight into an artificial tree! Stunned, you emerge a few moments later. The car’s safety features have spared you any physical injury, but the horror slowly sets in that your pumpkin spice latte is gone forever. The EMT robots arrive and begin probing you with instruments you don’t understand as your head clears and you start to wonder what happened.

“Did my constant singing finally drive the car to suicide? Surely my voice isn’t that bad.”

It is, but that probably wasn't the cause. A short while later a group of software engineers gathers around the crash site to search for a cause, the last strains of Carmen Wrecked My Pickup for Shizzle crackling through the damaged speakers. They mumble to each other in an incomprehensible jargon, reminiscent of an ancient forgotten tongue. After a few moments of silence, some head scratching, and more silence, they give each other a knowing glance.

In the morning, they’ll blame it on the hardware.

And now many of you are thinking, “That’s just silly. My mechanic, Bunky, can plug a thing into the car now and find out what’s wrong with it.” Yes, that’s true today, but, on a side note, why are all good mechanics only known by their nickname? Bunky, Cooter, Skeeter…it’s like they’re part of some secret society or something. Or maybe their nicknames hide their real identities so you don’t call them at 3 a.m. when you can’t sleep because your car made a noise on the way home and it’s probably nothing but you’re thinking the car may be possessed by the spirit of Mikhail Baryshnikov because you saw this show on the Discovery channel where…

Sorry, getting off track there. Long story short: I recently had to switch mechanics.

Anyway, cars today are relatively simple compared to the fully autonomous vehicles of the near future. And it’s that complexity that will make it practically impossible to figure out what finally drove your car over the edge. As problems become more complex, rather than programming computers to solve the problem directly, we have to teach them.  I know that sounds silly, but board games may help illustrate what I mean.

When Deep Blue from IBM beat the world chess champion in 1997, it was able to do so basically using brute force, evaluating around 200 million board positions every second to determine the best move. It was vastly different last year when AlphaGo, a computer built by Google's DeepMind, beat the human Go champion. Go is an ancient Chinese board game with about 2.08 x 10^170 possible board configurations. I tried to think of an example to help us get our heads around that number, but I'm afraid I'm just not that creative. Grains of sand on a beach? Not even close. Cat hairs on your favorite sweater? Nope. Number of times your weird uncle has embarrassed you during the holidays? Warmer, but still not there. If you had a million weird uncles and you were all there for every holiday in every country from the beginning of time? Still not enough.

You see my dilemma. It’s just a really, really huge number. So how then did a computer beat the human champion? The engineers let it learn by watching the moves from thousands of human matches. Then, after the AI had a rough grasp of the game, the engineers employed what’s called reinforcement learning. Basically, the computer played millions of games against itself until it was amazingly good.
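AlphaGo's real system pairs deep neural networks with Monte Carlo tree search, but the self-play idea itself can be sketched in a few lines of Python. This toy example is entirely my own illustration, not anything from DeepMind: an agent learns the simple game of Nim purely by playing millions of quick games against itself and remembering which moves led to wins.

```python
import random

random.seed(0)

# Toy self-play reinforcement learning on Nim: players alternate removing
# 1-3 stones from a heap, and whoever takes the last stone wins.
# Q[(stones, action)] estimates how good each move is for the player to move.
Q = {}

def best_action(stones):
    moves = [a for a in (1, 2, 3) if a <= stones]
    return max(moves, key=lambda a: Q.get((stones, a), 0.0))

def train(episodes=50000, heap=10, alpha=0.1, epsilon=0.1):
    for _ in range(episodes):
        stones = heap
        history = []  # (state, action) pairs, alternating between the two players
        while stones > 0:
            moves = [a for a in (1, 2, 3) if a <= stones]
            a = random.choice(moves) if random.random() < epsilon else best_action(stones)
            history.append((stones, a))
            stones -= a
        # The player who made the last move wins (+1); the other player loses (-1).
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward  # flip perspective for the previous mover

train()
# The optimal strategy is to always leave your opponent a multiple of 4 stones,
# so from 5 stones a well-trained agent takes 1.
print(best_action(5))
```

After training, the learned move table encodes the classic winning strategy of leaving a multiple of four stones, even though nobody ever told the program that rule. And, true to the theme of this post, the table of numbers doesn't explain *why* it plays that way.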

So, you can begin to see the issue. Start with an incredibly complex problem. Create a very complex AI system, and then let the AI learn on its own how to solve the problem. The result is that we have no idea why the AI makes the decisions it does. And so we may never know what happened with the car in the scenario above. Maybe your Toyota became self-aware like Skynet from The Terminator. (I realize for most of you "from The Terminator" is superfluous there. It's obviously not Skynet from Sense and Sensibility. Not saying it's a bad movie, just a little lacking in the killer robot department. Maybe in the sequel.)

The scary part of all this is that advanced AIs are being implemented in more and more settings where it would seem important to know the why behind the decisions: who gets what healthcare treatment, who is admitted to which school, who should be paroled, and, of course, which TV shows should be renewed for another season. Also, these systems are often trained using existing data sets. If the data is already tainted with human biases, we'll need some way to make certain those biases aren't carried over into the AI's decisions. Unfortunately, as AI systems become more advanced, there will be less insight into what's going on under the hood. That's why you may hear them referred to as "black boxes": we can't see inside.

All is not lost, however; at least not yet. Very smart people at Google, Microsoft, and other places are working on the problem as part of the larger issue of AI ethics. (This is a good article about the current state of AI ethics if you’re interested.)

Well, that’s it for this episode; I hope it wasn’t too frightening. Remember, if you can’t sleep tonight you can always call your mechanic (if you know their real name). As always, if you have any questions about AI or Machine Learning, or need some movie ideas, drop me a line.



Disclaimer: This Blog is for educational purposes only as well as to provide general information and a general understanding of the topics discussed.  The Blog should not be used as a substitute for legal advice and you are advised to seek additional information from your insurance carriers, Medicare and/or Medicaid agencies for additional criteria and regulations regarding these services.

Practice Makes Perfect – But We’re Not There Yet

In my last blog, we looked at demonstrating meaningful use, which includes specific objectives, milestones, and metric requirements to monitor the use of health information. We discussed the technology behind it, the certified EHR: a tool that helps demonstrate meaningful use by providing a place to document health care data that can be easily shared across disciplines, allowing healthcare professionals to "picture" the entire patient, not just fragments. All of this aims to demonstrate, through EHR data metrics, that healthcare costs are decreasing (or at least not rising as fast) and that health outcomes for the population are improving, all because we are becoming more efficient.

Any nostalgia for paper charts out there?

Hopefully not, because now we can "see" data in real time instead of having it stored in paper charts at some doctor's office or in boxes in a storage facility. We can show through data that multiple providers are not performing the same diagnostic tests, that fewer medical errors are occurring, and that readmissions to hospitals are on the decline. Can the EHR really do this?

Putting this into practice, and getting the metrics needed to demonstrate what's being asked of the healthcare arena, is not as simple as it may sound, at least not in today's healthcare setting. Until recently, health care records were paper documents kept at the individual provider's office. Providers worked in silos, not sharing with others except by fax or postal mail. Even patients were not given copies of their information until HIPAA came along and mandated that a patient be allowed to see and keep their own records. But even then, the patient had to request them and pay for them. When a patient was referred to a specialist, one still could not rely on the information getting back to the primary care provider, and what did get communicated was often just a summary (without all the details).

Even today, the EHR is not a comprehensive record for each patient. Clinical notes and tests ordered by a patient's various health care providers cannot be viewed from a single record by all providers, nor is there one patient portal where the patient can access their comprehensive medical record. A patient cannot go to Walgreens® and request their pharmacy records from CVS®. As a result, providers can be left with an incomplete picture of a patient's health and behavior. Becoming electronic does not prevent health records from being fragmented, which could affect milestones and metric requirements. In addition, the tool itself does not generate the data for the measurements; it still takes a human to enter the data into the tool. At least for now (to learn more about that tale, visit our Machine Minded blog). However, with little in the way of standards for data entry, accurate measurement is a challenge. The old saying still holds true: "garbage in, garbage out."

On the more positive side, while the EHR is not perfect, we are heading in the right direction. It is still much better than paper, and it's a step closer to being able to demonstrate meaningful use. Having a health record available in real time to providers and patients is far more useful.




Does new health IT adoption in hospitals actually impact patient outcomes?

In my last post we talked about how to carry out a successful health IT implementation at a hospital. After hospital staff accept and grow accustomed to the new processes brought by health IT solutions, a natural question follows: how effective are these solutions? In other words, how does health IT adoption in hospitals impact patient outcomes? Researchers McCullough, Parente, and Town published an article in 2016 in the RAND Journal of Economics examining exactly this question.

To study this question, they compiled IT adoption data from 4,000 hospitals as well as diagnoses and outcomes of those hospitals' Medicare fee-for-service (FFS) patients during 2002-2007. The IT solutions they looked at are the Electronic Medical Record (EMR) and Computerized Provider Order Entry (CPOE). EMRs systematically collect patients' health information, replacing traditional medical charts. CPOE allows providers to electronically enter medical orders for patient services and medications, reducing opportunities for miscommunication between disparate care providers. They studied the effect of EMR and CPOE on three types of patient outcomes: 60-day mortality, length of stay, and 30-day hospital readmission.

They hypothesized that health IT solutions positively affect patient outcomes through two mechanisms: 1) clinical decision support, and 2) information management and care coordination. Clinical decision support includes things like rule-based treatment guidelines and the prevention of drug-prescribing errors. Health IT can support information management and care coordination because many conditions require extensive monitoring and testing, generating large quantities of clinical information. Health IT solutions can capture and organize these data, expediting and improving treatment decisions. When patients need multiple specialists to work together on a treatment plan, IT solutions help physicians access their colleagues' treatment decisions, reducing communication and coordination barriers.

In studying patient outcomes, they focused on four conditions: acute myocardial infarction (AMI), congestive heart failure (CHF), coronary atherosclerosis (CA), and pneumonia (PN). These conditions were selected because they are common, mortality is a frequent outcome, and health IT can plausibly reduce medical errors and improve the quality of care.

At first, their findings suggest that health IT adoption does not affect outcomes for the median patient. As they dug deeper, however, they found that the actual impact of health IT adoption on patient outcomes is more subtle. They broke patient conditions down by severity and found that while health IT has no measurable benefit for relatively healthy patients, it significantly decreases mortality for relatively high-risk PN, CHF, and CA patients. In other words, the effect of health IT is small for low-severity patients, but the benefits of IT adoption increase with severity. Their results also show little support for the hypothesis that health IT improves quality through rules-based decision support. Rather, health IT improves quality by facilitating coordination and communication across providers and by helping providers manage clinical information.

Their findings also showed that the effect of health IT adoption varies by condition. They found no effect on AMI, and no relationship between health IT and either readmissions or length of stay. They did, however, find an average mortality reduction of approximately 200 deaths per 100,000 admissions from IT adoption. The impact is largest for PN, where IT adoption is estimated to prevent 500 deaths per 100,000 admissions, while it reduces mortality by approximately 10 deaths per 100,000 admissions for both CA and CHF.

These days more and more hospitals are adopting health IT solutions like the EMR (https://healthintegrity.blog/author/hihealtherecords/). This research shows that these solutions are most effective for patients with severe diagnoses, and that they reduce mortality by improving information management and coordination.




Jeffrey S. McCullough, Stephen T. Parente, and Robert J. Town. "Health Information Technology and Patient Outcomes: The Role of Information and Labor Coordination." The RAND Journal of Economics, vol. 47, no. 1 (2016): 207-236.


Mining Our Way to Improved Healthcare

In my previous posts, I introduced the concept of data mining and related advanced analytical techniques. I also enumerated some of the reasons organizations are looking to add these technologies and this expertise to their toolsets. In this post, I will describe some applications of data mining in the healthcare domain. The healthcare industry in this country comprises many different actors with varying objectives. Healthcare providers, health insurance plans, pharmacy benefit managers, and governmental organizations have all attempted to use data mining to manage utilization, improve health outcomes, and maintain financial stability in an increasingly complex economic landscape.

Institutional healthcare providers such as hospitals, skilled nursing facilities, and outpatient clinics have used data mining techniques to develop standardized treatment regimens that have shown success in clinical settings. Comparing the outcomes of different treatment options across patient cohorts with the same underlying indication allows healthcare providers to ascertain the most effective course of action for that disease or condition. Data mining can also help healthcare providers tailor treatment options to a patient's profile. Techniques such as cluster analysis and classification trees enable healthcare providers to build patient profiles from demographics, health history, response to prior treatments, and so on, and to choose treatments for patients in real time based on those profiles and results.
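To make the clustering idea concrete, here is a deliberately tiny sketch in Python. The "patients," the two features (age and prior visits), and the groupings are all invented for illustration; a real system would use far richer data and a vetted analytics library rather than this hand-rolled k-means.

```python
import random

random.seed(1)

# Two loose synthetic groups: younger/low-utilization vs. older/high-utilization.
# Each patient is a (age, prior_visits) pair.
patients = [(random.gauss(35, 5), random.gauss(2, 1)) for _ in range(50)] + \
           [(random.gauss(70, 5), random.gauss(8, 1)) for _ in range(50)]

def closest(point, centers):
    # Index of the nearest center by squared Euclidean distance.
    return min(range(len(centers)),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(point, centers[i])))

def kmeans(points, k=2, iterations=20):
    centers = random.sample(points, k)
    for _ in range(iterations):
        # Assign every point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            groups[closest(p, centers)].append(p)
        # Move each center to the mean of its assigned points.
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

centers = kmeans(patients)
print(sorted(round(c[0]) for c in centers))  # mean ages of the two discovered profiles
```

On this synthetic data the algorithm recovers the two utilization profiles with no labels at all, which is the essence of how cluster analysis turns raw patient records into actionable cohorts.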

Health insurance companies use data mining to manage healthcare resource utilization and provide quality care to their beneficiaries. This includes, but is not limited to, anticipating future inpatient and emergency events, lowering inpatient readmission rates, and improving outreach to beneficiaries to facilitate timely care. Each of these tasks requires collating and analyzing beneficiaries' prior claims, diagnoses, lab results, and demographic data. Health insurance companies can forecast clinical events and readmissions using predictive modeling. These predictive models can range from traditional techniques such as regression and classification to more advanced techniques such as machine learning algorithms. Health plan sponsors can use these predictive models to develop outreach programs for those beneficiaries at the highest risk of an inpatient or emergency room event. These outreach programs help insurance companies limit expensive clinical events and improve overall health outcomes by encouraging beneficiaries to seek timely care and adhere to prescription regimens.
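As a hedged sketch of what such a predictive model might look like under the hood, here is a minimal logistic regression trained by gradient descent in plain Python. The features, coefficients, and synthetic patients are all invented for illustration and bear no relation to any real plan's model.

```python
import math
import random

random.seed(42)

def make_patient():
    # Invented features: age, prior admissions, and chronic condition count.
    age = random.uniform(40, 90)
    prior_admissions = random.randint(0, 5)
    chronic_conditions = random.randint(0, 4)
    # Synthetic "truth": readmission risk rises with all three features.
    logit = -6.0 + 0.04 * age + 0.8 * prior_admissions + 0.5 * chronic_conditions
    readmitted = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    # Leading 1.0 is the intercept term; age is rescaled to keep features comparable.
    return [1.0, age / 100, prior_admissions, chronic_conditions], readmitted

data = [make_patient() for _ in range(2000)]

def predict(w, x):
    # Logistic function of the weighted feature sum: probability of readmission.
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

# Train with plain full-batch gradient descent on the log-loss.
w = [0.0, 0.0, 0.0, 0.0]
for _ in range(200):
    grad = [0.0] * 4
    for x, y in data:
        err = predict(w, x) - y
        for i in range(4):
            grad[i] += err * x[i]
    w = [wi - 0.1 * g / len(data) for wi, g in zip(w, grad)]

# Rank patients by predicted risk; the top of the list would drive outreach.
scored = sorted(data, key=lambda xy: predict(w, xy[0]), reverse=True)
high_risk_rate = sum(y for _, y in scored[:200]) / 200
overall_rate = sum(y for _, y in data) / len(data)
print(f"readmission rate: top decile {high_risk_rate:.2f} vs overall {overall_rate:.2f}")
```

On this synthetic data, the top-scoring decile shows a noticeably higher readmission rate than the population overall, which is exactly the property an outreach program needs: concentrate limited case-management resources on the beneficiaries most likely to need them.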



The applications described above are just a few of those prevalent in the healthcare landscape, but they demonstrate the immense benefit of using data to improve health outcomes in this country. The challenge is that health-related data remains very disconnected, given existing regulations and legitimate concerns about data privacy. Securely integrating healthcare data across platforms should be the overarching goal, and it will require cooperation among the various healthcare entities to implement such a solution.

Data Mining: Why the Gold Rush?

My earlier post introduced the concept of data mining and highlighted some of the domains and applications that have utilized the set of technologies and expertise for their analytical needs.  Many companies are accelerating towards including this toolset as a part of their decision making process.  A casual survey of the current landscape seems to indicate that data mining has paid large dividends for many commercial and research enterprises.

At this point, you may be wondering about the reasons for this spike in interest in data mining techniques.  The simple answer to the question is that there has been an exponential increase in the amount of data generated while the computational cost of storing and analyzing the data has decreased dramatically.

Infographic by Domo


By some estimates, 2.5 billion gigabytes of data are generated each day globally, which represents an annual increase of 23% when measured over the last 2 decades.  From Facebook “likes” to daily credit card transactions and cholesterol measurements, the span of data that is available for analysis is mind-boggling.

The argument, therefore, is that with the availability of such data, reliance on human analysts and traditional techniques is no longer adequate to analyze the staggering breadth and complexity of today's data stores. Often, information is concealed in the data and is not readily evident through legacy analytical means or human inspection. Tools such as machine learning algorithms are better suited to searching for complicated multi-factor patterns in the data without loss of objectivity. Such automated algorithms are also much less expensive than employing additional analysts and statisticians.

Finally, with the availability of such data, the competitive pressure on commercial organizations to transform the data into operational strategies, and ultimately market share, is immense. Retail companies are using data mining and machine learning algorithms to forecast product demand and tailor incentives and promotions at the customer level. Financial institutions are using similar tools to build individual credit risk profiles before authorizing lines of credit. While it is a little early to pronounce the current state of data mining and machine learning a panacea for all organizational issues, a recent survey by MIT Technology Review Custom and Google Cloud found that more than half of both early-stage and mature-stage users reported that deploying machine learning and other data mining techniques has resulted in demonstrable ROI.

Now that we know why organizations are jumping on the data mining bandwagon, it is time to delve a little deeper into some of the intricacies of these analytical tools.  The next set of posts in this blog will try to address some of the techniques and their applications.  Stay tuned!

A Play on Words

“I am very interested in telehealth. Is telemedicine the same thing?”

“My doctor’s office encourages me to sign up on their patient portal site.  Is that what you’re talking about in your blog?”

“Are you also going to talk about telemedicine?”

There were many similar questions following my first post last month. The overwhelming response was positive but a number of folks were still trying to separate out telehealth from other related topics with which they were more familiar.  I was just excited that people read it and were interested in the topic!  The truth is that at one time I also had the very same question, ‘Is telehealth the same as telemedicine?’  They are similar and the terminology is often used interchangeably.  In the simplest terms, telemedicine is a component of telehealth.

Telemedicine is the delivery of healthcare services (clinical diagnosis/services) to the patient via telecommunication technologies.  The World Health Organization (WHO) reports that telemedicine can be traced back to the 1800s with the term ‘healing from a distance’.  Really??? 1800s???

By Cqeme (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

But when you think about it, the telephone was invented in the 1800s so at that point being able to communicate with your physician over the phone to discuss a healthcare issue would be considered telemedicine.

Telemedicine can include a range of healthcare services, such as annual physicals, dental appointments, counseling, or even physical therapy, all without leaving your home. While these are the clinical services of telemedicine, they also involve some kind of education. 'Eating better and exercising every day can lower your blood pressure.' Or 'Flossing can decrease your chances of cavities.' This education received during your telemedicine appointment is, in essence, telehealth. It seems you cannot have one without the other.

Have you ever scheduled a doctor's appointment and been asked to fill out the forms on the patient portal? This site is where your patient history, test results, and appointments are tracked. It is part of your electronic health record (EHR).

* Photo by NEC Corporation of America with Creative Commons license.

(Have you read Ruthanne Romero's blog? She discusses the EHR. Go take a look!) The patient portal falls under telemedicine because it is associated with your clinical appointments, but it also tells the patient's history and may inform them of steps they can take to ensure a healthy lifestyle. Those next steps are elements that fall under telehealth: follow-up from your clinician through telecommunicated services provided by social workers, or webinars by other clinical staff (nurses or pharmacists) that go beyond the doctor-patient relationship.

Essentially, telemedicine has taken over the average doctor's office visit and expanded into the land of telehealth. It is efficient, it saves time and money, and it has only gotten better over the years. Instead of visiting your doctor's office and enduring long wait times, you can access healthcare information 24/7. It is quite convenient.

Did I answer your questions? Are they the same? Telemedicine provides the clinical services; telehealth includes those services but extends them with the education attained through telecommunication technology. They are not the same, but they are very much related.

Whose Record is it Anyway?


Welcome to Health-e Records. I'm Ruthanne Romero, RN, MSN, HCAFA, CPC, a registered nurse at Health Integrity with a specialty in Healthcare Informatics and Health/Public Policy. In the past, medical records were treated like sacred documents that only your medical provider could read or access. During an office visit, you'd try to glance over discreetly to see what was being written about you, averting your eyes quickly so you would not be caught when the provider looked your way. Thanks to Electronic Health Records (EHR), those days are gone. Your medical record is exactly that: it's yours. And since it is yours, knowing what is in it is essential.

In addition, as information becomes more portable, and accessibility to that information becomes the norm, the need for protection and uses of the data become a factor. While there are plenty of benefits around the implementation of the electronic health record, there are also potential consequences of which to be mindful. It’s important to be aware of the legislation around both the reasons for the implementation of the EHR and the impact when the rules and regulations are not followed.

Each month I'll be bringing you my thoughts on aspects of the EHR (not HER, as spell check wants) ranging from consumer basics to the importance of the medical record. We'll cover the benefits of an EHR, facts & fictions, things to look out for, and a lot more. The best way to care for your health going forward is to know your medical history.

All of my blog posts will be posted here so check back often!

What is Telehealth?

‘Skip the waiting room with online doctor visits. Sign up and earn rewards.’  This was the message that appeared on my cellphone a few months ago and now once a week since then.  When I first saw it I thought, “Wow! They are getting you at all angles with telehealth.”

If you are on any social media site, you will likely see that marketing for telehealth is everywhere. Telehealth is the use of technology to support and promote healthcare. It can range from a video conference with your doctor for a consultation to actually receiving physical therapy via the web. This new wave of healthcare has been shown to save money on both ends, for the provider and the patient. Overhead costs for the provider can decrease dramatically. Imagine not having to pay rent for a big office building, only for the best internet connection! On the patient side, it is great not to have to actually travel to the doctor's office.

In my upcoming posts, I will be bringing you more information on this complex, new method of healthcare. In addition, I will discuss questions like: What are the benefits of telehealth? How is it being used? And is telehealth for you?

My name is Andrea Lewis and I've been studying and working in the healthcare industry for 15 years. Public health is important to me. Because telehealth involves preventative care and education for the community, I would describe it as a far-reaching technology. If you've ever taken a class via the internet or viewed a training or webinar presentation about healthcare, then you've already taken advantage of this innovation. For those of you who share these interests, I am excited to spark further discussion; perhaps this will also be an outlet for those seeking additional information about telehealth. Either way, I hope you'll be back to read more!

You can check out my bio for more details about me.  I look forward to sharing my thoughts on telehealth and reading yours.
