[THIS TRANSCRIPT IS UNEDITED]

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

March 4, 1998
PUBLIC MEETING
MORNING SESSION

Hubert H. Humphrey Building
200 Independence Avenue
Washington, D.C.

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway, Suite 160
Fairfax, Virginia 22030
703-352-0091

TABLE OF CONTENTS

Morning Session

Call to Order, Introductions - Dr. Detmer

Panel on Data Quality Standards

Afternoon Session

Reports on Subcommittee Work Plans and Recommendations

Panel on Standards for Computer-Based Patient Records

Future Meetings and Agenda Topics

Reports on Subcommittee Work Plans and Recommendations


PROCEEDINGS (9:35 a.m.)

Agenda Item: Call to Order, Introductions.

DR. DETMER: I would like to call this meeting to order, and thank you all for being here. We start each day by giving introductions, because sometimes the folks in the audience change as well.

I am Don Detmer. I chair the committee.

MR. SCANLON: Jim Scanlon from HHS Data Policy Office. I am the executive director of the committee.

DR. NEWACHECK: I am Paul Newacheck. I am with the University of California at San Francisco.

DR. COHN: I am Simon Cohn. I am the senior consultant for clinical data for Kaiser Permanente.

MS. WARD: I am Elizabeth Ward from the Washington State Department of Health.

DR. HARDING: Richard Harding. I am a child psychiatrist from South Carolina.

DR. FRIEDMAN: Dan Friedman, Massachusetts Department of Public Health.

MR. GELLMAN: I am Bob Gellman. I am a privacy and information policy consultant in Washington.

MR. BLAIR: Jeff Blair, co-chair of the technical coordinating committee of ANSI's health care informatics standards board, and I am vice president of the Medical Records Institute.

MS. COLTIN: Kathy Coltin, director of external affairs and measurement system development at Harvard Pilgrim Health Care.

DR. MILSTEIN: Arnie Milstein, medical director of the Pacific Business Group on Health, and head of the clinical consulting practice at William Mercer.

MR. TIERNEY: Jim Tierney from NCQA, assistant vice president of information products.

DR. ELSTEIN: Paul Elstein, a technical advisor in the Health Care Financing Administration.

DR. IEZZONI: Lisa Iezzoni, Beth Israel Deaconess Medical Center, Boston.

DR. STARFIELD: Barbara Starfield from Johns Hopkins.

MR. VAN AMBURG: George Van Amburg, Michigan Public Health Institute.

MS. FYFFE: Kathleen Fyffe. I am with the Health Insurance Association of America. I am a new member on this committee. I also sit on the National Uniform Claim Committee and the National Uniform Billing Committee.

MR. FITZMAURICE: I am Michael Fitzmaurice. I represent information technology science at the Agency for Health Care Policy and Research.

MS. GREENBERG: I am Marjorie Greenberg, from the National Center for Health Statistics and executive secretary to the committee.

(Introductions made off microphone from audience.)

DR. DETMER: Thank you. What we have on our agenda today, in addition to what is actually in the book, is to work also on the final language for a letter that we worked on last night, and I want to thank the group that helped work on that.

What we will do, since our panel is here, is go ahead and actually get into the panel, and then deal with the letter before lunch.

Then, this afternoon, we will hear back on our work plans from the subcommittee groups, and then have a panel on the computer-based records and standards.

Our first panel this morning will start with Kathryn Coltin, who is on our committee, and then the way we have listed them in the book -- and I don't care how you other three gentlemen choose to do this -- but Paul, Jim and Arnold was actually the order we listed. So, I will call on Kathryn to get us started. Thank you.

Agenda Item: Panel on Data Quality Standards.

MS. COLTIN: Thank you. I am going to really just provide an overview, from my perspective, of what some of the most important types of data are that are needed for quality measurement and quality improvement, and then speak more specifically about some of the data quality issues that affect those various types of data.

From my perspective, the most important types of data that are used for both quality measurement and quality improvement include demographic and enrollment data, diagnosis codes, procedure codes, pharmacy data, laboratory data, service dates, and some identifiers.

That really comes up in terms of both provider identifiers -- in the case of measurements that are conducted at levels below health plans, for instance, medical groups or individual physician-based measures -- and, for some types of quality measurement, patient identifiers, particularly for measures that are based on patient reports: functional status measures or reports of experiences with care, satisfaction and so forth.

I am going to talk about the data quality problems in two categories: first, administrative data quality problems, and then survey data.

With regard to administrative data, there are four different types of data quality problems that we have observed.

The first is completeness of the information. Second is timeliness. The third is accuracy and the fourth is consistency, and I will talk a little bit about each one of those.

There are several factors that affect completeness of data, particularly within health plans. Within the completeness category, I actually think there are two subcategories.

One is just no data, no encounter data at all coming in. Then the other is missing data fields. So, you are actually receiving encounter data, but it is incomplete.

Reasons why health plans don't have any encounter data are related to a number of different factors. One of the first and most common is carve-outs: purchasers may decide that certain benefits are going to be covered through other mechanisms and not through the health plan.

Therefore, the data related to the services that are carved out are not available to the health plan in most cases.

The most popular carve-outs are mental health benefits and pharmacy benefits. So, often health plans may have information about the outpatient and inpatient experiences of members, but not have access to pharmacy data because they, in fact, have no fiduciary relationship with the pharmacy benefits managers, and the same can apply to mental health information.

This can become a problem in areas where there is a need for data that cuts across those service sectors, particularly in the case of common problems that may be managed partly in primary care and partly in specialty care.

That can be true of some of the more common mental health problems, like depression, as well.

Another area where plans may not have any data at all is that they simply have not written any provisions into their contracts to obtain the data.

That is often true in the case of laboratory data, for example, where plans may not be receiving any information about laboratory results, because they have not written that into their contract with the laboratories with whom they contract.

That means that, from a quality perspective, if you are trying to look at the extent to which abnormal tests are being followed up appropriately, you are not able to do that at all, because you don't even know who had an abnormal test result.

This can become problematic in cases of liability, where both the health plan and the provider may be the targets of a suit. Yet, the health plan may actually have no access to the information to have prevented the particular problem, the failure to follow up, for example.

Another reason why plans may not have any encounter data has to do with the payment and compensation incentives, which are built into their arrangements with various types of providers.

Under capitation arrangements, where medical groups or labs or hospitals may be paid a set amount per member, those particular providers may not then, in turn, submit detailed dummy claims or encounters back to the health plan.

So, this can become a problem in terms of even where it is written into the contracts. It is difficult to monitor, sometimes, whether you are getting everything.

So, some providers may be complying, but they may not be sending everything. So, you don't know, are you getting 100 percent of the encounters, are you getting 50 percent of the encounters. You are getting something, but you really don't have information, many times, through which to assess how complete it is.

One of the ways that it is often dealt with is feeding back information to the medical groups and saying, you know, your utilization rates look really low and we wonder why we are paying so much, in which case, then, sometimes the data will be forthcoming.

If you don't have feedback of data, you often can go on thinking that you have complete data, and in fact, not have complete data.

The other problem with completeness that I mentioned is missing data fields. That, too, can be related to payment incentives. What is the payment incentive to provide complete information about all of the diagnoses that a patient may have?

In another case, what we frequently will see is bundling of procedure codes. So, one of the problems we have with reporting childhood immunization rates, for example, is that they will get bundled into a well-child visit code, rather than itemized individually on a claim.

Therefore, it becomes difficult to use claims data to report on immunization rates.

We have found, in our organization, clear differences in the number of diagnoses that appear on encounters based on the payment mechanisms.

So, for the proportion of the provider network that is compensated on a fee-for-service basis, we are much more likely to see multiple diagnoses reported on the professional services claim, whereas in medical groups that are capitated, we will very frequently only receive the principal diagnosis and no secondary diagnoses.

That can, of course, be a problem, not only in terms of identifying patients who may be appropriate targets for quality improvement interventions or for disease management programs and so forth, but also in terms of looking at case mix and whether or not payment levels are adequate to provide necessary services for the population.
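
(A minimal sketch, in Python, of the completeness check this observation implies: compare how many diagnoses per encounter arrive under each payment arrangement. The field names and figures are invented for illustration.)

```python
# Sketch of the completeness check this observation implies: compare
# diagnoses per encounter under each payment arrangement. A sharp gap
# suggests under-reporting, not a true case-mix difference. Field
# names and figures are invented.
from collections import defaultdict
from statistics import mean

encounters = [
    {"payment": "fee_for_service", "diagnoses": ["250.00", "401.9", "272.4"]},
    {"payment": "fee_for_service", "diagnoses": ["493.90", "477.9"]},
    {"payment": "capitated", "diagnoses": ["250.00"]},
    {"payment": "capitated", "diagnoses": ["401.9"]},
]

counts = defaultdict(list)
for e in encounters:
    counts[e["payment"]].append(len(e["diagnoses"]))
for payment, n_dx in sorted(counts.items()):
    print(payment, round(mean(n_dx), 2))
```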

Another reason for missing data fields is lack of EDI capability in a lot of managed care organizations.

If, in fact, they are processing paper claims, there is a manual cost of data entry that may lead plans not to enter all the data fields that are coming in.

The data may be arriving at the plan in a complete state but, in fact, what gets into the plan's automated systems may be only a portion of the information that is actually coming in, depending on the extent to which their systems are EDI capable.

Here, again, by not feeding data back to those who provide it, these kinds of missing data problems are more likely to persist.

When you start feeding data back, if you are giving case mix adjusted utilization information back to physicians, that creates an incentive for them to give you more complete information about the case mix of their population, so that they can explain or justify why their utilization rates may be what they are.

Likewise, in terms of procedure information or compliance with performing certain kinds of procedures when recommended.

If that information is incomplete but it is being fed back, it creates an incentive to try to submit more complete information next time, so one does not appear to be out of compliance with recommended standards.

The next problem that I talked about was timeliness. Here, again, payment incentives come up because in a situation where cash flow is affected, you are going to receive the data in a much more timely way than you are under a prospective capitation payment, where nothing really is dependent on how quickly the provider gets that information to the health plan.

Most of the adjustments are made retroactively, if at all. Therefore, they will take their time getting that information out, and that can be a problem.

In and out of area issues also can affect timeliness. We find that in the case of services that are provided outside the service area of an organization, most of those providers will tend to bill the patients directly. Then there is this delay in terms of the claim then getting forwarded from the patient to the health plan. So, it adds time to the processing.

That also can be true in areas where patients use non-contracted providers as opposed to contracted providers, and that is very common under point of service plans.

So, here again, the claim may go directly to the patient and then from the patient to the health plan, and that creates a delay.

Also, EDI capability is another factor in timeliness, how quickly the data can be entered. We have read recently about a number of health plans that have had problems with backlogs of processing claims, and the problems that that has actually created financially for some of them.

I think there is clear evidence out there that timeliness is affected by how quickly claims are actually being entered into the system and paid.

A lot of the complex logic, even, for processing these claims can have an impact on timeliness. It will create variable percentages of pended claims that will require manual review. That, of course, then slows down the information processing as well.

Factors that affect accuracy of the data elements that are received are also, again, payment arrangements. You are starting to hear a theme here, I think.

This is true in fee for service. It is true in managed care. I think that issues like case mix adjustment can lead to code creep.

I think that medical necessity screens can lead to entering the information that you know is more likely to pass the screen and get the service paid for. So, those are issues that can affect accuracy of data elements.

A common practice of trying to mask sensitive diagnoses occurs in managed care, just as it occurs in fee for service. I think that is a fairly common practice.

The other factor that we see quite commonly is rule-out situations, where the diagnosis will be entered but it is really rule out, and it is there perhaps to justify the ordering of a particular test or procedure.

What that does is create problems in terms of mis-identification of patients who maybe should receive certain services when, in fact, they are not patients who should receive those services. They were falsely identified because of a rule-out diagnosis.

Consistency problems are generally due to lack of standards. I think there is a lot of hope that the HIPAA standards will really help to improve this situation.

With regard to administrative transactions, there is a lack of standards regarding which data elements are actually captured and which are included in the data sets that the plans maintain.

There are inconsistencies in data definitions and there are clearly inconsistencies in code sets. A lot of plans have home grown codes, and those particularly affect the treatment-type codes and some of the revenue codes as well.

Provider IDs are another area where there are a lot of inconsistencies. In my organization, we have been through four mergers in the last 10 years. Not one of those plans has used consistent provider identifiers.

So, when you are trying to build a single provider directory or provider file, you get into all of this matching algorithm situation in trying to say, well, this Dr. Smith is really the same Dr. Smith who is known by these other three identifiers in this other file.

So, when you are trying to produce quality measurement data at the provider level, it can become quite messy when you don't have standard identifiers.
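
(A minimal sketch of the kind of matching heuristic being described, folding provider records that carry different legacy identifiers into one directory. The field names and the match rule are illustrative assumptions, not a production record-linkage algorithm.)

```python
# Sketch of a provider-matching heuristic: link records for the same
# physician who carries different identifiers in files inherited from
# merged plans. Field names and the match rule are assumptions.

def name_tokens(name):
    """Crude normalization: uppercase, drop punctuation and titles."""
    cleaned = name.upper().replace(".", " ").replace(",", " ")
    return frozenset(t for t in cleaned.split()
                     if t not in {"MD", "DR", "JR", "SR"})

def same_provider(a, b):
    """Match when name tokens agree and license or practice ZIP agrees."""
    if name_tokens(a["name"]) != name_tokens(b["name"]):
        return False
    return (a.get("license") is not None
            and a.get("license") == b.get("license")) \
        or a.get("zip") == b.get("zip")

def build_directory(records):
    """Fold legacy records into one directory, keeping every legacy ID."""
    directory = []
    for rec in records:
        for master in directory:
            if same_provider(master, rec):
                master["legacy_ids"].add(rec["plan_id"])
                break
        else:
            directory.append({**rec, "legacy_ids": {rec["plan_id"]}})
    return directory

legacy = [
    {"name": "Smith, John A., MD", "license": "MA1234",
     "zip": "02115", "plan_id": "A-001"},
    {"name": "JOHN A SMITH", "license": "MA1234",
     "zip": "02115", "plan_id": "B-887"},
]
print(build_directory(legacy))  # one entry carrying both legacy IDs
```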

That is also true for patients. A number of the HEDIS measures require certain lengths of continuous enrollment for patients to be included in the measures.

Without a consistent patient identifier, when patients change health plans, or change coverage, when they are perhaps the subscriber in one situation or a dependent under another subscriber, after they are married or some other situation may change, they can have totally different identifiers, and then it becomes a problem, trying to link those patients.
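
(A minimal sketch of the continuous-enrollment test such HEDIS measures require, assuming the enrollment spans have already been linked to one person despite changing identifiers. The tolerance of a single gap of up to 45 days is an assumed specification detail, not the published rule.)

```python
# Sketch of a continuous-enrollment test of the sort HEDIS measures
# require, run after enrollment spans have been linked to one person.
# The one-gap-up-to-45-days tolerance is an assumed specification.
from datetime import date

def continuously_enrolled(spans, start, end, max_gap_days=45, max_gaps=1):
    """spans: (first_day, last_day) coverage tuples for one member."""
    spans = sorted(s for s in spans if s[1] >= start and s[0] <= end)
    if not spans or spans[0][0] > start or spans[-1][1] < end:
        return False
    gaps = 0
    for (_, prev_end), (next_start, _) in zip(spans, spans[1:]):
        gap_days = (next_start - prev_end).days - 1
        if gap_days > max_gap_days:
            return False          # one long break disqualifies the member
        if gap_days > 0:
            gaps += 1
    return gaps <= max_gaps

spans = [(date(1997, 1, 1), date(1997, 6, 30)),
         (date(1997, 7, 15), date(1997, 12, 31))]   # a 14-day gap
print(continuously_enrolled(spans, date(1997, 1, 1), date(1997, 12, 31)))
```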

Code sets I mentioned. That is also a problem, even in plans that have computerized patient records. We have three different computerized patient record systems in our organization, and they are inconsistent.

They are inconsistent in what data elements are captured, inconsistent in what code sets are used.

Turning for just a moment to survey data, I think the biggest problem that we have observed with regard to the surveys that are done with members of managed care organizations is response rates and response bias.

There has just been tremendous over-surveying of members of managed care plans. That surveying is done at the level of individual physician offices or medical groups.

It is done by health plans. Health plans do surveys of provider and visit satisfaction, member satisfaction, and they do service or condition specific surveys in connection with quality measurements.

Employers are surveying their employees about their satisfaction with the different health plans they are in.

Business coalitions and benefits consultants are surveying them. You can get on the internet and see, whether it is the Sachs Group or others. You know, they have done their consumer satisfaction surveys in different markets.

The government, as you are well aware, does an awful lot of surveying, at the federal, state and local levels.

Pharmaceutical companies are increasingly doing member satisfaction surveys of members of different managed care firms.

Care Data Reports is probably one of the most common, that people may be familiar with. Of course, there are other market research firms that are out there doing these sorts of surveys as well.

The poor patient is being bombarded with surveys. That has had a real effect on response rates. Clearly, there are also confidentiality concerns that can affect response rates, particularly as it applies to condition or service specific surveys.

So, how satisfied were you with the mental health services that you received. Patients can be quite upset receiving a survey and saying, who knew I received mental health services.

In the case of particular conditions, where one might be looking at outcomes -- health status outcomes, functional status outcomes and satisfaction with care experiences for asthmatic patients, people are concerned, how did you know I had asthma.

There are clearly issues there that affect not only people's willingness to respond to these surveys, but in fact their anger over receiving them as well.

So, those are all issues that affect response rates and, therefore, create problems of response bias.

The most common survey that is used for comparing health plans -- the NCQA member satisfaction survey -- had an average response rate last year in the low 40s.

Most reputable survey research firms wouldn't even report out survey results with response rates at that level.

To make matters worse, the range of responses was from 17 to 65 percent.

An analysis of non-respondents in a pilot, where telephone follow up was used, showed that non-respondents were more likely to be male, more likely to be minority, more likely to be healthy, and more likely to be more satisfied.

The results that are being reported based on these low response rates can actually create mis-impressions about care.
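
(A minimal sketch of the correction such a non-respondent analysis invites: reweight respondents so that strata that under-respond count at their true population share. All figures are invented.)

```python
# Sketch of the correction a non-respondent analysis invites: reweight
# respondents so under-responding strata (here, men) count at their
# true population share. All figures are invented.
population = {"male": 0.48, "female": 0.52}    # enrolled-member shares
respondents = {"male": 120, "female": 280}     # returned surveys
satisfied = {"male": 0.80, "female": 0.62}     # satisfied among respondents

n = sum(respondents.values())
raw = sum(respondents[g] / n * satisfied[g] for g in population)
weighted = sum(population[g] * satisfied[g] for g in population)
print(f"raw {raw:.3f} vs. weighted {weighted:.3f}")  # 0.674 vs. 0.706
```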

A lot of these data quality problems that I have mentioned translate into quality measurement and quality improvement problems that can be quite important.

The incompleteness of data leads to an inability to assess some very important aspects of care, like follow-up of abnormal laboratory results.

It can create inaccurate measures and inaccurate reports that, then, undermine the credibility of the data, and therefore the willingness of providers to act on those data for quality improvement purposes, even though real quality problems may exist.

They create a dependence on chart reviews, and those have attendant higher costs. They create higher risks of threats to confidentiality when, instead of just looking at the very limited data elements that one needs to create a quality measure, you have to flip through the medical records trying to find particular data elements in order to assess quality of care.

It leads to an inability to target needed interventions to members who would benefit from those interventions, because the data isn't there to identify the target populations.

That is particularly true on diagnostic information.

Timeliness really can create problems with regard to the inability to implement things like concurrent reminders in a timely way.

By the time you are able to tell a woman who delivered a baby that she should have a post-partum visit within eight weeks, you are already past the time that she should have had the visit, because the claim lag is too long.

The ability to improve compliance with post-partum care is affected by timeliness of claims data, when they are the only available source to do that sort of thing.
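
(A worked version of the claim-lag arithmetic just described; the dates and the ten-week lag are invented.)

```python
# Worked version of the claim-lag arithmetic: a post-partum reminder is
# only actionable if the delivery claim arrives before the eight-week
# window closes. The dates and the ten-week lag are invented.
from datetime import date, timedelta

delivery = date(1998, 1, 5)
claim_received = delivery + timedelta(days=70)       # ten-week claim lag
reminder_deadline = delivery + timedelta(weeks=8)    # eight-week window
print(claim_received <= reminder_deadline)           # False: too late
```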

Accuracy problems lead to: misreporting of quality measurements; inclusion of patients in denominators to whom the standards of care that you are measuring against don't even apply;

Misdirection of quality improvement interventions and undermining of patient confidence in those situations where they say, well, I don't have this problem, why are you sending me this;

Mis-allocation of improvement resources to patients who really don't need those resources and wouldn't necessarily benefit from them.

Consistency problems lead to the inability to make accurate performance comparisons and to appropriately identify quality improvement opportunities, and the inability to benchmark best practices and to learn from others who are doing better.

If the data are not consistent and not comparable, it really impedes one's ability to improve.

It also can affect decisions inappropriately, based on differences in measurement that are not really differences in performance. So, that can be a problem, whether you are talking about choice of a health plan or the targeting of QI resources.

These inconsistencies can make it look like one plan is doing better or worse than another, when in fact what you are observing is merely a measurement difference and not a performance difference.

I think that some of the quality problems that stem from survey data problems really are, you know, the common ones.

They are not unique to quality measurement or to health plans, but they include things like an inability to generalize to the universe from which the sample is drawn, possible overstatement of problem rates or dissatisfaction if, in fact, response bias is leading to underreports by those who are healthier or more satisfied; possible overstatement of case mix in those situations as well. Thank you.

DR. DETMER: I thank you for that overview. Just before we call on our three guests, what this sort of has to do with our business is, I think the issue that I hear, at least, emerging from a number of the quality discussions that I have been hearing, in some instances from members of the panel this morning, that we ought to be moving, if we can, to some standards for standards.

Are there, in fact, a set of both dimensions and then items within those, on which we could perhaps help with some metrics development?

We have a subcommittee on data needs, standards and security as part of the HIPAA legislation.

I think, at least from my own personal view, that is really a central kind of question. So, after we have heard from you, I think we will want to dialogue on a variety of issues, but I certainly personally will be interested in that one. So, thank you. Paul?

Agenda Item: Panel on Data Quality Standards - Paul Elstein, HCFA.

DR. ELSTEIN: Thank you. I would like to talk a little bit about what HCFA has planned to do with HEDIS and two other surveys to add to Kathy's list of numerous surveys, talk for a little while about the validation of the HEDIS data and, finally, maybe some next steps.

Kathy and Arnie and Jim and I are part of the massive collaboration -- and HEDIS is a collaboration of partners: consumers, plans, purchasers, experts, and others.

When HEDIS first began, the first really two iterations of HEDIS -- actually three -- were for the commercial population.

HCFA felt that it was important initially to get the Medicaid population. What came of that was something called Medicaid HEDIS.

It came through a grant from the Packard Foundation and work with NCQA and a wide variety of people, and states and others.

In 1995, we felt it important to add Medicare measures because the HEDIS measures covered the 64 and under population only at that point.

Our goals were three, and in looking at the HEDIS data, they may be compatible, they may not be totally compatible.

The first was plan-to-plan comparison: allow consumers to choose among plans and, perhaps where appropriate, between managed care and fee for service.

Second was for what we call external review. We would give the data to the peer review organizations in each state.

We would give it to the HCFA staff, both in the regions and in central office, who do monitoring every two years with each plan, and see where there are areas for improvement.

The third area was for internal quality improvement, have the plans use the data themselves for better quality of care.

I want to really make it clear that this was the first time that HCFA or Medicare measures had been applied. We had about 32 or 33 measures, not per plan, primarily per contract.

Even further, we found 15 cases where there were large Medicare contracts that covered geographically dispersed areas.

In those 15 cases, we broke the contracts themselves down into contract markets. So, for example, a large plan in Texas had a contract that covered Houston, Dallas, San Antonio, and the surrounding counties.

We asked them to report the HEDIS measures for those three areas separately.

On the quality of care side -- I would like to focus primarily on that -- we asked each contract to report four measures: breast cancer screening, which is defined as the percentage of women aged 52 through 69 who received a mammogram in one of two years; beta blocker treatment, for patients who were discharged from a hospital with acute MI and received a beta blocker prescription; eye examinations, actually retinal examinations, of patients with diabetes on an annual basis; and the fourth measure was an ambulatory visit after treatment in a hospital for mental illness.

In other words, the patient received an ambulatory visit within 30 days after the discharge.
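
(A minimal sketch of turning the first of these definitions into a reported rate. The field names and the mammography procedure codes are illustrative assumptions; the actual HEDIS specifications are far more detailed.)

```python
# Sketch of turning the first definition into a reported rate: women
# aged 52-69 (denominator) with a mammogram claim in either of two
# years (numerator). Field names and the CPT-4 code list are assumed.
MAMMOGRAPHY_CODES = {"76090", "76091", "76092"}

def screening_rate(members, claims, year):
    denominator = {m["id"] for m in members
                   if m["sex"] == "F" and 52 <= year - m["birth_year"] <= 69}
    numerator = {c["member_id"] for c in claims
                 if c["member_id"] in denominator
                 and c["year"] in (year, year - 1)
                 and c["procedure_code"] in MAMMOGRAPHY_CODES}
    return len(numerator) / len(denominator) if denominator else 0.0

members = [{"id": 1, "sex": "F", "birth_year": 1940},
           {"id": 2, "sex": "F", "birth_year": 1930}]
claims = [{"member_id": 1, "year": 1996, "procedure_code": "76092"}]
print(screening_rate(members, claims, year=1997))  # 0.5
```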

We asked each contract market to report these and a wide range of other data to NCQA in summary form by last June 30.

We are very pleased that, in spite of numerous problems with the data, with entering the data, with gathering the data, nearly every plan made the deadline.

We also asked each contract to report what we call raw data, the numerator and denominator data, to NCQA last fall.

So, at that point we had a large amount of potential data. We felt it critical, though, to have a validation of the data, really for three reasons.

One was to help ensure the accuracy of the data, which is pretty obvious. The second, which builds on that, is to facilitate consumer comparison among plans.

Being a consumer, choosing between one HMO and another, you want to be pretty sure that those data are accurate.

Third -- and this was a byproduct, but it seemed to have worked out pretty well -- to actually improve some of the HEDIS specifications or the measures.

So, what HCFA did was contract with the Island Peer Review Organization, which is also the New York PRO, to lead the validation.

The New York PRO subcontracted with two other PROs -- Minnesota and Wisconsin -- and worked with Medstat as well.

They looked at the four quality measures that I mentioned previously, plus a utilization measure called frequency of selected procedures.

That measure looked at 10 procedures, such as angioplasty, total hip replacement and CABG, to look at total utilization.

What our PRO did was two things, primarily. One was to do a baseline survey of the 278 -- I think -- contract markets.

They got a wide range of answers to some questions, but they did a more in-depth look at 60 contracts which were actually represented by 75 markets.

That sample, while 60 contracts or 75 markets is about one-quarter of the total contracts reported in HEDIS last year, was stratified to somewhat over-represent the larger contracts.

So, while we had 25 percent of the contracts audited, it really represents about two-thirds of the Medicare beneficiaries.
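
(A worked version of this sampling arithmetic: a stratified audit can cover about a quarter of contracts yet two-thirds of the beneficiaries. The enrollment figures are invented to reproduce the stated proportions.)

```python
# Worked version of the sampling arithmetic: a stratified audit can
# cover about a quarter of contracts yet two-thirds of beneficiaries.
# Enrollment figures are invented to reproduce the stated proportions.
small = [25_000] * 203    # 203 unaudited contracts, assumed enrollment
large = [130_000] * 75    # the 75 audited (larger) contract markets
print(75 / 278)                                 # ~0.27 of contracts
print(sum(large) / (sum(small) + sum(large)))   # ~0.66 of beneficiaries
```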

What our PRO did was spend several days at each contract that was part of the outside audit, looking at their information systems -- how did they collect the data, what kind of systems did they have to report the data -- and basically analyze the capabilities of each of these 75 or so contract markets.

It is still important to realize that, a, 75 is not 275 and, b, since there was an over-sampling of the larger plans, although not every larger plan was audited and some smaller plans were audited, the data and the analyses might not be representative of all the contracts reported to HEDIS.

I received the draft report from our PRO yesterday. I have not looked at it, but I will look at it today and others in HCFA will look at it in the next couple of days.

We have been briefed by our PRO informally. Those of you who read Medicine and Health this week -- and I had not read it until 15 minutes before this meeting, but I knew what was in there -- know that there is some discussion of our PRO audit and some of the findings.

Once we have the full report, we obviously will be in a better position to talk about it. But this is very clear.

As anyone in this room knows who is in managed care, just like if you see one HMO you have seen one HMO, if you have seen one HMO system, you have seen one HMO system. Each plan has different capabilities in information collection.

Some are far more sophisticated than others. That is number one. Number two, none of these contracts had ever done Medicare HEDIS before and many had never done HEDIS before. Their commercial employers had not required it, and the states had not required it for the Medicaid population.

Third, our PRO recognized a lot of people at the plans are working under very difficult circumstances to try to get good data.

I think fourth is something that Kathy talked about, some of the inherent limitations with no good encounter data in many contracts.

We are planning to do a more intensive validation this next year. We will be looking, not at the 75 contract markets, but at everyone in depth.

In other words, rather than looking at a quarter of the contracts, we are looking at all of them on an on-site basis.

We will be looking at a sample of medical records. The previous audit did not look at any medical charts. So, we looked at how they got the data, how they collected the data, but not at the charts themselves.

Third, we are proposing a pilot of source code information. Fourth, and maybe this is at least as important as the other three, this audit will be done not after the data come to NCQA, but prior to submission of the data.

Our PRO and its potential contractors will be visiting the plans beginning in late April. The data that goes to NCQA will have been audited, validated data.

Two surveys I would like to talk about and then maybe close on the future.

At the same time that HCFA required all Medicare plans to report HEDIS, we told them that we were going to require them to use the CAHPS survey, which was put together by AHCPR, Harvard, RTI and RAND.

What this survey may do is overcome some of the weaknesses that Kathy mentioned, that many plans do their own surveys of beneficiaries.

This one is a standardized survey. It is going to be administered by a third party under contract to HCFA.

The plans received notice maybe two weeks ago that this CAHPS survey was going out to 600 members per contract. The surveys have been mailed out; I believe they began going out last week.

We are aiming at a 70 percent or higher response rate, which is far higher than the 40 percent mentioned for the other surveys. However, we do know that Medicare beneficiaries are more likely to answer surveys than people under 65.

So, we are cautiously optimistic. It is a mail survey followed by a second mail survey -- we will send a letter -- followed by a telephone call.

In May we are doing another survey. It is called Healthy Seniors. Now, Healthy Seniors is a HEDIS measure. It is actually a misnomer, which I am trying to point out to our people.

While it originally was only for the 65 and over population, it now is going to be fielded for the Medicare disabled as well.

It is similar to the CAHPS survey, in that it is a mail survey followed by a letter followed by telephone. One thousand members per contract will receive it, randomly selected.

It is basically built upon the SF-36 -- in other words, functional status and mental health -- but a modified SF-36.

They will look at a patient at baseline. Those patients who are still in the plan two years later will receive the same tool again.

So, we will be able to look at outcomes, whether their functional status has improved, deteriorated, or remained the same over two years.
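
(A minimal sketch of that two-year outcome classification, comparing a follow-up score on an SF-36-style scale with baseline. The five-point threshold for a real change is an assumption, not the HCFA rule.)

```python
# Sketch of the two-year outcome classification: compare a follow-up
# score on an SF-36-style 0-100 scale with baseline. The five-point
# threshold for a real change is an assumption, not the HCFA rule.
def classify(baseline, followup, threshold=5.0):
    delta = followup - baseline
    if delta >= threshold:
        return "improved"
    if delta <= -threshold:
        return "deteriorated"
    return "remained the same"

print(classify(62.0, 70.0))   # improved
print(classify(62.0, 60.0))   # remained the same
```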

The Healthy Seniors measure will be going out, as I said, May 15. What we are trying to do is avoid having it follow too closely after CAHPS.

That is because, as Kathy said, beneficiaries get survey after survey. We are trying to make sure that in every contract, except for the smaller ones, no Medicare beneficiary receives both the CAHPS and the Healthy Seniors surveys.

Because the number is 1,000 in Healthy Seniors, for some of the smaller contracts that may not be possible.

Finally, on this, everything I have said applies only to managed care. We want to do very similar activities in fee for service.

We have a contract with Health Economics Research and New England Research Institute to do what we call fee for service performance measures. Lisa, I believe, is involved with that as an advisor.

DR. IEZZONI: Marginally involved.

DR. ELSTEIN: Marginally involved.

DR. STARFIELD: Clarify what you mean by fee for service. There is fee for service managed care. Do you mean indemnity? What do you mean?

DR. ELSTEIN: What this will actually be, because unit accountability is so difficult in fee for service, the pilot will look at two things. One is the geographical area.

We are going to look at certain areas using Medicare claims data in the fee for service side versus the HEDIS data.

In four or five areas we are looking at large group practices that have agreed to take part in this project.

I think probably the greatest difficulty is trying to find the unit of accountability. Is it the physician? Is it the small group?

In managed care, we hold, or HCFA holds, the plan or the contract accountable for all the patient's care. We know that in many managed care plans they use a number of group practices, and many physicians practice in multiple plans, and also see fee for service patients.

The pilot is going to be looking primarily at some large group practices and some geographic areas to see how we can adapt performance measures and the two surveys to the non-managed care setting.

Finally, the committee on performance measurement met the last couple of days -- Arnie, Kathy and I are on it, and they know more than I know -- but we are going through another round of HEDIS this year.

While the Medicare measures that the plans are going to report for 1998 are very similar to the 1997 measures, we do foresee major changes down the road in the HEDIS measures.

We see that, when Medicare+Choice gets finalized, with provider networks and other kinds of arrangements, many, many more contractors will have to report HEDIS measures.

I think we are working to improve the data that are reported, both through the encounter data initiatives that were laid out in HIPAA and also through our ongoing activities.

I think really, in summary, it is important for us to know this was on a trial basis last year. We have learned a lot from it, the plans have learned a lot from it. I think regardless of what we learn from the audit, we can do much better.

We do plan to make these data available to the public soon on a summary basis, and we feel that it is necessary to put down the caveats as well as relevant issues as in any kind of public reporting. Thank you.

DR. DETMER: Thank you, Dr. Elstein. Mr. Tierney.

Agenda Item: Panel on Data Quality Standards - James Tierney, NCQA.

MR. TIERNEY: Good morning. NCQA is often recognized, sometimes accused, as being the one responsible for shepherding HEDIS and the evolution of HEDIS.

We do this because it is consistent with our mission to provide quality information to the marketplace. Specifically, we try to aim our efforts at employers and consumers.

We think that these efforts will help drive quality improvement and create a dynamic which rewards accountability in the managed care arena.

NCQA has some experience in looking at auditing HEDIS performance information in the industry. We started that activity really back in 1995, when we piloted a report card initiative, where 21 health plans came forward and produced HEDIS results, and invited NCQA in to audit that data.

The findings of that were published. Some of the issues that were common in 1995 we still see today.

Since then, we have broadened our scope and have done over 100 HEDIS audits between 1995 and today. We also collected HEDIS information from health plans over the last three years and, last year, collected over 300 health plans' HEDIS submissions, as well as the work we did with HCFA collecting the Medicare HEDIS submissions.

We have seen a need for standardizing the approach used for auditing HEDIS measures. It has sort of taken on a life of its own over the last three years because of demand in the marketplace, to use this information for comparison purposes and to make informed choices.

So, NCQA convened a panel a couple of years ago to try to work on standards and guidelines that would be used to audit HEDIS information.

We have since put together those standards and guidelines and a methodology to use when auditing those data, piloted over the summer, and have started licensing and certifying auditors to do this work that is starting to take place in 1998.

We also have a lot of experience, in that we have a technical group that supports health plans as they try to put together their HEDIS measures. It is our HEDIS users group.

They deal with interpretation issues, clarifications to the standards, as well as deal with some of the common issues that are out there in the marketplace, and try to guide health plans in how to manipulate their systems so that they are within the intent and the specifications of the HEDIS measures, knowing that we have imperfect systems out there, and they are trying to manipulate those systems to meet a certain set of standards.

All that said, we are starting to see audits are going to be required. We are seeing that the state of Texas, for example, is requiring audited HEDIS data to be submitted in 1998.

HCFA is requiring it for a subset of measures. Business coalitions, such as CCHRI, New England HEDIS Coalition and others, have been requiring audits of their HEDIS data, and using those HEDIS data to make comparisons between health plans within certain markets.

NCQA is moving toward incorporating performance measurement into our accreditation standards process. We feel it is important not only to use those HEDIS measures, but it must be audited HEDIS measures, in order to make an accurate assessment of health plan performance.

We are using the experiences we have had over the last several years to try to correct some of the problems that we see out there in the industry.

Last year we did a study on the data we collected from the over 300 health plans. We are seeing a striking variation between the performance of health plans.

Some of those have been audited data and we feel that the variation is really based on performance.

Other issues, though, and other variations are systemic to the data issues that we are seeing out there in the industry, which creates a problem for anyone who is trying to make comparisons or trying to do quality improvement activities.

The issues that we are finding really fall into two categories. One, there are problems with data quality or data collection issues. Kathy spoke to some of those experiences, and I can talk briefly about ours.

Others are with the interpretation of specifications. HEDIS, we like to think, is fairly straightforward. The reality of the situation is that, in some cases, these are very complex measures and very complicated to implement.

There are issues that we are finding with interpretation that erode the comparability of measures between plans.

So, part of the audit activity that we have created not only looks at data quality issues, but it looks at interpretation issues and algorithm compliance, so that we are measuring apples to apples, or create a dynamic where we are able to do that.

The HEDIS audit methodology that we have created is an integrated program that looks at the processes and data capture mechanisms that a plan uses plus, once that data comes in the door, how it is manipulated to conform with HEDIS specifications and to report those HEDIS results.

What we have found is that the validity and the reliability of the clinical data that is captured within the health plan is highly variable.

Sometimes -- frequently -- there is a conflict between different data sources that describe the same event.

For example, we have done studies that look at administrative data sets and compare them to medical records, and we are finding that there is a contrast between those two data sources.

We looked at episodes of care where we have taken information from different data sources, whether it is from a carve out vendor as well as some of the medical administrative data that they would have on site, and there are differences in continuity there.

We also have shown -- we have had studies which show that often claims don't reflect the clinically important patient conditions. So, historically, the claims data has been a payment mechanism and has not paid as close attention as we would like to clinical information which is important for managing care.

That has been problematic with the development of measures as well as the implementation of measures.

Some of the problems that we have seen that I think are systemic with the system in general -- and Kathy has touched on some of those -- but payment mechanisms such as capitation often affect data capture rates.

We are also finding that oftentimes health plans do not even measure the data completeness level. So, sometimes it is unknown whether they are playing with a full deck or a partial deck of data.

Public health services that are often available within certain regions are not reflected in the health plan's information system.

We have issues of trying to integrate that data. The patient identifier is often not the same. Provider information is often not the same. So, it is problematic for health plans to try to integrate that information into their administrative data set, so that they can do appropriate measurement.

There are few incentives for providers to submit data -- timely data. That, I think, is more problematic in the encounter arena, where providers are capitated for services.

There is a sizeable data entry backlog which Kathy mentioned. Payment arrangements such as global fees often affect the specificity of the data collected.

With that, we find that there are mapping issues where global fees oftentimes promulgate the use of home grown codes, so that they can pay those services accordingly.

Mapping those proprietary codes to standard industry codes is often problematic, and you lose some specificity.

We have found in our auditing that claims and encounter data often can be processed with missing information because the focus is on payment rather than clinical information. That is problematic for obvious reasons.

Claims can be processed incomplete, or using dummy or proprietary codes. We are finding that, because of space considerations, limited historical information is often kept on file.

We are finding that disc space is a concern for some health plans and that, in other situations, where mergers and acquisitions and system conversion activities are occurring, only recent information is being converted; the legacy systems and the data associated with them oftentimes are not converted and are just stored on tape.

This makes the data very inaccessible for performance measurement when you look back historically.

We have found that there are some data quality issues that are out there in the industry. Claims processors often have too much discretion and inadequate oversight in keying data.

We are finding that benefit structures often impact the quality of information received and, in some situations, providers are being put in the position of being a patient advocate.

I can relate anecdotally a story that is personal for me. As I slowly bring my wife from the fee for service to managed care arena, she had reason to visit her podiatrist for a condition.

Her podiatrist told her that the service that he would have to perform really should be done at the PCP level. So, he was going to code it differently so that it would be covered under a specialty visit.

When she came back and said, oh, my podiatrist is so great, this is what he has done, I just started to shake my head and wonder what other types of issues are out there.

It is not far-fetched to imagine -- I also am the father of twins. My wife went through several sonograms for her pregnancy.

It is not inconceivable that a patient could go to a provider and say, I want more than just the one sonogram that is covered under my benefit, can I get another one -- and that visit is now coded as a complicated pregnancy versus a non-complicated pregnancy.

If a health plan is doing a study on outcomes on high risk pregnancies, they are getting faulty information to do that measurement.

We talked a little bit about proprietary codes already. We are seeing that there is insufficient coordination with vendors, and this addresses the carve-out issues specifically with pharmacy and mental health.

Oftentimes the health plans will delegate the management of those services, as well as the reporting of those services.

That is problematic when you are trying to do measurement. What you really need is a repository that has data on all services that are being rendered for a particular patient.

There are problems just with the operational aspects of what seems a fairly straightforward process of a service rendered.

It is coded on a form. It is given to a data entry clerk and it is entered into a system. What we are finding in some of these forms is that there is a limited number of diagnosis and procedure codes that are available to be entered.

Again, the forms are focused on payment rather than quality. We have seen several forms out there in the industry where a provider is given a list of diagnosis codes that are available to code to. For ease of look-up, they are listed in alphabetical order.

When those forms then come to the health plan for keying, the first code that is entered is usually the primary diagnosis and all subsequent codes are secondary.

So, we see a lot of primary diagnosis codes beginning with the letter A. Again, that is problematic when you try to do any types of measurement.
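
(A minimal sketch of a data quality screen for the artifact described: count the first letters of primary diagnoses and look for a spike at the top of the alphabet. The codes and descriptions are illustrative; the check uses descriptions because ICD-9-CM codes themselves are numeric.)

```python
# Sketch of a screen for the keying artifact: tally the first letter of
# each primary diagnosis description and look for a pile-up at the top
# of the alphabet. Codes and descriptions here are illustrative; the
# check uses descriptions because ICD-9-CM codes themselves are numeric.
from collections import Counter

def first_letter_profile(primary_codes, descriptions):
    return Counter(descriptions[code][0].upper()
                   for code in primary_codes if code in descriptions)

descriptions = {"477.9": "Allergic rhinitis", "493.90": "Asthma, unspecified",
                "401.9": "Hypertension", "250.00": "Diabetes mellitus"}
primaries = ["477.9", "493.90", "493.90", "477.9", "401.9"]
profile = first_letter_profile(primaries, descriptions)
print(profile)                        # a spike at "A" flags the artifact
print(profile["A"] / len(primaries))  # share of primaries filed under "A"
```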

Performance information resides on numerous information systems and it is very difficult to integrate those files.

Mergers and acquisitions, and moving from legacy systems that have been designed for payment mechanisms to a system that is more user friendly and allows clinical data to be available for measurement -- that transition is often problematic for many plans, and requires a lot of resources, both financial and internal personnel resources.

The creation of a central repository, where you have data scattered all over, is presenting significant challenges to health plans.

It is not that they are not trying to address these issues, but there are some significant challenges out there that they are trying to cope with.

There is inadequate oversight of vendor data. Oftentimes we see that not only the delegation of the management, but the delegation of the data processes that are used to collect data on mental health and other carved out benefits lack adequate oversight from the plan's perspective.

There are insufficient standards out there for those data capture mechanisms, and inadequate documentation.

To the extent that a health plan is receiving information from a carve-out vendor, there is little practical knowledge of how that data is captured and represented in those EDI transfers.

With all that said, some of the immediate action that I think this committee can assist in is encouraging plans to improve on the capture and use of currently available data, and creating an environment that rewards the automation of data.

We have seen the processes that have evolved, where a provider is making notations on a form and sending that form to a health plan, and a data processing clerk is entering that information, or it is coming through in an EDI form and pending for review and manipulation. That process is fraught with errors.

To the extent that we can automate that data and make the entity that is responsible for reporting and manipulating and reviewing that data, the one that is responsible for the data quality, we think we are going to go a long way in trying to address some of the quality issues that are out there in the industry.

We can give some guidance on data management practices, as well as on capital planning for investments in information infrastructure. There are some plans that are very adept at doing this kind of work. We are finding other plans that are significantly under-resourced and do not have the capacity or, in some cases, the knowledge to manage the data in the way that it should be managed.

We are seeing that the industry as a whole is trying to play catch up in information infrastructure, and it is going to take significant amounts of capital investment to get it to the point where we think it needs to be.

Over the next three to five years, we feel that federal standards for structure and content, definition and coding of elements of the medical record would go a long way.

This, I think, talks a little bit to your specific interests, Don, as well as to federal legislation governing standards for privacy of individually identifiable health information.

We can encourage vendor software that automates the patient record to be in compliance with standards, and encourage communication technology for the sharing of data.

We don't envision a system where all data is going to be held in one repository. So, to the extent that we can encourage the technology to help with the transfer of that data, I think it would go a long way. Thank you very much.

DR. DETMER: Thank you. We will be engaging in a fairly broad discussion here in a bit, but I wanted Arnie to get his thoughts on the table, too.

Agenda Item: Panel on Data Quality Standards - Pacific Business Group on Health.

DR. MILSTEIN: Thanks. My wife is a social scientist and she advises me never to participate in something that involves one-way exchange for more than about 30 minutes.

I realize we are at about 60 minutes now, and I also realize that my track record of going against her advice has not been a very good one, but let me proceed anyway.

DR. DETMER: We are a well conditioned group.

DR. MILSTEIN: Okay. Thanks for the opportunity to speak with you. I come to you as a clinical advisor to what I would call the quality activated end of the purchaser spectrum.

I am coming to you with the belief that you can play a pivotal role in the ability of both the buy and the sell side of the health care market to advance quality.

I have organized my comments in two parts. First, I am going to help you see the current equilibrium through our eyes as purchasers, and secondly, advocate for some specific improvements.

I do so without even a beginning knowledge of what your starting point is. So, if I am pulling you through territory you have already been through, I apologize in advance.

The world of quality and its health data infrastructure is certainly not a pretty picture from the point of view of any of the purchasers who are increasingly able to understand what this really looks like.

The features of the landscape from our perspective go something like this. In health care we, in the majority, don't know what the right thing to do is.

For what we do know to be right, we are not capturing the necessary data elements to ascertain the frequency with which rightness is happening.

When we are capturing the right data elements, their accuracy and completeness is not too good, as described by my colleagues.

When we do get good information on performance, the quality performance levels are relatively abysmal. Anybody who has had a chance to sort of scan through Quality Compass to see what the scores are, or to look at research published on the pre-managed care world by people around this table, knows that our starting point in terms of quality is not very good.

Finally, we have Don Berwick telling us that the health industry's motivation to get excited or interested in quality improvement or quality management is fairly depressing. So, that is a brief overview of the landscape.

Now, enter stage left the National Committee on Vital and Health Statistics, making recommendations to HHS on HIPAA regulations.

I think this is, you know, a potentially pivotal role and certainly a formative moment in American health care quality management.

How can you help my purchasers, who would like to advance quality and rescue quality from the orphan status that it has been in, I guess, since the beginning of time?

Here is my short but not small list of what we would like on the buy side from you, or at least in terms of your recommendations to the Secretary.

First of all, I would say why not leverage the very excellent multi-lateral thinking that went into volume IV of HEDIS 3.0, a road map for health information systems if we are going to advance quality of care in this country.

It is portrayed through the framework of a health plan world, and it would need some tweaking to pull in care not provided through health insurance plans.

From our perspective, it is a huge advance. It is, though not perfect, a long way toward perfect in our eyes and, if you like its time frames, it is a seven-year pathway, at least, from our baseline to nirvana in terms of information systems underlying health care.

It addresses data content, data connectivity and data quality, and it assigns specific roles among the plans, the providers and the purchasers, because the purchasers have some responsibilities too -- to begin to think through huge progress in the content, accuracy and timeliness of enrollment data, among other issues.

If the Secretary of HHS did nothing more than maximally exercise her authority to make this vision a reality within seven years, we think that quality of care would be very well served.

If you don't like that idea, all my other ideas are really just simply attempts to pull out what I think are some of the highest value pieces of what is in here.

They include things that I am sure you have already thought through, like: unique patient identifiers; standardized mandatory reporting of lab values and pharmacy data as a condition of participation or payment; extending a model like the UNOS transplant outcomes reporting framework to other hospital treatments that have been associated with high rates of avoidable death and disability; doing the same for the ambulatory management of selected chronic diseases; and more efficiently coordinating with JCAHO vis-a-vis hospitals, and NCQA vis-a-vis managed care organizations and physician organizations, by considering granting them deemed status and specifying future mandated data elements required to feed new quality of care measures as they are adopted by these organizations.

One of the things I see as a participant on NCQA's committee on performance measurement, which develops these new HEDIS requirements, is that the process goes something like this.

We look at a lot of potential measures and then we get to what is called the feasibility screen. The feasibility screen relies heavily on what data elements are required to feed these quality measures.

Because those elements are not mandated, a lot of good measures get dropped at that point.

Well, if the people developing quality measures for plans and for hospitals could do so with confidence that -- with some advance warning to the industry, and subject to review and approval by you -- they would be able to get the data elements they are looking for, we would have a much more robust set of HEDIS quality measures than we have today.

I liken the process of trying to go from HEDIS 2.5 to HEDIS 3.0 to aspiring to build a skyscraper in a swamp. There is only so far you can push the HCFA 1500 and UB 92 data set and transform it into meaningful quality measures.

If the people engaged in that had known that they had some kind of deemed status, subject to your approval, to call for certain data elements, HEDIS 3.0 would have been much more inspiring than, in the end, it was, although I am a great supporter of what we did come up with.

If it is potentially within the Secretary's domain, let's use much, much bigger financial incentives to build a business case for the accuracy and the completeness of the data elements that we need for quality measurement.

At the end of the day, poor quality data or incomplete data is analogous to allowing our commercial airline industry to operate with semi-opaque windshields.

We don't allow it. Historically, we have allowed it in the health industry, but maybe it is time to just say it like it is.

It has huge implications for our ability to assure quality, accountability, and raise those quality scores. This is America. Let's make a business case for it.

Let me close by saying that I obviously have the luxury of a customer's perspective on a very complex problem.

I can say what I need in blissful ignorance of much of what it would take to get there, or of the political resistance that would be associated with a fair number of my recommendations.

Although I may be relatively naive about the implementation challenges, I am very aware of how incredibly pivotal your recommendations, and the Secretary's role in implementing HIPAA, can be in emancipating us from the lowly state of American quality and American quality management and, for those who have looked at it, the mind-boggling levels of avoidable suffering and resource waste that such a lowly state continues to generate.

I urge you not to be incremental in making your HIPAA recommendations. Be bold and breathe some life into the concept of quality accountability in the information age. You will certainly have our support as purchasers.

DR. DETMER: Great. I thank each of you, as well as our committee, for listening through this very good -- but long -- set of input. With that on the table, though, let's now get into the discussion part.

DR. STARFIELD: Thank you. This has really been informative. I have a two-part question. I don't care who answers it. The second part depends upon the answer to the first.

Does HEDIS rate plans on the quality of their information systems?

MR. TIERNEY: Currently it does not. We are putting together, and will be releasing for comment, information standards that will be part of our accreditation process.

Those standards would be implemented in the year 2000, I think. That gives plans adequate time to get their information systems in line, so that they are prepared for that review in two years.

DR. STARFIELD: It seems to me that the biggest incentive to building quality information systems is if a plan is going to be rated on its information systems.

I think we would be interested in seeing what you are doing and perhaps working with you on what you are requiring in terms of rating.

MR. TIERNEY: I can certainly forward those standards.

MS. COLTIN: I was just going to say, in addition to that, by including some of the actual HEDIS measures in the accreditation process, plans that don't have the information systems capability to produce some of these measures don't have the ability to achieve scores for their performance on those measures.

If you don't report it at all, you get a zero. If you do report it, you get scored on what your performance level is.

MR. BLAIR: Not being able to see who testified -- I think the last person was Arnold; is that right?

DR. MILSTEIN: Yes.

MR. BLAIR: Arnold, you started to list your recommendations and, of course, you wound up suggesting the road map, volume IV of HEDIS 3.0, and then you wound up indicating the specifics within it, if that wasn't embraced in total.

You started to go down the list and I think your second item was like clinical lab reports.

The third one you mentioned was an outcomes measurement tool of some kind that I had not heard of. Could you tell me what that is, what attributes it has and why you focused on it?

DR. MILSTEIN: Yes, I had referenced the UNOS transplant outcomes reporting framework.

MR. BLAIR: UNOS?

DR. MILSTEIN: UNOS.

DR. DETMER: United Network for Organ Sharing. It is the network for transplants.

DR. MILSTEIN: For transplants, but it is a nice example of what HHS can make happen. As I understand it, it was a nice collaboration of transplant surgeons, institutions and health services researchers who basically said: if we want to begin to compare risk-adjusted outcomes across institutions doing this high-risk surgery, what is the most economical subset of outcome measures and risk adjustors? I don't think there were any process measures, although going forward, one would hope that there would be measures of right processes at the same time.

That was the essence of it. The thing that I like about it is that it is published for purchasers and consumers to look at.

So that, if I or somebody in my family needs a transplant in the west, I can go to a data base and see which institution has the most favorable risk adjusted outcome and select accordingly.

It also takes those institutions with poor scores and creates some public or market pressure for them to understand why, and get better.

Whether it is a problem other than data reporting, or a problem with data reporting itself, it is certainly a nice incentive for them to fix it.

DR. DETMER: And it is a model that is out there at the moment. Kathleen?

MS. FYFFE: I have a couple of questions for all of you. About five years ago I provided comments to NCQA, Margaret O'Kane and so forth, about the earlier versions of HEDIS.

At that time, I went through all these criteria and I simply had to put an NA -- meaning not applicable -- next to many of them, because they simply did not apply to traditional indemnity insurance plans.

Now, one question I have is whether that has changed. The other thing I would like to bring up -- and I don't have my estimates here with me today -- is that we are not completely in the world of managed care yet, in reality, because roughly 45 percent of the transactions and business that go on are not managed care, but are more managed indemnity or point of service.

So, there is still a whole lot of fee for service types of transactions that go on. I am wondering how HEDIS, the current version of HEDIS, might deal with that.

Also, I am wondering, the Pacific Business Group on Health, if you have plans out there that are still purely fee for service, and how HEDIS would be applicable to them.

I have asked a lot of questions and made a lot of comments, and I pose them to everybody, really.

DR. MILSTEIN: I think a fair statement is that HEDIS is a nice set of measures that embody what you would want to have happen to people, no matter what kind of health insurance plan they were in.

One of the reasons that there has not been a mass movement to apply them to unmanaged plans is that there is no place to go if you are unhappy.

Suppose there is no network and no network medical director, and let's say that mammography rates are way south of where they should be, or that a very high percentage of patients who have had heart attacks are not on beta blockers afterwards.

It isn't that those are not very valid measures of quality. It is that there is no locus of accountability to go to, to get those numbers up. So, there has been, you know, less interest in measuring at that level.

When there are units of accountability, you know, progressive purchasers have started to measure.

For example, General Motors, in some small communities in the midwest where it has very big concentrations of people, has said: we don't have managed care plans, but we have local delivery systems in these communities whom we can begin to engage on this.

So, they have begun to calculate those HEDIS measures in those communities and have then gone to the disorganized, unmanaged delivery systems there to say: what about this? Do you want our factories still operating in your town 20 years from now? Let's fix this.

MR. TIERNEY: There also has been some thought given to moving the unit of measurement to the provider level.

The problem with that unit of measurement is that your denominator size is oftentimes too small to render any type of accurate assessment.

NCQA is very interested in trying to move that unit of measurement throughout the whole system, not just at the managed care health plan level.

DR. NEWACHECK: I would like to thank the panel, too, for some excellent presentations. My question is about consumer surveys.

I think they hold the potential to be very valuable to both consumers and employers in choosing health plans, but I was very concerned about Kathy's remark earlier about how, in NCQA's HEDIS surveys, we are getting response rates down in the low 40s, in some cases down to 17 percent.

Particularly when we know there is bias in the nonrespondents -- as you pointed out, Kathy -- it seems to me that the potential for misinformation and inaccuracy is very high with those kinds of response rates.

We may even be doing a disservice to purchasers or consumers if we are providing bad information.

I am wondering, is NCQA doing anything to improve response rates? Is it willing to invest more, or is the plan willing to invest more, to produce response rates that are more toward the rates that most survey research firms and government agencies would consider to be acceptable; for example, the 70 percent kind of response rate that HCFA is looking for?

MR. TIERNEY: It is a concern of NCQA, remembering that last year was really the first year that the survey was fielded. We did some pilot testing.

We are taking the results of that implementation back, though, and doing what we can to improve that response rate. We are concerned with the response rate as it stands today.

DR. ELSTEIN: We are also concerned with non-respondents, and we are trying to figure out -- basically through a survey of non-respondents -- whether their care is "worse," which I think some of the literature may show, or whether they are satisfied and just don't want to answer the survey.

DR. NEWACHECK: But you are at least in the 70 percent range.

DR. ELSTEIN: No, we are not there. We are hoping to get to the 70 percent range.

DR. NEWACHECK: Oh, you are hoping, okay; all right.

DR. ELSTEIN: Actually higher. Whether we are cockeyed optimists or not, we think that it can be done.

MS. COLTIN: There were two actions taken over the past two days at the CPM that I think should help a great deal.

One is simply moving us closer to a single gold-standard survey, so that we don't have everyone trying to implement their pet survey and, therefore, so many different surveys floating around out there that not only can you not compare results, but the volume contributes to this over-surveying problem, because everyone is using their own survey.

If you had a single standard survey, administered by an objective third party, that would satisfy the needs of most users of this sort of information, I think it could help cut down on that over-surveying problem.

We know that over-surveying is a major contributor to falling response rates because, internally, we have been doing this for years, and we have been watching the response rates go down as the volume of surveying goes up.

We also have developed systems to cross-check our survey populations against the populations for other surveys we have sent out, to try to minimize duplication wherever possible.

It is tricky, trying to do that and still have a representative random sample, and yet not keep going after the same people. So, we are worried about that.
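To make that cross-check concrete, here is a minimal sketch in Python of drawing a survey sample while excluding recently surveyed members. The member identifiers, the exclusion rule and the sample data are all hypothetical, and a real design would also have to verify that the remaining frame stays representative:

    import random

    def draw_survey_sample(members, recently_surveyed, n, seed=1998):
        # Exclude anyone surveyed recently, then draw a simple
        # random sample from the members who remain.
        frame = [m for m in members if m not in recently_surveyed]
        rng = random.Random(seed)  # fixed seed for a reproducible draw
        return rng.sample(frame, min(n, len(frame)))

    members = [f"M{i:04d}" for i in range(1000)]
    recently_surveyed = set(members[:200])  # say, surveyed within the past year
    sample = draw_survey_sample(members, recently_surveyed, 50)
    print(len(sample), all(m not in recently_surveyed for m in sample))  # 50 True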

I think the CPM's action in endorsing and adopting a converged NCQA member satisfaction survey and Consumer Assessment of Health Plans (CAHPS) survey at its meeting on Monday is a big step in that direction.

DR. NEWACHECK: CPM?

MS. COLTIN: The Committee on Performance Measurement, which is the NCQA committee responsible for HEDIS.

The other decision that they made was to endorse a change in the survey protocol, which would now include telephone follow-up.

The experience from the pilots has been that telephone follow-up does increase the response rate by about 20 percent. So, they are hoping to get it up to a 60 percent minimum response rate through that sort of action.

DR. MILSTEIN: Also, Kathy referenced the problem of people being over-surveyed and that, in turn, driving down the response rate.

That is a problem that can be very directly affected, through the HIPAA regulations, by building a much more robust administrative data set to measure quality.

I mean, one of the reasons we have a lot of surveys going on is that surveys are essentially trying to substitute for what could be solved, in terms of quality measurement and management, by a good administrative data system.

We have got surveys -- I mean, I am not even sure that the so-called Healthy Seniors measure would have been approved by the CPM if we had had great measures of the 40 health care processes that have the most opportunity to impact the health status of the senior population.

If those 40 data elements had been in our data set, we might not be using Healthy Seniors, with all its problems, today.

DR. IEZZONI: I wanted to talk about the Healthy Seniors survey, because it is troubling to me. When we talk about CAHPS, or assessments of satisfaction, it is very easy for somebody to say, I am dissatisfied and unhappy.

The CAHPS survey, frankly, asks about experiences with care, and it is very objective. It is not personal.

When you talk about sending out or doing telephone follow ups to survey people about their functional status and their mental health, I think that things become very different.

It is a very different type of survey than asking somebody if they are happy with the care they receive.

I frankly had a privacy concern about this. When our subcommittee on population-specific issues was out in Arizona, we heard very compelling, plaintive testimony from a mother who works for an advocacy group for parents of children with disabilities.

She said that, regardless of whether they are told that their health care services will not be compromised by how they respond to repeated surveys about their child's functioning, they really do not want to respond honestly about their child's functioning.

They feel that they are potentially at risk if they state honestly the level of disability of their child.

So, I am just very concerned about this type of survey stepping over the line into private things about people and their lives. It is troubling to me that this would be coming through the mail or over the telephone from some sort of payer, or even an employer. Can people comment on that?

DR. ELSTEIN: I will try to answer the last part of that, but I am not sure -- I share your concern and I was nervous when you said Arizona, because this Healthy Seniors measure was fairly thoroughly pilot tested in Arizona on the Medicare-aged population, and pretty successfully.

The SF36 itself has been tested a lot, and this is a modification of it.

I am concerned about the confidentiality issue and, therefore, about the possible non-responders, or the inaccurate respondents, which is actually more serious.

The plans will not know which people are being given the surveys, either CAHPS or Healthy Seniors. Now, that doesn't give the person himself or herself any assurance, but I hope that it is a little bit of assurance to this group.

If we want to know functional status, I think there is a general concurrence that knowledge of functional status is important. We have to go with some sort of survey approach.

DR. IEZZONI: What you are talking about is SF36 scores. That is just one way to measure functional status. So, I think you need to be clear about that.

DR. ELSTEIN: I understand that. This tool, as I sort of implied, was never perfect to begin with, and we have had technical panels working quite thoroughly on it. I would be happy to get the input of this committee also, and to work with the people who are more involved with this measure than I am, and with NCQA, on trying to address some of Lisa's concerns.

DR. IEZZONI: Can I just follow up on that? I don't usually like trendy words, but there is one trendy word that I actually like. That trendy word is actionable.

A quality measure is actionable if you know what to do based on the information in the measure.

For example, if we see a mammogram rate for women over 50 of 30 percent, we know what we want to do to improve that.

It is absolutely not clear to me at all how change in SF36 score over two years will tell health plans anything about what to do.

DR. MILSTEIN: First of all, I stand by my earlier comment that we might not need the SF36 if we fixed the problem with more robust administrative data, and that is my preferred solution.

I have to say that the use of that method of measuring quality has shown me some very interesting, unanticipated, favorable consequences that really do fall squarely in the actionability realm.

It goes something like this. We, at PBGH, have taken that same approach to measuring quality and implemented it for our pre-senior groups, our 50-to-70-year-olds, using our large physician groups along the west coast as our units of analysis rather than the health plans.

In California, all the health plans -- most of them -- are contracting with the same medical groups, so the quality variation is unlikely to be at the health plan level.

The dialogue over actionability has really, I think, opened up some real expansion in physicians' -- especially medical directors' -- thinking about what it means to be responsible for longitudinal health status.

Here is a snatch of the dialogue. It dawns on them -- not necessarily before they sign up for this exercise, but at some point along the way -- that they are being held accountable for risk-adjusted change in functional status over time.

Some of them called me up and, as they began to really understand this, basically said: this is completely unfair.

They say, we are mere physicians. What we do is we make diagnoses and we write prescriptions. That is our job in society.

One of them actually said to me -- and this really should be recorded for the annals of medical history -- we are physicians. We are not actually in the health business.

Great comment, but what has been terrific has been to engage them around this problem: you now have responsibility for longitudinal management of the health status of this population; what do you do?

One of the things it has led them to is that they have to do more than simply deliver medical technical services correctly.

They have to begin to get into the so-called health business and think about modifying health behavior, and about modifying environmental vectors that affect health status, such as beginning to fall-proof the homes of dizzy seniors, et cetera.

It has been a fabulous tool for me, operating from the purchaser side of the market, to begin to really stimulate tremendous thinking and dialogue on their side -- which they would in no way, I think, have been particularly interested in had they not realized that, beginning about eight months from now, their names and their rankings on their ability to maintain risk-adjusted health status are going to be published in the newspapers, put up on the internet, et cetera.

DR. IEZZONI: I am very excited to hear that. I think that is a thrilling outcome. But are you going to be willing to pay for the following: that the doctors are going to need more than 12 minutes to see patients, because it is going to take more time to talk to patients about their functioning?

Are you also willing to pay for the grab bars in the shower, widening the doorway in the house -- the kinds of things that health plans typically have not been willing to pay for?

Might you also be willing to pay, for example, for motorized scooters to allow people who can't walk to get out of their houses and go to work?

It is not something that, in my experience, health plans have typically been willing to pay for.

DR. MILSTEIN: I don't know the answer to that question, but I do know that this dialogue is going to force delivery systems to get out of the mind set that the only thing they are inputting into the production function is the number of minutes of physician time.

I mean, as we have really thought it through on the purchaser side, what we want to do is use this crisis of raised expectations and reduced money to force the long-overdue reengineering of what it means to deliver health care.

I don't think I want physicians going from 12 minutes to 15 minutes. What I want them to do is to say: using myself alone, at my high hourly rate, as the sum total of what happens when a patient has a problem makes no sense at all.

I need telephone care delivery systems. I need to optimally leverage technology and nurse practitioners and paraprofessionals in what goes on.

I need to make some progress on the long-ignored frontier of helping patients to be the best self-managers they can be. That is what I want to engage delivery systems around. I won't pay for another three minutes of time per visit.

DR. DETMER: One would hope that if, in fact, that kind of idea really caught on, it would allow pressure to come to bear on the insurers as well, to look at this beyond their side of it, too.

Everybody has been, frankly, in a medical procedurist kind of mind set, disproportionately; I think that is accurate.

MR. BLAIR: Yesterday we had some testimony about the idea that, to begin to address quality, maybe there needed to be public and private forums.

Are any of you able to help us understand to what degree you coordinate or work with the Foundation for Accountability and the Joint Commission, with respect to its quality indicators, to try to see if there could be some more global or common approaches to quality, or other organizations?

DR. ELSTEIN: At HCFA, we are on the board of the Foundation for Accountability -- or at least hold a liaison position -- and we also give them a fair amount of money.

We are very interested and involved with FACCT, in the development of measures and some of their other activities.

We also have staff -- Steve Jencks in particular -- who work closely with the Joint Commission on some of their measurement systems.

I think one of the legacies of Bruce Vladeck, when he was administrator of HCFA, was to move much more closely to this public/private forum and collaboration, which we do in a number of areas, of which HEDIS is also an obvious example.

I mean, we don't see FACCT and HEDIS as rivals. We see them as complementary approaches to getting quality measures.

MS. COLTIN: I guess I would also add that HCFA and some of the other large purchasers that participate in both initiatives have actually been instrumental in bringing NCQA and FACCT together to work jointly on measurement sets.

I know that in the area of pediatric measures, there is a collaborative effort between NCQA and FACCT to agree on a common set of performance measures, and they are also involved in the diabetes work group -- or measures advisory panel, as they are called at NCQA.

A lot of that collaboration and cooperation has actually been fostered by the purchasers who are involved with both organizations and are saying, we really need to work together here on a common agenda.

I think that is happening in the private sector with both public and private purchaser support.

MS. WARD: I represent a state agency that regulates professionals and facilities. Can you imagine -- have you ever thought about whether state agencies that certify and license, and perhaps other kinds of agencies, federal ones that do that, ought to be adding data standards and information technology capacity to their regulatory oversight?

Would that move things forward, or would it just add another regulatory threat to an already-threatened industry?

DR. MILSTEIN: My feeling is that the problem is big. Whenever you have a big problem, multiple modes of solution are indicated.

We don't let commercial airline pilots fly without being able to see out the windshield. I think my purchasers would be very supportive of state regulations that said: if you want a license in this state, you have got to submit the following data elements, with the following degree of accuracy, as a condition of having a license in this state.

MR. TIERNEY: I think that would go a long way. It is a fairly complex problem, as you say. The only caution that I would have is with the standardization. We want to try to avoid different states having different regulations for data completeness and accuracy, things of that nature.

To the extent that we can standardize that, I think it goes a long way to helping move the effort forward.

DR. HARDING: Just a question for Kathy and maybe some of the others of you. One of the first things you said was that getting the data -- completeness -- was a problem, because sometimes there are carve-outs, and the carve-outs are mental health, prescription programs and so forth.

Why can't you get that? Why can't that be part of the carve-out contract, to have that data coming in? Why does it become a black hole?

In Medicaid, for the mental health we were talking about, if there is a carve-out for Medicaid managed care, again, it is a black hole. We don't know. We don't have any feedback from it. We give them an amount of money and then we assume a lot.

MS. COLTIN: Well, there are two different levels at which carve-outs can occur. Carve-outs can occur at the level of the purchaser who says, I am only going to buy the following services from Harvard Pilgrim Health Care, and the benefit package I am going to buy does not include pharmacy coverage and does not include mental health coverage.

I am going to go to this pharmacy benefit manager and I am going to buy pharmacy coverage through them, and I am going to buy mental health services through this managed behavioral health care service.

Therefore, the health plan doesn't have any relationship with these other organizations, and has no fiduciary responsibility to the patient with regard to the services covered under those arrangements.

From a privacy perspective, I would think it has no right to that information.

Carve-outs can also occur at the health plan level, where the service is, in fact, in the package of covered services that the purchaser buys from the health plan.

The health plan may, in turn, decide to carve out mental health and to essentially delegate or subcontract that to a managed behavioral health care organization.

In that case, they still retain the responsibility and accountability for the services provided by that organization.

They do have a fiduciary responsibility and they do have a right to the information. It is the only way that they can provide oversight to make sure that that organization isn't unnecessarily withholding services or acting fraudulently or not providing good quality care.

DR. DETMER: Lisa, George, and then I want to weigh in as well.

DR. IEZZONI: Okay, Kathy and the others: about the accuracy of ICD-9 diagnostic information, especially on things like laboratory claims and radiology and other study claims -- Kathy, you were making the point that part of the problem has to do with payer rules around reimbursement.

Let me just give an example from our own research. I recently completed a study using the National Claims History File from HCFA, where I looked at diagnoses on laboratory bills compared to the diagnosis from the bill associated with the doctor's visit.

You would think it was two entirely different people. The problem, however, is that HCFA, in agreeing to pay for certain laboratory tests and certain radiology studies and so on, requires one of a list of diagnoses.

They will only pay for the service if it carries one of the listed diagnoses. People in our practice have these lists of diagnoses, and so they list those diagnoses so they can be paid.

Has there been any effort by some of the quality measurement community to work more collaboratively with some of the big payers like HCFA, to think through the implications of this kind of situation for the quality of the data that you are also increasingly using for quality measurement?

MS. COLTIN: I think certainly there have been complaints raised, and the issues have been repeatedly brought up.

Most of us have very little leverage or influence over payer decisions about what is and isn't a covered service under what circumstances.

What we have done instead is to try to understand those kinds of problems in the data and work around them.

I can tell you, in the first year that we measured the rate of diabetic eye exams, we were picking up the diabetes code off laboratory claims.

Every time a patient had a blood glucose ordered, the claim was saying diabetes. In fact, we had a lot of people in the denominator for whom eye exams were not indicated because, in fact, the patients didn't even have diabetes.

It took us two iterations of HEDIS to refine the algorithm for identifying who was a diabetic based on claims data, and we do not use laboratory claims to make that determination any longer.

We use outpatient claims and, in fact, require there to be two outpatient claims with the diagnosis, because we are afraid that doctors may be putting it on the outpatient claim the first time to justify a rule-out decision or some other sort of procedure.

If the diagnosis is repeated a second time, we accept that two occurrences indicate the patient most likely does have it.

It is those sorts of trial and error approaches to building valid measures that have been used to try to work around some of these data quality problems, rather than to solve them.
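To make that refined rule concrete, here is a minimal sketch in Python: ignore laboratory claims entirely, and count a member as diabetic only when the diagnosis appears on two outpatient claims with distinct service dates. The field names, the ICD-9 prefix check and the sample claims are hypothetical simplifications; the actual HEDIS specification is far more detailed:

    from collections import defaultdict

    DIABETES_PREFIX = "250"  # ICD-9 category for diabetes mellitus, for illustration

    def identify_diabetics(claims):
        # Collect distinct outpatient service dates carrying the diagnosis,
        # per member; laboratory claims are skipped entirely.
        hits = defaultdict(set)
        for claim in claims:
            if claim["claim_type"] != "outpatient":
                continue
            if any(code.startswith(DIABETES_PREFIX) for code in claim["diagnosis_codes"]):
                hits[claim["member_id"]].add(claim["service_date"])
        # Two occurrences are taken as evidence the diagnosis is real,
        # not a rule-out code entered once to justify a test.
        return {member for member, dates in hits.items() if len(dates) >= 2}

    claims = [
        {"member_id": "A1", "claim_type": "laboratory",
         "diagnosis_codes": ["250.00"], "service_date": "1997-02-01"},  # ignored
        {"member_id": "A1", "claim_type": "outpatient",
         "diagnosis_codes": ["250.00"], "service_date": "1997-03-10"},
        {"member_id": "A1", "claim_type": "outpatient",
         "diagnosis_codes": ["250.02"], "service_date": "1997-06-22"},
        {"member_id": "B2", "claim_type": "outpatient",
         "diagnosis_codes": ["250.00"], "service_date": "1997-04-05"},  # one rule-out visit
    ]
    print(identify_diabetics(claims))  # {'A1'}; B2 never reaches two claims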

DR. IEZZONI: Just in quick follow up, I mean, this points out the inaccuracy of the data that HCFA is, in fact, getting for paying the claims for these services.

It is obvious to every doctor that you want to measure blood glucose, even if the patient doesn't necessarily have diabetes. Ruling it out is a very appropriate clinical thing to do.

I guess I don't want to get into a kind of shadow of the E&M guideline discussion that we had yesterday, about the need to validate the accuracy of the diagnostic coding on laboratory and radiology claims, but it is just a kind of curious situation.

DR. DETMER: Bob, did you want to comment to that?

MR. MAYES: Yes, just briefly, although I did have another comment.

DR. DETMER: You talk.

MR. MAYES: Those of us at HCFA in the Office of Clinical Standards and Quality have long recognized the severe limitations of using administrative data for quality types of activities.

We spend an extraordinary amount of money getting our data in other ways, mainly through chart abstractions.

We are quite aware of it within HCFA and we really don't see any solution from the claims or administrative side.

We think we have probably gone as far down that road as we can, because those data are fundamentally for a different business purpose.

DR. DETMER: I think really the computer based record is the only way we will ultimately get to that one.

MR. VAN AMBURG: This question is primarily for Kathy, but you can all chime in on it. We have heard a whole list of problems of data quality and completeness, and reasons why that is.

What are two or three key steps that could be taken to improve completeness and quality for the maximum return on investment, and how much do you think that would cost as a percentage of your operating budget for information systems?

DR. DETMER: The old 80/20 rule.

MS. COLTIN: The two things that I would probably push for most -- one of which I am very optimistic about -- are the standardization issue and movement toward EDI.

I know that, in our organization, EDI will greatly increase the range of data fields. Despite the limitations of the HCFA 1500 and UB 92, many times we don't even have all those data fields.

When you are talking about manual data entry, the cost of entering every data element that is not necessary for payment, even though it may be useful for quality assessment, is a trade-off that doesn't always go in the direction we would like.

I think the standardization and movement toward EDI would be one thing that clearly will help.

DR. DETMER: What all do you encompass in the standardization definition?

MS. COLTIN: I think clearly standard code sets, standard identifiers, standard data definitions, standard rules for using particular kinds of codes or for recording particular kinds of data elements.

My pet peeve -- and this committee has heard me say it before -- has to do with things like the service dates around a global code, prenatal care being my prime example.

It shouldn't just be the admission and discharge dates. If the code represents prenatal and post-partum care, the dates should be the beginning and the end of the episode.
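As a concrete sketch of that point, here is how a claim line for a global code might carry episode dates rather than admission and discharge dates. This is purely illustrative; the field names are hypothetical, and 59400 is cited only as a commonly used global obstetric care code:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class GlobalServiceLine:
        procedure_code: str  # one global code covering the whole episode
        episode_begin: date  # first prenatal visit, not the delivery admission
        episode_end: date    # end of post-partum care, not the discharge

    # With episode dates, the true span of care is recoverable from the claim.
    line = GlobalServiceLine("59400", date(1997, 3, 1), date(1998, 1, 15))
    print((line.episode_end - line.episode_begin).days, "days of care under one code")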

So, there are things like that, that would go a long way. The other thing --

DR. DETMER: Before you answer the second one, would you include things like timeliness standards as well, as part of this?

MS. COLTIN: Sure. Well, I think EDI will play a big role in improving timeliness as well.

DR. DETMER: Okay, and accuracy? Is there a way you can have a standard for accuracy?

MS. COLTIN: I think that accuracy is going to continue to depend a lot on payment incentives, and I am less optimistic about our ability to affect that successfully.

I am not giving up entirely, but I am far more pessimistic about that, because no matter what we try, there will be a way around it.

You feel like you are kind of going through a maze and running into walls all the time.

DR. DETMER: Do you call severity adjustment methods a standard as well?

MS. COLTIN: Severity adjustment or risk adjustments?

DR. DETMER: Well, risk adjustment. I am just saying there are other kinds of things.

MS. COLTIN: Well, that depends a lot on accuracy of coding and, because that depends so much on payment mechanisms, I am not as optimistic. I think it is subject to gaming.

You can improve completeness of coding by using case mix but whether, on the opposite end, you then create incentives to up-code or overstate a patient's case mix or severity, I don't know. You are going to have to balance.

DR. DETMER: Yes, whatever. What is your second one?

MS. COLTIN: My second one is feeding back information. If you have good information systems and you are able to actually use the information and give it back to providers in a way that is useful and actionable for them, they will start to see the value in the data, and I believe they will give you better data.

MR. VAN AMBURG: What about the percentage of budget it would cost to make all these conversions to a standard?

DR. DETMER: Oh, yes, sorry. We wanted a price tag.

MS. COLTIN: The cost? I think HCFA has actually gotten some better estimates on that than I could provide, in terms of looking at some of the costs of implementing the HIPAA standards.

MR. MOORE: It is costing us more money than we have, and we have more money than all of you, to tell you the truth.

I think on the surveys, we are up -- do you have the estimate on what it is costing to do those surveys to gather --

DR. ELSTEIN: We are paying for the CAHPS survey; the plans are going to pay for Healthy Seniors. I think what it was referring to is --

MR. MOORE: To clean up the data, it is in the billions for all of us.

DR. ELSTEIN: Just to sort of build on what Kathy had said about the payment system: as bad as the coding is now, it was much worse before.

Prior to HCFA's release of the hospital mortality information -- and five or six years ago I spoke before this group, in one of our different lives, about that -- no matter what you say negatively about it, it did give the hospitals incentives to report better data in various areas.

I think when the Balanced Budget Act requirements on consumer information on HEDIS are fully implemented, as opposed to now, the plans will obviously have a better incentive to report better data.

MR. MAYES: I just wanted to make a quick comment. I think there is an area that we need to be focusing a little more on.

From a systems architect perspective, there are some really fundamental differences between an administrative system and a clinically-oriented data system.

Administrative data systems are basically very bounded systems. In our world, they happen to be bounded by the 1500 and the UB92, which may have a lot of elements, but they are still quite bounded.

Your approach, and the approach that we discuss often here, which is an element-representation type of approach to standardization, is perhaps feasible, although not easy, within that kind of bounded system.

You can actually get down to the representational level and start talking, as we have under the HIPAA standards, about data definitions, et cetera, et cetera.

The clinical world, by contrast, is not infinite, but it comes close when measured against an administrative system -- in the potential number of elements, the ways you would look at them, and how you are going to retrieve the information. When we move there, with the lack of any kind of discussion of broader national health information models or data architectures at the national level, I think we will find it very, very difficult to reach a meaningful consensus and standardization.

The hearings on Monday, just on post-acute care, begin, I think, to point out some of the issues.

Implementations, like politics, are local and, in Peter White's terms -- he is from Australia -- there are justifiable prejudices as to why things are different at the implementation level.

The more diverse the system that you are trying to support, the more these justifiable prejudices come into play.

I would really urge us, at some point at least, to begin to try to look, from a little higher abstraction level, at how we can actually begin to define what we mean by national health data.

Yes, eventually it has to be translated down to the element level, but I think we need to have at least some discussion of a broader framework, in order to figure out where it is important to focus.

Otherwise, we get this A-to-zed type of thing, and it is very difficult to reach consensus with that. So, I just throw that in.

MS. GREENBERG: I wanted to get back, just for a minute, to Lisa's point about what she found in her data analysis.

I think you tied it in somewhat with yesterday's discussions. I do think that physicians and others are getting a mixed message out there.

I actually am not -- without wanting to pick on anyone, any particular payer -- but I think that there are things that, for example, HCFA potentially could do about this mixed message.

The outpatient coding guidelines actually say that you are not supposed to code a diagnosis as if it were established -- you are only supposed to code to the level of specificity that you know.

Those are the guidelines that HCFA participated in developing and, as far as I know, have actually promulgated for outpatient coding.

Yet, clearly physicians, perhaps from their carriers or at some other level, are getting the message that you can't get this paid for unless you have this diagnosis.

So, there is no way to enforce a guideline when the payment incentives are directly opposed to it. I think that we have to look at this, because one thing the HIPAA process can do -- and what they are hoping to do through the standards that will be promulgated for the transactions -- is to agree on uniform guidelines that should be applied by all payers.

That doesn't mean that your coverage is the same, but at least the guidelines are used uniformly.

However, that will also be undermined if all the payment incentives are not there, or are absolutely in the other direction.

I mean, I am intrigued by Dr. Milstein's suggestion that we could do more with administrative data. It may be that administrative data are hopeless from this point of view and we really need to give up on them.

I guess I haven't yet, and I do see this as an area where we can make an impact, even at the first level of the HIPAA standards, but they need support.

DR. MILSTEIN: Don, on your question about what is the biggest opportunity for an incremental gain: this is not an area where I have a technical opinion. I rely on people from RAND and the Medical Outcomes Trust, and there are other sources, in some cases around the table, for this.

What I am told over and over again is that if we had accurate HCFA 1500 and UB 92 data, and we had accurate and complete lab value data and the typical PBM pharmaceutical data base, we would be able to create, on both the ambulatory and the inpatient side, a very substantial increase in our ability to measure quality.

That is their notion of the sweet spot or the 80 percent of the gain for the least amount of pain.

MS. COLTIN: I mean, clearly the other strategy that would move us leaps forward in our ability is the computerized patient record.

I included it somewhat under this notion of standards. Even in an organization that has access to three different computerized record systems, the lack of standards in that area really can hamper us.

In one system, we have totally unstructured free text; in another, we have structured free text. The ability to search for particular kinds of situations varies greatly because of that fact.

The coding systems are different. There are still all kinds of inconsistencies because of the lack of standards that make it difficult to really leverage some of the benefits of having this type of computerized data.

I can tell you that, in the largest part of our delivery system that uses the clinical record, the accuracy of the claims data for that part of our organization -- the administrative data, not the record data -- is much superior to any other part of the organization, because it is derivative of the computerized patient record.

It is more complete, it is far more accurate, and it is much more timely. By contrast, we have settings where we are using an encounter system, so you have got a paper medical record and you have got an encounter form.

Any time you are asking a physician to record the same information multiple times, everything after the first, more important, form of documentation starts to suffer.

What is absolutely necessary and important to record goes into the paper record. What goes on the referral authorization form or the encounter form is generally far less complete, and often more truncated or summarized than what is in the medical record.

When these other systems are fed by the medical record you don't have to endure those kinds of compromises.

DR. DETMER: We have three people who want to speak and I weighed in earlier, but didn't say anything.

Because we are going to run out of time pretty shortly, first of all, I want to commend you and the group that worked on that document.

I don't think the committee has it, actually. We are going to get copies for everybody, because as far as I am concerned it is really a superior document.

In fact, what I would like to see us do as a committee is take your suggestion and go through that document from the perspective of how it would relate to our various committees. I think there is terrific good sense in that. I just want to get that on the table.

I think the other thing that I am hearing -- because I agree with you -- is that I am frustrated beyond belief at how quality, as the data mounts, remains an orphan, in a sense.

You know, I have tried to dismiss it with Herbert Simon's observation that humans are satisficers, not optimizers. But there are aspects of our life -- you mentioned the airline industry and so forth -- where, in fact, we don't accept those kinds of standards.

Now, maybe it is because it is 200 people at a clip if something goes down, and it is so salient because it is so visible and the media see it all the time, whereas the equivalent in separate seats of a 747 every other day doesn't get noted.

I think the opportunity we have got is to try to move to standards for standards on some of this, because I do believe that the aggregate wisdom growing out of 30 years of quality research -- which started really from a very small set of inputs -- is now really quite an emerging science.

It is still emerging, but it is an emerging science. I think that, you know, we really haven't combed through that as much as I think we can, to leverage, really, where we go.

Obviously, I think what we really are talking about is how we get a value-driven health care system, because we are never going to have enough money, and how you, in fact, try to have a healthy society.

I mean, health is a value. It is a practical value; it is not just an economic value. It is just a good thing to have. A society that is healthier is more secure, more productive, and so forth.

It seems to me that also speaks to the underinsured and uninsured as well. But in any event, I really have appreciated this.

I have one question because one of the things we are also struggling with -- this committee -- is the issue of the unique identifier for patients and how to approach that.

Let's assume for a minute -- which is a huge assumption, but not totally, because we do have a kick-in clause in the HIPAA legislation on privacy that we will, in fact, be seeing if the Congress doesn't pass something sooner.

I would just be interested: from the perspective of quality assessment, do you have a view of what kind of an identifier for patients would, in fact, be your choice if you were the czar of the week and got to essentially mandate one?

MS. COLTIN: I can tell you about the process, because we do have such a system now, and we have looked at some of the results of the process.

What we do is, using a matching algorithm that looks at the identifiers we do have, try to identify all the records that belong to the same individual in our organization.

We have 17,000 instances where different dates of birth, or whatever, ended up getting assigned to the same plan rec, which is the variable -- the unique identifier -- that we create.

We had 20,000-something cases of other data fields that looked unique, but they got different plan recs. I mean, it is not perfect.

We have tried optimizing particular data fields to do that.
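As an illustration of the kind of matching algorithm she describes, here is a minimal sketch in Python that collapses member records into a shared plan rec when they agree on either of two match keys. The fields, the match rule and the sample records are hypothetical; production algorithms weigh many more fields and still produce both kinds of error she cites, false merges and false splits:

    def assign_plan_recs(records):
        # Union-find: records sharing any match key collapse to one plan rec.
        parent = list(range(len(records)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        def union(i, j):
            parent[find(i)] = find(j)

        seen = {}
        for i, rec in enumerate(records):
            match_keys = [
                ("ssn", rec["ssn"]),
                ("name_dob", (rec["name"].lower(), rec["dob"])),
            ]
            for key in match_keys:
                if not key[1]:
                    continue  # skip missing identifiers
                if key in seen:
                    union(i, seen[key])  # key already seen: same person
                else:
                    seen[key] = i
        return [find(i) for i in range(len(records))]

    records = [
        {"name": "Mary Smith", "dob": "1950-01-02", "ssn": "111-11-1111"},
        {"name": "Mary Smith", "dob": "1950-01-02", "ssn": None},
        {"name": "M. Smith",   "dob": "1950-01-02", "ssn": "111-11-1111"},
        {"name": "Mary Smith", "dob": "1950-02-01", "ssn": None},  # transposed dob
    ]
    print(assign_plan_recs(records))  # [0, 0, 0, 3]: the last record falsely splits off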

DR. DETMER: Nothing is going to be perfect. What would you prefer?

MS. COLTIN: What would I prefer?

DR. DETMER: Yes.

MS. COLTIN: I would prefer something that got assigned probably at birth, and you know, I think that at whatever point you start, obviously you are going to have to do something with the people who are already here.

I think I would prefer to see something started from scratch.

MR. TIERNEY: Yes, I think that is where you want to address the issue.

DR. DETMER: Next, we have Kathleen, Barbara and Simon and then we will have to wrap it up.

MS. FYFFE: Two questions. What do you all think about the Social Security number as the personal identifier?

MS. COLTIN: I don't like it.

DR. MILSTEIN: I am not qualified to answer.

DR. ELSTEIN: I am not qualified either, but there have been major studies at the Social Security Administration on that issue. I understand they found some big problems with it.

MS. FYFFE: Also, one of you mentioned capital requirements for systems development. Do you have any more information on that, that you can share with us?

I am very concerned here, folks. I mean, I spent a decade planning and designing automated information systems. They are not cheap.

As Bob has pointed out -- I mean, I had to come up with an estimate one time for changing just one data element in all the computer systems in the country, as I estimated it, that had to do with processing claims. For that one data element, I came up with a figure of, I think, $3 billion.

I am really curious. I would like to see something, if you can share it with us, about capital requirements for all of this.

MR. TIERNEY: I don't necessarily have estimates on capital requirements, and it is a complicated issue. I think one of the factors that has been playing into this is that there hasn't been historically a justification for that kind of investment.

Second, it is going to involve a lot of collaboration and standardization to get to that point. When you start thinking about the different levels of the delivery system and the information infrastructure at each level -- I mean, in provider offices, we are finding areas, from when I was with health plans, where they still had rotary phones, never mind a --

MS. FYFFE: Oh, sure. One of my doctors still uses a manual typewriter. I mean, he won't let anybody take it away from him.

MR. TIERNEY: So, we are going from a continuum of that type of environment all the way through to billions and billions of dollars that are invested at HCFA and government information systems.

No one can do it by themselves. It is going to take a lot of coordination and collaboration to come up with standards and to give people an incentive, so that if they invest in those systems, they are rewarded in some way.

DR. DETMER: Kathleen, of course at the same time, it is true that people are investing a lot of money in this. That we all know.

MS. FYFFE: Oh, sure.

DR. STARFIELD: Although no change is easy, it seems to me that one of the changes that we could make relatively easily is to have standard qualifiers for diagnoses.

You know, payers wouldn't require a diagnosis of diabetes if they had the option of a rule-out diabetes. I am sure you would pay for a blood glucose with that.

What do you think about that as a relatively easy option, if we could recommend it?

DR. MILSTEIN: I would be very supportive.
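To make that suggestion concrete, here is a small sketch in Python of a diagnosis carrying a standard qualifier, so a payment rule can cover a blood glucose for rule-out diabetes while a quality measure counts only confirmed diagnoses. The qualifier values and both rules are hypothetical illustrations, not any payer's actual policy:

    from dataclasses import dataclass

    @dataclass
    class Diagnosis:
        code: str       # e.g., ICD-9 250.00 for diabetes mellitus
        qualifier: str  # "confirmed" or "rule-out"

    def covers_blood_glucose(dx):
        # Payment rule: confirmed or rule-out diabetes both justify the test.
        return dx.code.startswith("250")

    def in_diabetic_denominator(dx):
        # Quality rule: only a confirmed diagnosis puts the member into
        # the diabetic eye-exam denominator.
        return dx.code.startswith("250") and dx.qualifier == "confirmed"

    workup = Diagnosis("250.00", "rule-out")
    print(covers_blood_glucose(workup))      # True: the glucose test is paid
    print(in_diabetic_denominator(workup))   # False: no false diabetic in the measure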

DR. DETMER: Simon, last word. Unless our group wants also -- I was going to give them a shot.

DR. COHN: Okay, well, good. I was actually just going to re-emphasize, Arnie, your comments about laboratory and pharmacy data and the importance of it.

Kathy, as part of another HMO, I don't have quite the same -- I have concerns about data quality, but I think that there are ways around them.

I think that there is a lot of literature showing that good pharmacy and good lab data, as an adjunct to the claims record, can get you a real long way in terms of data quality and all of this, and I think that is what you are really talking about.

The one point I would make -- and it is just an observation -- is that currently, where the committee sees all of that data being potentially available is in the area of claims attachments, which is an area the committee is currently discussing and an activity ongoing at X12 and HL7 as we speak.

So, it may be an area that the quality groups such as you represent may want to be involved in as these attachments get developed and specified.

It is very important data. We see it as probably better, in many ways, than diagnoses, from a lot of the work we are doing within an HMO.

It will be a way that I think eventually everybody will be getting the data, or at least moving it back and forth. Anyway, a thought and a recommendation.

DR. DETMER: Yes, because I think our focus is more on administrative simplification. But I think if we are to make these systems feed quality and bootstrap toward quality while we are doing it, continued oversight, and looking at our work as we do that, is going to be very, very useful.

I would like to give each of you a chance to weigh in for your wrap-up comments, and then we will close.

MS. COLTIN: I think Simon makes a good point. To me, some of the most valuable data that we need for quality measurement and improvement is laboratory and pharmacy data.

Laboratory data is a problem because, based on a survey NCQA did, only about five percent of plans have it. So, it is not that it isn't good data; it is an access-to-the-data issue.

So, laboratory results information is what I am talking about, not simply the CPT code on a bill that says the test was done.

As for pharmacy data, I think its quality is far more variable than we would like to think.

It is not that the drug information is inaccurate. The problem that we are observing is that it is attributed to the wrong member of the family, often.

So, if the mother is picking up the asthma medications for Johnny and doesn't have Johnny's card with her, she gives CVS, or whatever pharmacy she is at, her own card, and the prescription gets entered into the data base under her card, because it will be paid. She is a member. She is covered.

Then it looks like she is an asthmatic when, in fact, it is her child. We have found that to be a problem in some of the health plans in New England.

We haven't observed it very much in our plan because of some of the protections we have put in, but it has been reported to us as a fairly serious problem in some of the other plans.

DR. DETMER: Before I hear from the other three of you, we are delighted to have Ed Sondik here this morning. He is the head of the National Center for Health Statistics. Do you want to weigh in on this before we close up?

MR. SONDIK: Thanks. This is a really fascinating discussion. I guess I have a couple of questions. I was glad to hear some points about incentives and audits and all of this.

I like the idea of a scorecard -- a scorecard on data, if you will -- and the image of people flying semi-blind is a difficult image, and very compelling.

I guess the question is whether you have any thoughts about auditing and incentives, and some of those points that have come up.

The other point, I guess, relates to health and the outcome of all of this. I get frustrated, in part, when the focus tends to be so much on process and all of these process variables, when down the line we are all interested, as you said, Don, in improving health -- health status.

I wonder if you have any thoughts about how all of these efforts can be directed more toward aiming at the ultimate measures, if you will -- the ultimate health status measures -- and whether we are on the main line toward that, or whether we seem to be on a lot of spurs, looking at a lot of issues that don't necessarily result in improved health status.

DR. DETMER: Would you like to respond to that?

DR. ELSTEIN: On the second question, I think we are on the main line. I think there is lots of work going on out there, around this table and outside there, on developing good outcomes measures.

I would like to have been able to say, three or four years ago, that we would be at that point by now. I will say that we are not at that point yet, and won't be for at least the next couple of years, but if we can at least find some good, or better, process measures, we are headed in the right direction.

I think the goal of all of us is to have some good outcomes measures, and we are working hard to try to get to that point.

DR. DETMER: Paul, while you have the floor, do you want to make any closing comments, and we will just do that?

DR. ELSTEIN: My only closing comment -- actually, I have two. One is, I appreciate the chance to be here. I think HCFA needs to hear a lot of the comments that were made around the table.

Secondly, I think there is a trade-off between confidentiality and quality, and everyone here needs to be aware of that.

Take, in particular, the HEDIS measure of an ambulatory visit after a mental health hospitalization. The data we have collected on that are, for the most part, not very good, and part of the problem is numerator issues.

Part of it is clearly confidentiality issues; we have state laws we have to work on. So, I am not saying one is more important than the other, but we have to realize there are trade-offs there.
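
[The numerator problem is easier to see against the shape of the measure itself. Roughly, the denominator is discharges from inpatient mental health stays, and the numerator is those discharges followed by an ambulatory visit within a fixed window; if confidentiality rules keep follow-up visits out of the data feed, the numerator is undercounted. A minimal sketch, with invented dates and field names and a 30-day window assumed for illustration:]

    from datetime import date

    # Each discharge from an inpatient mental health stay may or may not
    # have an ambulatory follow-up visit recorded in the claims feed.
    discharges = [
        {"member_id": "M001", "discharge": date(1998, 1, 5),
         "followup_visit": date(1998, 1, 20)},
        {"member_id": "M002", "discharge": date(1998, 1, 12),
         "followup_visit": None},   # care may have occurred but gone unreported
    ]

    WINDOW_DAYS = 30   # follow-up window assumed for illustration

    denominator = len(discharges)
    numerator = sum(
        1 for d in discharges
        if d["followup_visit"] is not None
        and (d["followup_visit"] - d["discharge"]).days <= WINDOW_DAYS
    )

    # If state confidentiality rules keep mental health visits out of the
    # data, followup_visit stays None even when care occurred, and the
    # measured rate is biased downward.
    print(f"Follow-up rate: {numerator}/{denominator}")   # 1/2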

DR. DETMER: Jim, would you like to comment on Ed's point or question and then close up?

MR. TIERNEY: Sure, thank you. I think both process and outcome measures are important, and we don't want to overlook one for the other.

I think that we have been lagging in our ability to develop valid outcome measures, and I think that is partly being addressed by this group and the concerns that are being raised.

I think that the industry has been very inventive in trying to measure quality with the tools that we have available to us.

I think we have also been very careful to make sure that whatever tool set we have put out there has been tested, is implementable, and means something at the end of the day.

To the extent that we can, through the work this committee is doing, make some improvements in the types and quality of the data we are capturing and making available, I think we will go a long way toward improving our ability to measure outcomes which, as I said, has been lacking.

I would like to thank the committee for allowing us to participate.

DR. DETMER: Thank you for your contribution. Arnie?

DR. MILSTEIN: Responding to Ed's point, I think what we want is sort of the most efficient measurement path to outcomes.

Whether that means measuring outcomes directly or measuring those processes that are well correlated with outcomes, I am neutral; what we want is the most efficient combination of measures that captures outcomes.

In terms of my closing comment, at the end of the day what I am mostly going to be watching, as I follow the progress of this committee and, ultimately, the Secretary's recommendations on HIPAA, is something that Ed referenced, which is the teeth side of things.

I have complete confidence that this process will result in good recommendations on content, but we have a long history of prior recommendations on what data ought to be provided that were not accorded very much effort by the health industry.

So, let's take that learning and, this time around, risk being accused of being over-regulatory or over-punitive, and get some teeth behind completeness or accuracy or whatever you do recommend ought to be collected.

DR. DETMER: Good.

MS. COLTIN: I would say that, as we increasingly focus on health and improving health, with the outcome measures really intended to evaluate whether we are succeeding, the point Arnie made about the need to engage a wider variety of partners in the care process is a very important one from a data perspective as well.

If it is the case manager going in and doing the home assessment, or more services being delivered through home care agencies and the like, we really have to pay a lot of attention to how we capture information provided by perhaps non-traditional caregivers, or by traditional caregivers in settings where data are not as closely monitored or of the same caliber in terms of quality, completeness and comprehensiveness. We really need to attend to that.

I think that we also need to recognize that there are an awful lot of outcomes out there that only the patient can tell us about.

There are relatively few outcomes that are objective clinical outcomes that can be obtained even from a computerized patient record, and we are going to have to go to patients for this kind of information.

Patients are increasingly wary of who is approaching them and asking them for this information because they are not used to all these non-traditional partners in their care, whether it is a disease management vendor or a case manager or whatever.

Their concept is still the physician, and I think we are going to need to do some education of patients if we are really going to improve health.

We have got a broader panel of caregivers that are involved in that process, and it is appropriate for them to share information and to work with the patients to try to accomplish that.

DR. DETMER: We are actually beyond our time a little. This has been kind of a marathon session, but I think it has been very interesting and very useful.

Kathy and I had a conversation at the end of the day yesterday, walking over to Union Station, about some of the standards, or process standards, it seemed to me, that could emerge from some of the work she is doing.

I would like for us at some point to get a chance to hear a little of that.

I thank you for taking the leadership on this. I really hope you continue to speak out, because I think it is very useful. Good luck with those twins, the NCQA work, and, I guess, taming that 900-pound gorilla -- or leading it; clearly, having it follow in the direction of quality will be useful as well.

Why don't we adjourn for an hour and then we will come back and get back to work.

MS. GREENBERG: Wait; an hour?

DR. DETMER: Make it -- it is noon right now by my clock, so back at 1:00.

(Whereupon, at 12:00 noon, the meeting was recessed, to reconvene at 1:00 p.m., that same day.)


Afternoon Session