[This Transcript is Unedited]

DEPARTMENT OF HEALTH AND HUMAN SERVICES

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

Workgroup on Quality

September 25, 2002

Quality Hotel
1200 North Courthouse Road
Arlington, Virginia 22201

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway, Suite 160
Fairfax, Virginia 22030
(703) 352-0091

P R O C E E D I N G S [8:09 a.m.]

MS. COLTIN: This was to be a review of draft recommendations on the National Healthcare Quality Report and really wasn't so much a review as developing draft recommendations.

Then the second was to review some of the common issues and themes that Susan was able to extract from all of the panels' presentations and testimony that we have taken over the past four years now, as well as recommendations that had been offered by the panel for us to consider before going forward with the quality report.

We felt this was really the best meeting for us to try to formulate draft recommendations so that in the November -- at the November meeting, we can have a draft report that includes draft recommendations. The recommendations really have to be developed much more through a consensual process; whereas, the report, we can put a draft together and then everyone can have a shot at edits.

Since the people from AHRQ are not here yet, I am just going to suggest that we look at these two items and start with reviewing the common issues and themes that we heard regarding quality measurement. You should have a document in your packet that looks like this. It is titled Themes in Testimony and NCVHS Reports: Data Issues in Measuring Quality.

It is really primarily the first four pages that we are going to be dealing with. The remainder are actually sort of the backup to the first four pages. They give you the specific meeting at which these issues were raised and some context about who else presented at those meetings and whether those issues were discussed at the time. Whereas, in the front, the issues are organized primarily around data sources for measures and what some of the problems were with those data sources. The first section has some general issues and then after that, you will see there are obstacles related to administrative data, surveys, medical records and then a broad category of other obstacles.

Then after that, there are some very specific data issues about data elements that might be needed, for example, or problems with existing data elements. So, I want to give people -- if they haven't seen this, it was e-mailed out in advance. I don't know if everyone got it. I think what I would like to do is allow you to take a few minutes to look it over and, first of all, if there are any comments here that you would like to ask a question about or seek some clarification on, let's do that.

Then my suggested approach is to identify what are the priority areas here. Generally in our report when we develop a set of recommendations, it is a very finite number of recommendations. I think somewhere in the 10 to 15 range is about as high as we have gone most of the time. We certainly can't make a recommendation based on every issue that is raised here. So, some of them may group together and we can make one recommendation.

Others may be comments that were offered by only one presenter and where we really either don't think that it is a particularly important issue or that it was raised so long ago and we are aware of things that have occurred in the meantime that would no longer make it an issue. Okay?

So, what I really want to do is get a sense of priority. So, take a few minutes.

[Pause.]

Okay. Before we get into discussing these any further, I would like to go around the table and introduce ourselves. I think it is important for the transcription to hear our voices and our names. So, let's do that now.

I am Kathryn Coltin and I am with Harvard Pilgrim Health Care in Boston. I am a member of the committee and chair of this workgroup.

DR. EDINGER: Stan Edinger. I am with AHRQ and I am the lead staff with the workgroup.

DR. LUMPKIN: John Lumpkin. I am director of the Illinois Department of Public Health and a member of the committee.

DR. ORTIZ: Eduardo Ortiz, member of AHRQ.

DR. HOLMES: I am Julia Holmes from the National Center for Health Statistics.

MS. KANAAN: I am Susan Kanaan.

MR. HEFFRAN: I am Henry Heffran.

DR. JANES: Gail Janes, CDC, staff to the workgroup.

MR. AUGUSTINE: Brady Augustine, Gambro Healthcare, member of the committee, member of the workgroup.

MS. JACKSON: Debbie Jackson, NCHS, committee staff.

MS. WILLIAMSON: Michelle Williamson, National Center for Health Statistics.

MS. ADLER: I am Jackie Adler, NCHS.

MS. COLTIN: What I asked people to do for the people who arrived late was to take out the document -- Themes in Testimony and NCVHS Reports: Data Issues in Measuring Quality. What I would like us to do is go through this section by section and identify what we think are the most important issues raised in each section and those for which we would like to consider developing recommendations and, from a feasibility and practical standpoint, those where we believe we can develop recommendations given our roles.

I can start and get the ball rolling if you want and give you sort of my opinion or we can go around the table and do a sort of brainstorming session, just keep adding until we think we have a list of all of the issues that we think we would like to address in terms of recommendations.

I would suggest that we do it section by section.

DR. LUMPKIN: It seems to me that there are two broad issues that perhaps we can use with respect to context. One issue, when you read through this, is one we really don't address. As a committee we have made some significant progress in making recommendations on standards. So, the question becomes, at some point we ought to, in our recommendations -- when we talk about the barriers, which come up again, not enough data, not enough of the right kind of data -- talk about which of the committee's recommendations would actually solve those problems.

You know, the vocabulary recommendations that will be coming on standards and so forth. I think that would help set that in context and make this fit with other reports.

The second conceptual idea that I just have a little bit of difficulty figuring out where it fits in, but it is just like one of my biases, and that has to do with quality measurement, when the goal really is to enhance quality.

Quality measurement is a very fudgy tool that we use at this particular phase to move knowledge down to care. Hemoglobin A1C, what we want people to do is to test that when they are treating diabetics. Ideally, if we know last year that people didn't do it, it doesn't do anything for those patients, but the vision that we have through the NHII and others is that ultimately the information systems will have an interaction with the clinician before the patient leaves.

So, in some ways we ought to at least try to tie this into the NHII vision and talk about how these are really steps towards that vision. So, those are just sort of the --

MS. COLTIN: So, more the quality measurement would be an output of the process in which improvement mechanisms are already imbedded and the data would be generated along the way as opposed to it being the driver necessarily. I mean, is that part of what you are saying?

DR. LUMPKIN: Well, I think that quality measurement is a tool right now. It is not the endpoint.

MR. AUGUSTINE: The way I look at it is quality measurement is really looking -- we are looking at outcomes, just like industry. The manufacturing industry has moved towards looking at process controls, things of that nature, having gateways and evaluating against evidence-based practices, and looking at the process of care would allow us to maybe enhance quality at an earlier stage, rather than just looking at outcomes and then being reactive to those outcomes.

MS. COLTIN: That makes a lot of sense to me in a quality improvement model, but what about an accountability and choice model?

DR. LUMPKIN: I think that the biggest issue in accountability and choice has to do with variability and there are fudgy ways to try to reduce variability, which is what we are talking about. And "fudging" is in quotes because they are the best thing we got.

But ultimately accountability at some point, you begin to identify the markers. You would want to have the feedback much sooner to the interaction. So, if someone is starting to deviate from the norm, you know, there is a host of reasons why that may be, but someone ought to be looking at it. That is where the accountability is.

Perhaps we don't want to get tied up into doing that, but at some point we do want to tie into the fact that we do have a bigger vision and that this fits as a very important step towards that bigger vision. Right now, you know, I think to the extent that we agree with the approach that the greater input an individual has and being able to make wise decisions about the health care, including choice and the best way to do that is having some sort of way of evaluating the caregiver or the caregiving system. That is where accountability is.

MS. COLTIN: I don't think it should be versus. I mean I think they really team up. We have to be able to do both and the data have to support doing both. The best thing is if the data that are needed to do both are data that are collected as part of the routine practice as opposed to layered on after the fact and that is where I think the work that is going on in a lot of the other committees is really important; you know, the features, functionality that is built into electronic medical records, whether it is reminder systems or, you know, other types of tools. They all hinge on collecting the right data, having the right data as part of the system.

I mean, we know when we look at medical records today, that there are key pieces of information that are needed for quality measurement that aren't there. Will they mystically appear when we go electronic? Or will we simply take the problems that exist in paper records and translate them into electronic records? You know, providers routinely do a poor job of recording information about cognitive screening and counseling.

Will that improve with an electronic medical records system simply because we have made it electronic? I don't think so. We have to think about are those things important and are they things that, you know, we need to begin to routinize as part of the documentation of the delivery of care?

Otherwise, we are going to be doing the patient surveys all the time to try to get at that information because they then are the only available tools.

DR. LUMPKIN: But I think the answer to that question is yes, if they are designed the right way. So, if one of the areas where we think there are information gaps, which are important for patient outcomes, is one where clinicians either aren't collecting the data or are not recording the data, the automated medical record would then have a feature of assuring that that is there.

MR. AUGUSTINE: One other point that is in here but I see it is only mentioned at one place is the need for a business case. Most of the people who collect this data sort of worry about the bottom line. If there is not a lot of value, they don't want to -- in a lot of their systems. One of the things we did at my company is we have implemented Six Sigma in a dialysis setting, which is very process oriented. We sent a lot of people off to get their black belts and green belts and we mapped out the entire process of dialysis and have started looking at gateways in that process and doing a lot of analysis and have estimated that reducing some variation there and improving patient care can add about 15 to 20 million dollars to the bottom line.

DR. EDINGER: Have you also looked at the issue of -- I talked to Chuck Darby about this -- some of the grantees have done like the patient satisfaction, just sort of trying to get it on line, actually gathering some of the information from the patient, also in the medical record that could be used for some of these evaluations at the same time.

Usually it is the medical record information, but a lot of the patient background and, quote, satisfaction as to the -- usually is never captured, except maybe a year later when it is in the CAHPS survey.

MS. COLTIN: So, they are thinking about capturing it real time, capturing the patient's response to the interaction --

DR. EDINGER: Either there or -- either at the time if possible or shortly thereafter.

DR. JANES: Listening to this discussion, let me back you up then and ask you -- I thought that when I sat down and started listening to this that I knew what the recommendations were for, that they were the recommendations as to what would be in the quality report, at least the first round.

But it sounds like we have got a mixture of short term and long term issues that we are already looking at. So, are we going to be drafting recommendations about what the quality report is going to have in that first iteration or that second iteration or are we combining it with some long term issues in terms of where quality measurements should be going?

MS. COLTIN: I think what we are trying to do is identify what are the most important things that need to be there and then think about what the short term recommendations might be and what the longer term recommendations might be. So, I think that is more how I would see it progressing.

There are a lot of things that potentially could be fixed. Some of them may not be worth working on. Maybe there isn't a business case for improving some of these things right now and maybe the best way to improve them really is to wait for something that is in the longer term, rather than try to put band-aids on, you know, a broken system.

So, I think that is part of what we need to deliberate as we look at the problems. I think what I was trying to get us to do is identify what are the key problem areas and issues that we would like to make recommendations about. Once we do that, then we can decide what do we think -- is it worth doing something in the short term and making the recommendations in the short term or do we think that really this requires a longer term fix and we want to make a recommendation that relates to the longer term.

In doing so, I think we need to pick up on John's point. In some ways, some of that is in here because some of the themes were actually taken from other reports that the committee has done. Some of these are our own issues that came up either in the Medicaid managed care report or in the functional status report or in the NHII or the core data elements report.

I mean, we tried to pull themes and issues not only from the testimony, but from other work that had gone on in other subcommittees that we knew were pertinent to the issues that we were hearing about here. But we may not have gotten them all. So, I mean others who are familiar with those reports might say, you know, the -- report also recommended this and that would be great, I mean, to hear that and really bring the recommendations together so that we are not doing a report that, again, is kind of a side level approach that we came up with in our workgroup, as opposed to being aware of activities that are going on in the -- a more coherent picture.

MS. KANAAN: Just a couple of things. One is just to make sure that we are all on the same page, that we are not talking about the AHRQ quality report. We are talking about our own report.

I wasn't sure, Gail, when you said the quality report, whether -- we are talking about the NCVHS recommendations to the world, I guess.

The other thing was just to -- vis-a-vis what John was saying about the big picture in NCVHS are the recommendations. If you look at the sort of condensed outline, Roman numeral V, when Kathy and I had our latest conversation about structuring this report, it seemed to be a good idea to follow the recommendations with another discussion of what Kathy calls the opportunities for implementing them.

That is where we thought we would tie to -- you know, all of the synergies with the other recommendations. So, of course, NHII is included in that list and the other ones that Kathy just mentioned.

DR. EDINGER: When we get to the recommendations, one of them is obviously electronic medical records and what are the things you can do if you have one. But we have also got the issue of, if we don't have one sometime in the next 20 years, what do you do --

MS. COLTIN: Yes. I think that is why we are going to this document because we will talk about administrative data and say what is it that we could do to fix administrative data today. Are there short term fixes, things that are worth doing while we wait for electronic medical records.

Even once the standards are out there, is it going to take time for its diffusion of innovation to occur. We know that. So, it could be again several more years before we really see widespread adoption of electronic medical records, especially in small practices.

I think it will occur faster in some states than it will in others. I mean, the states that have more highly organized group practice structures are more likely to move more quickly; whereas, those that have a lot of doctors practicing in one or two person offices, they are going to need to see the business case for it.

So, I do see us making some short term recommendations.

Can we start with taking a look at the general questions. Are there any points that you see in that section that you think need to be sort of pulled out and highlighted for more attention in terms of, you know, potentially making recommendations?

I thought of two that we can discuss and then if other people disagree with me or -- one is the comment made in 1996 by NCQA, where they identified additional data elements or types of information that would be really important to have to further quality measurement, without making a particular recommendation as to where that should come from, whether it should come from an electronic medical record or administrative system, but really sort of what are the priorities of the types of information that are currently either lacking or difficult to access. I thought this was helpful in terms of, you know, sort of a categorization of these types of data elements; sociodemographic characteristics of enrollees. I think we can broaden that beyond enrollees.

This happens to be testimony given in the context of looking at measurement for health plans but it is really population characteristics more broadly in whatever setting you may be measuring; information about race/ethnicity, about educational levels so that you can target intervention materials more appropriately.

So, I think this list gives you an idea of data elements that are currently not widely available for quality measurement. Really, in many cases the medical record may be the only source and the medical record doesn't hold up really well on some of these either.

What do people think about that one?

DR. LUMPKIN: The other reason why this becomes -- you can look at a population and not see disparities and those disparities reflect differences in quality, which may not come out unless you stratify the database.

MR. AUGUSTINE: -- we see when we analyze the ESRD data, when we look at it broken down in that way. There are a lot of differences that come out that you don't see when you look at the big picture.

MS. COLTIN: We are starting to see that, too. I mean, even in some of the best health plans in the country that have wonderful overall scores, we are starting to dig down and separate the data.

DR. JANES: Are you speaking specifically of separating it on race/ethnicity variables?

MS. COLTIN: I am thinking of -- well, in this example that would certainly be, you know, one example of sociodemographic characteristics you might want to look at. But we have also found striking differences by the educational level regardless of race and ethnicity.

DR. JANES: Do your folks collect race and ethnicity at this point?

MS. COLTIN: We don't in our own files but we have done file linkages that have been able to look at some of that, with the Department of Public Health. We have linked our data with the Department of Public Health. So, we link to the birth certificate data. We can look at all of our deliveries by race ethnicity because the data is in the birth certificate.

We added questions to the BRFSS survey to ask explicitly what health plans people were enrolled in. We were then able to look at the BRFSS data by health plan and see differences because race and ethnicity data are collected in that process. So, you know, we have begun to do some innovative types of linkages to enable us to look at these questions. It is skimming the surface because we don't have the data routinely and we can't look at everything that way.

So, that is part of the reason I think it is a really important issue is getting that data and making it part of the routine data --

DR. JANES: Oh, I agree. I mean, I am in the middle of a project now with ANSI(?) Colorado, a huge provider out in Colorado and we would love to have race/ethnicity data and it ain't there. They don't collect it. They said we don't do that.

MS. COLTIN: Well, and there are barriers to doing it in HIPAA. So, I mean, to me that is an area where the Population Subcommittee will also be making broader recommendations as part of what they are doing. But it doesn't hurt to piggyback on that and make our own recommendations as well because their recommendations will be based on much broader issues than ours might be, but, you know, it depends on how salient a particular issue is at the time or whether it seems to be a really salient issue at the moment.

DR. ORTIZ: Yes. The only comment I would like to make about that is I agree that in principle we really need to learn to expand this kind of world of quality measures beyond what has traditionally been looked at. But on the other hand, I would also warn that we have to be careful; in saying that, we have to do a good job of what we are supposed to be doing now, with just basic quality measures. We need to be careful that trying to expand too much ends up being detrimental to our efforts.

I think on one hand I think it is important to support the fact that there are other data out there that we really need to get that could be helpful, but we really need to look at these things individually and say, well, what are the most important things that we need to do now because, otherwise you try and do too much and you really get nowhere. So, that is the only caution I would give.

MR. AUGUSTINE: The easiest way would just be to link these silos, to get the information that is in these disparate sources together so they can be compiled, understood.

MS. COLTIN: This particular recommendation has more to do with what data elements are currently available as opposed to, I think, the next one, which also is worth considering. The one about chief obstacles to data sharing gets more at what you were talking about. Yes, we have all these silos, and what are the problems in trying to make connections across these silos? It talks about issues about problems with information infrastructure and the ability to have data that are compatible and can be linked across systems. I know later on we will talk about the lack of standard identifiers that, you know, have taken so long -- we still don't have a national provider identifier to be able to link data -- and that there are now additional privacy concerns about data sharing and linkage and the protections that need to be put in place, and what impacts those may have on the ability to evaluate quality of care, particularly across settings, or to pool data across organizations.

Any one organization may not have enough cases to look at something. Across organizations it would. So, there are a lot of those kinds of issues that I think are broad and far-reaching issues that are encompassed by that next bullet as well.

So, I would recommend we consider that one for inclusion in making some recommendations related to that. I think these are two kind of really broad issues that are raised here; data elements and technical issues.

DR. EDINGER: -- interrelated to the sources of the data, which is No. 2?

MS. COLTIN: I think No. 2 is just a commentary about how to organize the rest of the information. Susan went ahead and organized it that way. That isn't really a recommendation.

DR. JANES: I thought you were pointing to the one below chief obstacles, which I had also starred, which to me kind of go together. What are the obstacles and what are the sources --

DR. EDINGER: -- on the gathering the data or -- they are sort of interrelated.

MS. KANAAN: As you can tell, this first whole list is kind of a -- I threw a lot of different things in this basket, some of which are points to make in the introductory narrative, like the point about looking under the lamppost. Some of them are useful lists that people compiled, like the ones that you just mentioned, and some of the things are just kind of thoughts about how to organize it. It is a hodgepodge.

MS. COLTIN: Anyone have other candidates for inclusion on that first list of general notes?

MS. KANAAN: I put Brady's point about needing a business case in this general list. So, I don't know if you want to flag that one, too.

DR. JANES: I think your point about thinking of this in terms of splitting out into two pieces, an overall narrative and then the recommendations -- and perhaps, you know, your points about the business case could go into that first narrative, because I also had noted the overall question of the extent to which outcomes are poor, why do we care, which I think all of these would fold into that.

DR. LUMPKIN: We would call it documenting the --

MR. AUGUSTINE: There are some business cases out there. That is one of the first things quality people get asked when they go to their CEOs. They will show me the --

DR. JANES: That flows into all these questions about when people say business case and everyone walks away from the table with a different idea of what represents business case. In public health, we pull out our cost effectiveness data and if you are working with private sector, they want a return on investment with a 12 month window.

DR. EDINGER: [Comment off microphone.]

MS. COLTIN: Yes, it is a different time frame. You are right.

MR. AUGUSTINE: One other point that is important to me personally, and it is not important on the population base, but when you start looking at subpopulations, is getting good risk and severity adjustment data. Quality is hard to measure and harder to hold people accountable for when they are always going to say, hey, my patients are sicker. Isn't that the argument? If we don't have good data for that, a lot of the comparable measures where we are looking across domains are not going to be as valuable or as actionable.

MS. COLTIN: I think that you are absolutely right. A lot of what motivated some of the suggestions in the NCQA list were data elements that they thought would be really important for risk adjustment. So, test results, for instance, have multiple purposes. They can actually be the outcomes measured in terms of, you know, the results -- that they want to see. They can be simply evidence that the patient actually has a test that was recommended because you can have an order versus a result. They can also be in terms of the value used as a risk adjuster or a severity adjuster when you are looking at patients who have diabetes or other problems.

MR. AUGUSTINE: I am kind of in an enviable situation in our company because we see our patients three days every week and we have social workers and dieticians and nurses, doctors and we capture everything you would want to know about our patients from any of the financial information, caregiver, health status, patient satisfaction. So, being a statistician and economist, I kind of have a fun time playing around with that treasure trove of information.

DR. JANES: Why do you see your patients three times a week? Is it a dialysis --

MR. AUGUSTINE: Dialysis.

MS. KANAAN: Is the issue with risk adjustment both the need for better data and better methodology, or is it really just data?

MR. AUGUSTINE: It kind of falls into actual measured ones as well because if it is not something that is going to -- you can enable buy-in from the providers.

MS. COLTIN: I think there are a lot of arguments about that that I wouldn't necessarily want to get into in this report. There are a lot of philosophical arguments about when to adjust and when to stratify and what do you hold providers accountable for. You lose the whole picture on disparities if you adjust for race and ethnicity as opposed to stratify -- so, I think I would rather think about this issue more in terms of what data are needed to be able to do either one when it is considered appropriate to do it, but not get into the philosophical discussion of when is it appropriate to do it.

So, that is why I was saying, again, I will bring the case mix discussion back and the NCQA recommendation about the data elements and, related to that, that a lot of these data elements are needed for multiple purposes, including case mix adjustment.

Can we move on then to the administrative data issues? I had five that I thought were important to highlight. I would like to hear what other people chose and see whether we are -- we converge on what we think are important.

DR. HOLMES: I thought the code set limitations were a very important area, not only in terms of lack of standardization or lack of available codes, but also, for example, for certain types of benefits -- you know, fee for service. If they don't cover an office visit, then data will not be collected that a patient had an office visit, because administrative data are primarily used for payment purposes.

So, I think that is a very important issue.

MS. COLTIN: I would agree with you, too. This came home to me the other night. We were presenting a project on -- medical group performance to our physician board and there was a doctor there from Cape Cod, where most of the physicians' practices are sort of cottage type -- and he said we have a grant down on the Cape for a mammography screening center and a lot of our patients get their mammograms there. And he said I get a copy, but I don't send you a bill because they pay for it.

So, he said my data are going to look like I am not doing mammograms and, in fact, I am because there is no way through the current -- submission process to submit a code and say this isn't for a bill. This is just for information.

You know, a lot of them actually would like to do that because they like the fact that we will give them back their performance -- because they don't have the system to do themselves, but they don't want to get data back that, in fact, don't -- performance represents a data problem. This is an example. If there were codes that allowed you to capture this type of information, not for billing, but simply to say -- you know, for oversight. Then they would be able to put that mammogram on the bill along with anything else that they were doing at that visit, simply as a notation that you had it and we would be able to collect that information and use it.

Now, granted, that is not the most efficient way to handle this because it is an extra step of recording an extra piece of information, rather than having it come from an electronic medical record. But as an interim step, it is something that with pay for performance gaining some momentum, there is some motivation on the part of the physician to provide that kind of information.

DR. HOLMES: Sure. You could make a good case for it.

MS. COLTIN: There are bonuses and things attached to performance on these measures.

MR. AUGUSTINE: That is what happened with HEDIS in the first four or five years. It wasn't really measuring improvement. It was measuring improvement in data collection. Mammography information was one; I didn't pay a lot of attention to it, but they were going out and finding these mobile vans that were providing immunizations, getting that information, and entering it as encounters or whatnot.

MS. COLTIN: For some of them it was and in some cases the doctors didn't have that information. So, getting that information actually potentially does improve quality of care. I mean, I think having better information sometimes is a reasonable goal in and of itself so that you can make better decisions. But you are right. Some of the early improvements, I think, were improvements in data collection.

MR. AUGUSTINE: And a good point you make is the fact that physicians, the holders of the medical records, should be the drivers of this and not the HMO or the --

MS. COLTIN: The AMA announced a couple of years ago that they were going to adopt what they call the Category 2 CPT codes. There were two types of codes that were going to be covered in Category 2. One was for new technology, so that before a code met all the criteria to be a real CPT code for billing, they could use the Category 2 codes for these new technologies. That way, first of all, they could get some sense of volume of use, which is one of the criteria for qualifying for a regular CPT code, and they could do the effectiveness studies.

The other type of code in the Category 2 codes was performance assessment codes, so that there would actually be a category of CPT codes that could be used to record that a patient had something done -- or even to record an outcome level, like a blood pressure level that falls within a given range.

Those codes had been suggested through a process that the AMA had initiated under their CPT-5 development. It was all agreed. It went up to the editorial panel. It got announced. Little glossy brochures got mailed out saying that they were going to exist. As far as I know, none have actually been approved and implemented at this point.
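The performance assessment idea described here -- a code that records an outcome falling within a stated range, like a blood pressure band -- can be sketched as a simple range lookup. The code values and cutoffs below are hypothetical, since, as noted in the testimony, no such codes had actually been approved at the time:

```python
# Sketch: mapping a measured value into a range-based performance code,
# as described for the proposed Category 2 CPT codes.  The code strings
# and cutoffs are invented for illustration only.

PERFORMANCE_CODES = [
    # (lower bound inclusive, upper bound exclusive, hypothetical code)
    (0,   130, "BP-RANGE-1"),   # systolic under 130
    (130, 140, "BP-RANGE-2"),   # systolic 130-139
    (140, 999, "BP-RANGE-3"),   # systolic 140 and above
]

def performance_code(systolic):
    """Return the range code documenting an observed systolic blood pressure."""
    for low, high, code in PERFORMANCE_CODES:
        if low <= systolic < high:
            return code
    raise ValueError("reading out of range")

assert performance_code(118) == "BP-RANGE-1"
assert performance_code(142) == "BP-RANGE-3"
```

The design choice worth noting is that the claim would carry only the coarse range code, not the raw clinical value, which is exactly what makes such codes usable for performance measurement without turning the claim into a medical record.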

DR. LUMPKIN: I just wonder -- and I think perhaps we just need to talk about methodologies -- because, you know, a separate code is fine. A check box in the 837 that says the following codes are purely for documentation purposes, using just the standard CPT-4 codes, would work. I think there are a number of ways, but what we need to do for this point is to point out that that is important data that is missing, because there are other ways that people receive services other than just the standard billed encounter.

DR. JANES: This speaks very strongly to a lot of preventive services, and a lot of performance measurement is of preventive services. You touched on a couple of -- certainly adult immunizations are a big issue.

What about the note you have got at the bottom. Data collection hasn't been streamlined, multiple collections. Could you sort of fold those two together? Because I think at some level that is what we are talking about. That is the data that we require is coming in in different pieces and there are different drivers and how do we kind of pull it all together.

MR. AUGUSTINE: One of the issues on here that I marked that I think is really important -- and I know it is a political hot potato -- is the standard identifiers. I mean, when you look at how people are bouncing around from health plan to health plan -- what is the average length of enrollment in a health plan, a year, two years? So, doing any long term observational research is very, very difficult if you can't bring information together from different health plans.

MS. COLTIN: Yes, and I think that relates to the second point, too, about privacy concerns around data sharing and linkage, because even if the identifiers existed, there is still the problem of linking the data across organizations when a patient leaves one organization and goes to another. That is not just leaving an HMO. That is changing doctors or whatever. Now you can sign releases and the paper records are transferred. There will need to be mechanisms for transferring that kind of information when patients have electronic records. I think that is a big part of what we heard yesterday in terms of --

MR. AUGUSTINE: And even in a single MCO, you may have an issue with the lab data, the pharmacy data. They may have different identifiers as well.

MS. COLTIN: That is right.

Other key points there?

DR. LUMPKIN: I would suggest that we would recommend a series of hearings on those, beginning in 2004, in July of 2004.

MS. COLTIN: About the identifiers?

DR. LUMPKIN: Yes, because I cycle off the committee in June of 2004.

[Laughter.]

MS. COLTIN: I think we would need to take more testimony around the code set limitations, too, and I think that is probably something best done jointly with Standards and Security -- not something that we go off on our own, but something we either ask them to consider or do jointly.

DR. LUMPKIN: Actually, in the NHII group yesterday, as part of our agenda, we looked at the issue of identifiers from the patient personal health record aspect -- identifiers to be used to develop your own record. I think we are going to try to approach it from that way.

It may be a different approach than trying to deal with it the way we did before we got our lunch handed to us.

MR. AUGUSTINE: One of the things that we have been talking about a lot in Standards and Security is the code gaps. Like you saw yesterday, when Alternative Link presented, we have been looking at a lot of different issues, and it would be good maybe to do that and discuss any gaps that you see that we need to have addressed.

MS. COLTIN: That came up in the testimony we heard from the behavioral health folks, mental health and substance abuse, that they didn't feel the code sets adequately captured what they do as well. So, I think that is a broad one, that code set limitations where there are a lot of separate pieces of it that we could actually tease out and talk about.

I would actually say, in thinking about the obstacles related to administrative data, that the code set limitations -- and, going back to the earlier list of things we would like to have, items like the sociodemographic characteristics -- relate back to this area of limitations in administrative data; the fact that we don't have race/ethnicity in administrative data, for example.

So, even though that comment is in the general note section, I think there are particular data elements, referenced in that comment that can be brought in under the administrative data section as well.

DR. EDINGER: We have to look at the issue of gathering the medical history, because some of the sociodemographic information and some of the other data are in the medical history. That usually is not captured in most medical records systems, because they mostly capture the output from the physician rather than the input from the patient side. That is a whole different set of problems, I know, but --

MS. COLTIN: I think I would address medical history more in a data linkage framework. Everybody has got a piece of the history and nobody has the whole picture. It is sort of like, you know, the personal health record, with different providers having a piece of it and different insurance companies having a piece of it in terms of claims histories.

So, my sense is I think what they were getting at when they made that comment was everybody has got a piece of it, but nobody necessarily has the full picture in terms of a fully integrated medical record that contains all the history or all the information that may be held by multiple providers.

DR. EDINGER: No, I was thinking in terms of what John mentioned about the National Health Information Infrastructure. One way of approaching it is for the patient to have an organized format for their own medical history, because every time you walk in, you are asked your medical history, wherever you go. You can go to three providers in the same clinic and they will each ask for your medical history. It might be a nice tie-in for getting patients to buy in to having a format for gathering their information.

MS. COLTIN: [Comment off microphone.]

DR. LUMPKIN: What we are looking at around identifiers is that at every encounter you have a choice -- and choice has to be put in parentheses -- of using your identifier or having a non-identified encounter. The reason why the choice is put in parentheses is that if you use a non-identified encounter, you are going to have trouble getting reimbursed for it.

So, you know, there are some methodologies, and we are going to do some hearings on those -- not next year, but probably soon -- about those kinds of issues, approaching it from the patient viewpoint, giving the patient more control over their medical experience.

MS. COLTIN: Just taking a look back at the list of data elements that is in the NCQA comments, I mentioned sociodemographic characteristics as being one area where some of those -- maybe not all, but some -- could be captured as part of administrative data sets; enrollment files, for instance, or registration files in the provider setting.

Are there others on that list that you believe belong or should be commented on under administrative data limitations?

DR. HOLMES: Well, certainly results. Very often, test results are not included in administrative records -- for example, insurance administrative types of records.

MR. AUGUSTINE: What I think I was saying earlier is that a lot of times MCOs don't even have the data in house. They use a PBM or something, and they don't even have medication data in house to analyze, to bring together with their other data sources -- and the same with the lab data, lab results, the same with behavioral information.

DR. EDINGER: In terms of the med data, would you include not just the drug, but the dosage?

MR. AUGUSTINE: All of the complete data set.

DR. LUMPKIN: I just want to clarify or understand that we are not necessarily talking about making administrative data more robust, but we are looking for those things that are not part of administrative data. So, for instance, lab results -- while they may be important for quality, I don't think we want to attach them to administrative data unless there is some specific reason for that.

DR. HOLMES: Or make some sort of linkage possible so that the administrative record can be linked to test results.

MS. COLTIN: There actually are two ways -- I mean, in the 837 there is actually a field for test results. It is a paired field where you put in the CPT code and then you put in the result, and it is a loop, so you can put in as many CPT codes with results as you want. So, in a lab contract where a plan is getting claims from a lab company, if the lab were willing to provide that data, the 837 actually already accommodates it. It is just that most labs want to charge for providing the data, so most insurance companies are not buying it right now because it is quite expensive.

But it is not necessarily a limitation in the transaction. It is a limitation in that most payers are not going after it. The other thing is, my understanding is that when we see the claim attachment standard come out, there is a claim attachment format for lab results, so that if you got a bill for a hemoglobin A1C, you could generate a request for a claim attachment that would provide the result --
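The paired code/result loop Ms. Coltin describes can be pictured as service lines that optionally carry a value next to each CPT code. This is a simplified stand-in in Python, not actual X12 837 segment syntax; the CPT codes shown (83036, hemoglobin A1C; 80061, lipid panel) are real lab codes, but the structure is illustrative:

```python
# Sketch: each service line pairs a CPT code with an optional test result,
# mirroring the repeating code/result loop described for the 837.
# This is a conceptual stand-in, not real X12 syntax.

claim_lines = [
    {"cpt": "83036", "result": 7.2},   # hemoglobin A1C with its value attached
    {"cpt": "80061", "result": None},  # lipid panel billed without a result
]

def reported_results(lines):
    """Pull out only the (code, result) pairs that actually carry a value."""
    return [(l["cpt"], l["result"]) for l in lines if l["result"] is not None]

assert reported_results(claim_lines) == [("83036", 7.2)]
```

Because the result field is optional per line, a payer receiving such claims can measure outcomes where labs supply the value and still pay claims normally where they don't.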

DR. LUMPKIN: My concern is that by doing all that stuff, essentially the payer is assembling the medical record, and, you know, if the conceptual model is minimum necessary, you have exceeded it. So, while there is information that is valid and important for quality, it really has no need to be connected with the administrative record. We are just doing that because that is all we have.

So, I think in our recommendations we have to be very careful that the data that is used for quality measurement should be stripped in many ways or held separately from data that is used administratively. You know, who are you going to renew the plan on? You know, if you are looking at all this other data, you are making decisions that ought not to be made with that information.

MS. COLTIN: That is a good point in terms of how it gets generated. Whether it is generated as part of an automatic process or whether it is generated as a side process that makes use of those channels but keeps the data separate.

DR. JANES: Which is pretty much the way it is now.

MS. COLTIN: Well, I don't think those channels are being used.

DR. JANES: No, but they could be, and the data coming in on an 837 could be stripped off and put into a separate database, not become part of the claims payment database, for example.

DR. LUMPKIN: Differentiating the data needed to adjudicate a claim versus those that you need to monitor a claim.
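Dr. Lumpkin's distinction -- the data needed to adjudicate a claim versus the data used only for monitoring -- amounts to splitting each incoming claim line into two stores, as Dr. Janes suggests. A sketch of that routing, with purely illustrative field names:

```python
# Sketch: as a claim line arrives, route the fields needed to pay the claim
# to the adjudication store and the clinical detail to a separate quality
# store, so the payment side never holds the clinical values.
# Field names are illustrative, not drawn from any standard.

ADJUDICATION_FIELDS = {"member_id", "cpt", "charge"}

def split_claim(claim_line):
    """Partition one claim line into (payment view, quality view)."""
    payment = {k: v for k, v in claim_line.items() if k in ADJUDICATION_FIELDS}
    quality = {k: v for k, v in claim_line.items() if k not in ADJUDICATION_FIELDS}
    return payment, quality

line = {"member_id": "A123", "cpt": "83036", "charge": 14.50, "result": 7.2}
payment, quality = split_claim(line)

assert "result" not in payment   # the adjudication store never sees the value
assert quality == {"result": 7.2}
```

The same channel carries both kinds of data, but the partition happens at intake, which is the separation the discussion is asking for.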

DR. ORTIZ: I think that is an important point. Now, not everybody, but many do have databases, you know, that collect the pharmacy data and the lab data, and it is a matter of the people who are involved in quality being able to access those and link them. But I don't think administrators and those people need to be accessing this type of data. That would be very bad.

MR. AUGUSTINE: The company I work for has got like 600 dialysis clinics all over the world, and we capture all this information. One of the first things I did in my company was we built a data mart that brought together the med lab, or the pseudo EMR, and our administrative data and put it into one data warehouse, where we developed outcomes reports, process reports, physician profiling, things of that nature. It is a big, good tool to use, but it takes a while getting that information all on the same page and doing all the mapping and understanding the business rules, because what means something in one place may not mean something somewhere else.

MS. COLTIN: Okay. That is a really good point. Let's make sure we highlight that if we are asking to have the data come in, we talk about how it ought to be handled when it comes in.

MS. KANAAN: I guess I will, if I may, just say one thing. I am still trying to get the big picture of this whole area. So, I am coming at this a little differently than the rest of you, who know more about it. Also, as I think about how to help you write this report, the way I think about it is generally what do we need to know, you know, and why, which we have in our outline. Ideally, how do we get it? Or ultimately, how do we get it?

Then in the short run how do we get it in the meantime? You know, in a way it would be helpful to me if it is not totally inconsistent with your process today, if we kind of followed that thought process a little bit, rather than kind of building the report by accretion, you know, bit by bit. So, if we can kind of --

MS. COLTIN: I would basically say anything that we are talking about right now, which is changes to administrative data, falls in that latter category that you were talking about -- while we are waiting for an electronic medical record, here is something we can do. So, I think we are kind of proceeding this way because that is the way this is organized in going through it. But I think you are right. Maybe as we talk about each one, we need to tell you which category it falls into. This whole discussion around administrative data, I would suggest, falls into what do we do in the meantime. Okay? Even though we are discussing it now first, it is really down the line one day.

Was there anything else that anyone wanted to point out from that section as an issue we should highlight?

MR. AUGUSTINE: The payment arrangements one is a common issue, though it is a lesser problem now. There is less capitation, but capturing services in administrative data under a capitated arrangement is difficult because there is no incentive for providers to send in the information if they are receiving capitated payment.

Of course, people will come up with measures now. We had what we used to call a capitation corridor. We would pay you a PMPM(?) and then plus or minus 20 percent based on quality and utilization measures -- the quality measures ensuring that you sent in enough information to show you did a certain amount of services, and the utilization measures making sure you didn't send in too much, to try to hone in on exactly what services were provided. But that is definitely something on the administrative side that could still be a concern.
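The capitation corridor Mr. Augustine describes -- a base per-member-per-month rate adjusted up or down by as much as 20 percent based on quality and utilization measures -- is simple to state as arithmetic. A sketch (how the adjustment score would be computed from the measures is hypothetical; only the plus-or-minus 20 percent cap comes from the testimony):

```python
# Sketch: a "capitation corridor" payment.  The base PMPM rate is moved
# up or down by a quality/utilization adjustment, capped at +/-20%.
# How the adjustment is scored from the underlying measures is not
# specified in the testimony and is left abstract here.

def corridor_payment(base_pmpm, adjustment):
    """Apply a quality/utilization adjustment, clamped to the +/-20% corridor."""
    adjustment = max(-0.20, min(0.20, adjustment))
    return round(base_pmpm * (1 + adjustment), 2)

assert corridor_payment(50.00, 0.10) == 55.00   # good scores: 10% bonus
assert corridor_payment(50.00, -0.35) == 40.00  # clamped at the -20% floor
```

The incentive logic is in the clamp: a provider who under-reports encounters scores poorly on the quality side and loses payment, which is exactly the pressure toward complete data submission being described.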

MS. COLTIN: Well, this is one of those cases where I looked at it and I said, yes, it is a problem. Where does this fit? We can't tell health plans how to pay and what their payment policies should be. But we can look at what some of the data accommodations could be. I think it is more important to create the incentives to submit the data and let them figure out how to get it -- but do the data systems accommodate being able to do that?

I think one of the issues -- another big one that comes up, whether or not you are under capitation, even if you are under fee for service -- is bundling of information. You know, you get this one global code, and the plan's payment arrangement with the doctor is, well, you don't have to send us separate immunization codes to bill for the administration of the vaccine. Just give us a well child code and we build that into how we price it.

Then you never get the CPT codes for the immunizations. So, you can't measure it. So, you end up either, you know, having to do medical record reviews or surveys or other means of doing that. So, a lot of that is payment policy issues that are developed, you know, in the health plans. Again, we can say we think that payment policies should encourage the submission of complete data, but we don't have any control over that.

MR. AUGUSTINE: Don't you mean in the contract specifications like when they negotiate a global rate for pregnant -- for delivering a child and that includes everybody and they --

MS. COLTIN: That is the kind of thing people raised in the --

PARTICIPANT: Contracts were another big issue --

MS. COLTIN: Right. This is an area where I think it is worth spending a little bit of time, because it comes up with some of the other types of data sources as well. You know, do we think it is worth our while to make recommendations in areas where we don't necessarily have any influence? These are things that have to happen in the private sector, and we can say there is a problem. We can point it out as a problem and, you know, perhaps make a value judgment, but in terms of specific recommendations about what could be done, do we want to go there? So, I would open that.

DR. ORTIZ: I was just going to say that in a way it sounds to me like it might be better to broaden that out to disincentives or something, and then we can recommend what some of those are. I mean, payment may be one way of doing that, but there may be other methods of incentivizing the collection of this data beyond just financial arrangements, and I think that might be a good way to frame it. Although we can't force people to do that, we can make recommendations about things that maybe they just haven't even thought about. They didn't even know it was really an issue.

DR. LUMPKIN: I think the other piece is to talk about the post-October 2003 world. I doubt that there are very many providers who are solely capitated; they may be capitated in relationship to one plan. So, they are going to have the capability, either directly or through contract, to generate an 837. So, we can make a recommendation that they utilize that methodology for documenting things that are done, even under a capitated or other payment arrangement, because the cost to do that should change if they are routinizing their practice to generate the 837s.

MS. COLTIN: There is wide variability and misperception about what capitation does to the submission of data. I think a lot of it is a contractual issue. I mean, for years we had huge segments of our delivery system capitated, and we received very good, very complete encounter data from every one of those doctors, because we readjusted the capitation every year based on those encounters and that information. We prized them.

So, it is not capitation. It is how you implement it.

DR. JANES: I clearly stand at quite a distance from this world, but, I mean, at least in our shop our assumption from day one, as we watched the move to capitation, was that the encounter data was going to look basically like an 837, and then everybody was going to start to collect these things, if for no other reason than because, particularly under a capitated situation, you are going to be doing utilization review up the wazoo to find out who is spending what and where you can economize.

So, I still don't entirely understand why in some settings you do have that encounter data and in others people go, hey, we are operating under a capitated system. We don't collect those data. You know, it still leaves me --

MS. COLTIN: It amazes me, too.

MR. AUGUSTINE: If anything, just to understand whether capitation is adding value to the organization, not to collect that information just amazes me.

MS. COLTIN: I think we moved away from it for a lot of those reasons, but I have heard an awful lot of physicians say that while they don't like bearing risk, the incentives under capitation were actually greater, because they could innovate and do things differently. They could do e-mail. They could do things without having to worry, oh, I don't get paid for this.

MR. AUGUSTINE: The reason I understand it didn't work very well was because physicians' offices and practices didn't have the ability to really manage risk from a financial perspective, like stop loss. Some catastrophic patient would come through and really throw them for a loop, and they just didn't have the information, the data gathering, the monitoring ability to really manage it, and those catastrophic patients were very difficult.

MS. COLTIN: I think that is right. I think it goes back again to the data. I mean, either the plans don't have the data, and so they just increase the capitation when the groaning gets loud enough on the provider side, or the providers don't have it -- the plan is getting all the data, but the provider is just kind of passing it through and not keeping it, not having any kind of data repository. So, they can't look at their own experience and they have no clue how they are doing.

So, you know, a lot of it is sort of people's appreciation for the importance of data. But I kind of want to bring us back to -- let's say they even appreciated it. Let's get back to the fundamentals. Even if they had it, where are the doors because that is really what we are trying to focus on here. I want to bring us back to the nitty-gritty of the trenches, rather than the big issues.

But, you know, what I was putting out there was really this question more broadly of do we only make recommendations about the nitty-gritty issues or do we want to acknowledge that there are other things going on that affected this? If we got this perfect, if the administrative data did have all the data elements we wanted or had test results and whatever, do we ignore the fact that there are arrangements that may, in fact, discourage the collection of the data, the submission of the data?

Do we want to weigh in on that at all?

MR. AUGUSTINE: The two components of good data are completeness and accuracy, or specificity, in the sense that you want to have as many data elements as possible describing an encounter, and you also need to have the completeness of the data set.

These are the two areas we are talking about. Up here we are talking about having all of these data elements, but here we are talking about completeness of the data set, which is -- you know, if we are going to make a recommendation that they collect these data fields, we should also make a recommendation that they have a means for collecting encounter information or other information as well.

MS. COLTIN: So, you are saying regardless of payment or contracting arrangements, it would be important for organizations to collect encounter data for purposes of quality measurement and utilization analysis and identifying disparities and all of those good things.

All right. Can we then move on to surveys? I think perhaps what I will do, because I don't want to miss out on the opportunity to talk about the recommendations to AHRQ, is ask each of you to go through this list -- you should have this electronically; if you don't, we will send it out again. Maybe what you could do is use the highlighting feature if you want. Highlight the ones in each section that you think would be important for us to develop recommendations around and e-mail those to me.

I will create a list based on that, send it out again in the format of a table and what I would like to do is identify options for things we might want to recommend. So, I will try to group these where it seems to make sense and create a column, you know, in a table, where you can then add in suggestions for areas where we might want to make recommendations. I will get it back out to you.

I am hoping that, you know, in between now and November we can do some work by e-mail, if that is okay.

DR. JANES: Remind me of what the time line is, please.

MS. COLTIN: Well, we were hoping to have a draft report to discuss in our work group at the November meeting and then to make some changes in that and probably circulate it more broadly in the next draft before the February meeting, for some initial discussion with the full committee. In all likelihood, they will send us back to make some further edits and additions. But that is the time line we are looking at.

Is that right, Susan?

MS. KANAAN: I think we have moved things back a little bit from our original --

[Multiple discussions.]

MS. COLTIN: I think I have everybody's e-mail, but if not I will send it to probably Debbie. Then you can just forward it to everybody. Okay. Great. That will be terrific.

So, again, once you have it electronic -- I don't care how you do it, whether you want to underline it or bold it or highlight it or whatever, but as you go through each one, just pick out the ones that you think are the priorities, the ones that we should comment on. If you have any comments you want to make about them, you know, you can obviously insert some comments there, too, or even things like that you think it could be grouped with something else would be helpful.

I will get that back out to you. If you could get those to me by the end of next week, would that be reasonable for everybody? Then I will try to turn that around and get it back out to you within another week with a column for recommendations and then if you can take a look at that, I will give you two weeks roughly. We will be at that point maybe around the third week in October that I would like back from you suggestions about potential recommendations. I can then compile that and that is what we can use as the basis for the draft recommendation section in the report.

It will need a lot of work and a lot of refinement, but that is what we will plan on spending most of our time at the November workgroup meeting doing. Okay? Is everybody all right with that? All right. Great.

Then I want to move on to our next agenda item, which is to discuss whether we want to make specific recommendations to AHRQ around the National Healthcare Quality Report.

We were asked to help AHRQ -- Ed, if you want to come up to the table and join us, that would probably be good -- we were asked to assist AHRQ in gathering information from the field at large around the draft of the National Healthcare Quality Report, particularly on the framework and measures that were being proposed for the report, not the text of the report itself.

We held a hearing in Chicago on July 25th, where we took testimony from some of the larger provider and payer organizations. Then separately AHRQ put the draft on its web site and solicited public comment on its proposed framework and measures.

So, I guess the first thing that we need to decide is in the past our role has been to gather input from the field at large and synthesize it and make recommendations. Input from the field at large has come in through two separate vehicles, directly to AHRQ and through our testimony at the July meeting.

Should we, in fact, do what we did with most of the HIPAA standards, just summarize the testimony that we took and allow each of the agencies -- they got lots of public comments through other vehicles -- or should we try to work with AHRQ on synthesizing all of the information that came in through the public process, look at that and say do we want to weigh in on any of the issues that were raised, whether they were raised in our hearings or whether they were raised in the public comment process? Do we want to weigh in on any of those as a subset or as a workgroup and make recommendations that the full committee do so? That is really the question on the table.

First question, do we want to weigh in with our own recommendations or do we simply want to have been a facilitator or conduit for getting this information to AHRQ? If we want to weigh in with our own recommendations, do we want to base those simply on the testimony that we took or do we want to try to work with AHRQ to get some information about areas that were raised in the public comment period so that we can see a full picture of what is coming in from the field and then weigh in on those things that we think are most important?

Preferences?

DR. JANES: A point of clarification. Since I didn't join until relatively recently, when you talk about the testimony we took, the only one that --

MS. COLTIN: July 25th.

DR. JANES: And that is it?

MS. COLTIN: That is all there was.

Why don't I listen to the opinion of the folks from AHRQ? What would be the most useful to you?

MR. KELLY: Well, for those of you I don't know, I am Ed Kelly. I am now the keeper of the National Healthcare Quality Report. Tom Riley, the director for the report, has gone on to greener pastures. He has got a nice job at CMS.

But things have been moving forward on the report nonetheless. I guess we would -- our sort of approach to the report and feedback has been the more the better and this group knows a lot of the issues almost better than any of the groups that we have asked for feedback from. We have three strategies in general for getting feedback on the report.

If I could back up, we sort of felt that one of the biggest accomplishments of the report team so far has been getting consensus on what are the important things to talk about in this report -- getting consensus that these are important things to measure and, you know, to debate at a national level.

So, having an open process was really important, and actually I was struck that universally all the people who testified out in Chicago were really appreciative of NCVHS for putting this on for AHRQ, for taking the comments. That said, with that in mind, we would love it if you all would weigh in with some information and opinions.

The status of the feedback, you already have the July information. The status of the other feedback -- the reason we had to do another one is because ASPA had done the announcement for us for the July meeting, as they do for meetings in general, for HHS. Technically speaking, they could only do a Federal Register announcement for the meeting itself. They couldn't do an announcement also for feedback on the measures set. Anyway, that is the way it was. It had to be a separate announcement. By the time we got that done, we had to extend the deadline until September 18th. There was no other reason other than we had sort of two sets of information.

September 18th was the closing date for that kind of second extended deadline and we are now compiling a region book on this, which has -- as we said in our announcement about July 25th, it will be made available at the Parklawn HHS building for public viewing and then we will be putting it together for this group and for our interagency measures working group that will look at it and consider what to do.

That will probably be available next week and we can, you know, as soon as it is done, make it available to this group. The time line is now -- you know, we have had about three years working on this report and we are now getting down to actually when we sort of have to write the thing. That is the rough part about doing a report. You have to write it. You spend a lot of time thinking about the design of it and then you actually have to put words on paper.

But because of the way the clearance process works, April 1 is sort of our drop dead date for it to go into clearance. We have allotted six months for clearance. You never know how it is going to go, but that is basically how we see it playing out. People who have done this a lot tell us that is what we should do.

So, working back from that, that means we have to be active in making some decisions about this preliminary measure set and what will actually be in the report. Obviously, I think this group's opinion, whenever it came, we would be able to figure out some way to consider -- obviously, the earlier the better. The status is that next week the binder -- the feedback information -- will be ready; a week to two weeks after that, the Interagency Measures Working Group will meet. We had a large interagency group that gave us advice in general on the report, and a subgroup of that, the measures group, which in part derived from some of the QUIC(?) members, gave us specific advice on quality measures.

That group will meet, make some summary of the feedback and make a recommendation to the larger group, which will then meet sometime in late October or early November. So, I just mention these points along the way. There are obviously different points where we could consider this group's input. Following that, it will then go to our management group at AHRQ.

So, those would be three points. The measures subgroup will probably be coming up a little quick for people to be able to weigh in. The bigger group, that might be an appropriate time and then we can let you know when we set that up and, obviously, our consideration will be ongoing.

MS. COLTIN: Well, it sounds like -- I mean, we have at least served a function of the facilitator and channeled some information to you through the July meeting. Each of us, if we were at the July meeting, had an opportunity to make comments ourselves. I know I made a couple of comments at the meeting. We have also had an opportunity to comment as individuals through your web site had we chosen to do so, and I know at least one of us did, not me.

So, I think that the main additional purpose we might be able to serve is in weighing in on sort of the synthesis of the comments that you are getting -- which ones we think matter. You may have heard things from one person, or you may see an emerging consensus in some other areas. Where we might be most useful is in helping to think about those areas of consensus.

I am sorry. Did you --

DR. ORTIZ: I was just going to comment. Having not been involved initially but just hearing this for the first time and being part of AHRQ, I tend to be a real pragmatist about what can really be accomplished and what is going to do the most good in terms of effectiveness and efficiency. It seems to me it would not be very efficient to generate a whole new report of recommendations. I mean, it seems like there are things you could do. If it hasn't already been done, a summary of the hearings could obviously be presented.

There could be a short document that just has whether there may be additional recommendations that we made, you know, on top of --

MS. COLTIN: Usually it is just a letter.

DR. ORTIZ: Yes, I mean, it could be a letter or something like that. Then, just as we were talking about, to the extent that this group can be involved from this point on more efficiently and effectively -- whether it is one or two people being represented in some of their ongoing work between now and then, sitting in at some of the meetings, or just reviewing some of the documentation -- that seems like a more efficient and effective approach, versus us trying to sit there and generate another report, which is very time-consuming and may not be very effective by the time we get it generated.

MS. COLTIN: I actually had no intention of suggesting we generate a report. What I was suggesting was maybe a letter, you know, a one or two page letter that says here are some of the key points that we heard that we think AHRQ should seriously consider, you know, based on what we heard and that we endorse these recommendations that we heard. That would be it. So, it would be kind of culling through a lot of the comments to identify those that we believed represented not just a consensus, but a valid consensus, you know, around an issue that we believe is important and of which AHRQ should take note in making its decisions.

So, that would really be more the process kind of thinking and we could do that now, based on just testimony, but if we were to do it based on the broader input, then we would need to wait for this document that you said would be available in about a week.

So, I guess, you know, my sense is, what would the preference be? Would it be worthwhile to just get a letter from us saying these are the things we heard at the testimony that we think are important and that you really ought to consider, and we support these recommendations that were offered at that time -- or should we do it based on a broader range of input, including your public comments?

DR. EDINGER: Would it be helpful to you also to have a sense of what you can do now versus what you can do in the future? Because, obviously, some of the things you are not going to be able to do anything with in three months. There are other things that might be useful for the next iteration. Would you want that now or would that be more helpful at a later date?

MR. KELLY: No, I think that would be good. My own feeling is that it would be real helpful to have your opinion on the full set. I mean, like I say, the only reason we have these two and you don't already have all that other stuff is because -- originally, our original plan was just to have this one hearing and follow-up pieces to it.

And, you know, just having gone through some of the feedback -- we ended up getting a lot more than I sort of expected -- it really ranges from very specific discussions of exclusions on certain measures to broader discussions of, you know, we hope that AHRQ will consider really developing the areas of safety, timeliness and patient centeredness, which are much less well filled out in the matrix.

MS. COLTIN: I think that is probably more the level at which we would focus our comments is, you know, much more the broader issues. I don't think we are going to get into specifications for measures.

MR. KELLY: Well, you know, that kind of guidance would be very helpful and would also allow us to say, you know, here is a group that really knows some of these issues, and you can see what their recommendation was in looking through all this information. Because right now we are sort of struggling with -- you know, normally when you get feedback on these things, people took lots of time to do it and you really want to acknowledge it, and there are going to be things that we won't be able to act on just because of data limitations and other things, some of which were talked about just earlier today.

We need to be able to kind of talk to people about why we weren't able to consider those -- what future reports could potentially talk about. You know, coming from us and, you know, from a body like this group, it just gives it a lot more --

MS. COLTIN: Just one comment. I think originally when AHRQ approached us to do this we had talked about how we might work with the National Quality Forum potentially in tandem. I think, you know, the kinds of comments that you mentioned about specific comments about specific measures, it might be best to channel to them to get advice because often they relate not so much to methodologic issues or even data issues, but more content expert issues about, you know, you ought to include these patients because -- then it is a clinical basis for a recommendation as opposed to a methodologic one.

They tend to convene, you know, groups with the right expertise to try to achieve consensus from those kinds of issues. I know your time line is short and -- so I am not sure how helpful that advice is, but my sense is that they may be the more appropriate channel for that type of an issue than us.

DR. LUMPKIN: And time lines, I think if we are going to do something formally, we should --

MS. COLTIN: I think we would probably --

DR. LUMPKIN: -- the November meeting would be the time to get it out of the full committee, although, obviously, you would be able to see what we are working on in the workgroup.

DR. HOLMES: I think it would be helpful for us to have some sort of general response because in a way the particulars that we have just been discussing today are illustrated and the problems that are being encountered with both the National Quality Report and the National Disparities Report --

MS. COLTIN: I think it would be helpful, too. If we do go on the record with recommendations around that, we can actually incorporate that --

All right. So, then let me suggest a process -- Ed, when you have that report ready, if you can send it to Debbie Jackson electronically so that she can distribute it to the workgroup and staff. What I will suggest is that we actually go through a similar process to what we are going through with these recommendations: we start out by going through it and identifying the ones that we think are really key, that are appropriate for us to weigh in on. We will come up with a collapsed list and then we will discuss how we want to weigh in on those.

We can do this via iterative e-mails and then have something to talk about at the November meeting: this is what we appear to be converging on, these are the issues we would like to make recommendations about, and these are the kinds of things we think we would like to say. We can review that and get a letter out by the end of the November meeting.

MR. KELLY: -- I will be able to indicate -- that sounds great -- indicate when -- I meant to mention one other thing, the three strategies that we have employed in getting feedback, and you will see this then in the binder or in the electronic binder. We had this focused meeting in Chicago and invited a set of people -- it was relatively small for a hearing, but included some key organizations. We then had the kind of open comment period where anyone could send us information, from the Federal Register announcement and from our web site.

We had talked about -- and Kathy agreed -- the idea of having some sort of intermediate strategy where you target a set of people that you thought would have good input, know about the issues, and could really give you some good comments. David Classen(?), who is actually on the Institute of Medicine's Envisioning the National Healthcare Quality Report committee, gave us advice on the structure of the report. Ed also suggested that we might want to get back in touch with the IOM committee members. So, we ended up sending an e-mail to the IOM committee members for the Envisioning the National Healthcare Quality Report Committee, the Crossing the Quality Chasm Committee and the National Healthcare Disparities Report Committee, all of whom are implicated to a greater or lesser degree in what we end up putting out in September. The feeling being that if we implicate them a little bit more by asking their opinion and actually getting it, it helps us have a group of knowledgeable people out there when the report hits the street.

MS. COLTIN: So, that will be incorporated in --

MR. KELLY: That will be in there as well. It will be listed by the measure set and broken out by the different categories with the persons or organizations named that contributed that particular comment.

MS. COLTIN: That will be really helpful. Does that process sound reasonable to people? Time to review and comment back -- okay.

MR. KELLY: I will mention that it is not implied that helping us with this means NCVHS becomes a defender of the National Healthcare Quality Report, too. But it does mean that we will have more people knowledgeable about why we made the decision to include or not include things.

MS. COLTIN: Okay. Well, good. Given that, I think we have pretty much come to the end of our agenda and we are able to -- do we have one more thing?

MS. JACKSON: Question about the full November meeting. At one point you were considering the idea of a panel.

MS. COLTIN: I would hold off on that at this point. Among the things we were just talking about as we were starting to go through the data issues -- you know, discussing our own report, the previous agenda topic -- were some areas where we might potentially want to have additional hearings or we might want to ask other subcommittees to have hearings.

So, I think at this point I would say I would rather use whatever time might have been allocated to a panel to have more workgroup time on the agenda, if that is possible. Because we have a lot to cover at our November meeting.

MS. JACKSON: Two hours or so for the workgroup breakout and then nothing --

MS. COLTIN: Whatever you can give us, but at least two hours.

Thank you. So, I will give you a little more break time between now and when the full committee meeting resumes.

[Whereupon, at 9:45 a.m., the workgroup meeting was concluded.]