Hubert Humphrey Building
200 Independence Avenue SW
Washington, D.C.
Topic: Medicaid Managed Care
Lisa I. Iezzoni, M.D., Chair
Hortensia Amaro, Ph.D.
Richard K. Harding, M.D.
Vincent Mor, Ph.D.
George H. Van Amburg
M. Elizabeth Ward
Carolyn M. Rimes
Olivia Carter-Pokras, Ph.D.
Dale C. Hitchcock
Ronald W. Manderscheid, Ph.D.
Opening Remarks and Introductions: Lisa Iezzoni
Overview from Selected Federal Agencies:
Dr. Eric Goplerud
Rhoda Abrams
Gail Janes
National Perspective Panel:
Neva Kaye
Grace Gorenflo
Legislative, Consumer and Advocacy Panel:
Robert Grist
Rep. Lee Greenfield
Cheryl Fish-Parcham
State Panel:
Nancy Clark
Lorin Ranbom
Bob Brewer
DR. IEZZONI: I'd like to get started, everybody. Could the panelists for the first panel please come up and sit at the table?
Are our panel folks all set? We've got a person for each name tag, except we are still waiting for one person.
This is the Subcommittee on Population Specific Issues of the National Committee on Vital and Health Statistics. We are here today to talk about Medicaid managed care.
I want to first thank Carolyn Rimes and Jason Goldwater for putting together this program, which I think will give us a very interesting two days' worth of perspectives from people around the country.
What we will do in a minute is go around the room and have folks introduce themselves. But I believe that this is being broadcast over the Internet, is that -- no? We were not able to do that, okay. We had tried, but that didn't work. So why don't we get started by going around the room and introducing ourselves. I'm Lisa Iezzoni from Beth Israel Deaconess Medical Center in Boston.
DR. AMARO: I'm Hortensia Amaro, professor at Boston University School of Public Health.
DR. MOR: I'm Vince Mor from Brown University.
DR. MANDERSCHEID: Ron Manderscheid from the Center for Mental Health Services. I'm the staff person to the Committee for Mental Health.
DR. SCANLON: I'm Jim Scanlon from HHS. I'm executive director of the National Committee.
DR. GOPLERUD: Eric Goplerud from SAMHSA.
DR. ABRAMS: Rhoda Abrams from the Center for Managed Care in HRSA.
DR. JANES: Gail Janes from CDC in Atlanta.
DR. KAYE: Neva Kaye from the National Academy for State Health Policy.
DR. GELLMAN: I'm Bob Gellman, a privacy and information policy consultant.
DR. GREENBERG: I'm Marjorie Greenberg, the committee's executive secretary.
DR. HITCHCOCK: I'm Dale Hitchcock from HHS. I'm staff to the committee.
DR. WARD: Elizabeth Ward, a member of the committee from the Washington State Department of Health.
DR. HARDING: Richard Harding, a child psychiatrist from South Carolina.
DR. IEZZONI: Jason, do you want to start?
DR. GOLDWATER: I'm Jason Goldwater, and I work for the committee.
DR. IEZZONI: Yes, he along with Carolyn. Carolyn just came back in the room, and I want to make sure she hears her thanks publicly. Jason and Carolyn put together this session.
DR. BREWER: Bob Brewer. I'm with CDC in Nebraska, the state Clinic of Disease Epidemiology.
DR. IEZZONI: Ann?
DR. MARGIS: Ann Margis with GW.
DR. ROSENBAUM: Sara Rosenbaum with George Washington University.
DR. MELMAN: Mike Melman with Health Resources and Services Administration.
(The remainder of the introductions were performed off mike.)
DR. IEZZONI: Great. Here is the deal. This is how this is going to be organized. We have a series of panels that are hopefully grouped in some sort of logical order. Each of the panelists received an extensive list of questions that we mailed them to respond to, and hopefully they will be able to tell us the answers to one or two of those questions.
The one that we are first most interested in is basically, what do you all want to know about Medicaid managed care? As I said, the panels are organized by different perspectives. We have government, we have private payors, we have patient advocacy groups and so on, so hopefully we will get over the next two days a very wide range of perspectives. Then what we will do is hopefully have a chance for a discussion with the committee and members of the audience.
So why don't we go in the order in which we are listed on the program? Rick, do you want to get started?
DR. GOPLERUD: Can I speak from the --
DR. IEZZONI: Yes. We want to make sure that you're miked, though, because you are going to be transcribed.
DR. GOPLERUD: I can probably talk loud enough to be heard.
DR. IEZZONI: No, you're going to have to use the lavaliere.
DR. GOPLERUD: Well, with the extensive questions that the committee asked, and 15 minutes to respond, I assumed that this was only the high points. So what I did was bring you voluminous amounts of paper.
The question about what performance measures to collect, and to what kind of standards should we hold Medicaid managed behavioral health care programs accountable I think pretty much fits the phrase which has been attributed, probably apocryphally, to Mark Twain, who said that data are a lot like garbage: you have to know what you're going to do with it before you collect it. I think that there is an awful lot of collecting or intention to collect without really knowing what we are going to be doing with it.
When you think about why states or anyone would want to get into the managed care business, there are a couple of core questions. Your data should in some way flow from what the questions are. What is your diagnosis of the problem for which Medicaid managed care is an answer? Well, it could be that there is a question of access or equity, that not enough people are being served, or not the right people are being served in an equitable fashion.
Expense. Medicaid is too expensive or there are competing priorities for government funds for which health care is not as high.
Or a third could be accountability, that we're not getting a health care system that is sufficiently responsive or accountable for the resources spent, the services provided and the results achieved.
Now, if these are the problems, what then are the goals? I am looking at this from the perspective of mental health and substance abuse services, but I think it also applies to Medicaid in general.
Well, a goal for health care reform could be to constrain costs. For these various goals or answers, managed care may or may not be a good answer. To constrain rising costs, it appears from private sector and initial public sector experience that managed care probably is as it is currently practiced a fairly good answer to the question of, can we constrain costs.
To answer the question, somewhat of an accountability question, of privatizing public services, again, managed care is a reasonable way to privatize. Expanding coverage, improving access, improving management and accountability -- these are things managed behavioral health care probably can do. Redressing the historical underfunding of care, a continuing major issue in behavioral health, is something managed care is very unlikely to be able to do.
I did pass out, or there should be passed out also copies of these overheads.
To protect special or vulnerable populations is very much a question. Managed care appears to work particularly well when you integrate or pool resources. We come up again and again against the problems of artificial barriers of post-hole funding or smokestacks, or whatever the analogy. The problem, though, is whether we can protect either vulnerable populations who have been especially at risk for lack of access to services, or particular classes or types of services which have not done well in a competitive marketplace. The question is still out as to whether managed care is an answer to that problem.
To resolve conflicting or overlapping care responsibilities, we found that managed care is almost incapable of dealing with this issue. The issue must be dealt with politically up front; it is not something that managed care can deal with, or post-hole funding or post-hole government.
Now, performance measurement should match the problem identified and the solution attempted. Sara Rosenbaum and her colleagues at GW have been doing studies of Medicaid managed care contracts and specialty studies of Medicaid managed behavioral health care contracts.
To summarize and simplify tremendously what she has done, and what she and her colleagues are continuing to do on an annual basis with the '96 and '97 contracts -- just in the area of performance measurement: the performance measures required in the contracts are frequently vague, data requirements are missing, reporting requirements are vague, and sanctions and incentives for performance or for reporting on performance measures are often missing or vague. This is summarizing and simplifying a vast amount of information.
What Sara anecdotally told us is that in her next set of reports, she is going to issue a complete volume on the behavioral health performance measures, because there are so many of them and they so often conflict.
Now, among the materials that you have is a chart, which I didn't bring up here with me. It is a matrix, and what it describes is an effort that SAMHSA has had underway to develop core performance measure sets. We have at least four efforts, which are all focusing on deriving some consensus and developing some experience with core reporting sets.
The Mental Health Statistics Improvement Program, for which Ron Manderscheid is the lead, has developed a consumer-oriented report card, which is currently undergoing extensive testing in, I believe, 30 states.
The three national associations of state mental health, substance abuse and Medicaid directors approached SAMHSA and HCFA and said that the one core area where they all felt there was a major deficit was that they did not know how to hold the Medicaid managed behavioral health care programs accountable. They came to HCFA and SAMHSA and asked for funding to develop consensus around a core performance measurement set that could be used, or at least so those groups could assess the feasibility of getting core consistent reporting across state mental health, substance abuse and Medicaid agencies and the Medicaid managed behavioral health care contracts. This has been funded by HCFA and SAMHSA, and you have a copy of the report and the set of indicators coming from that.
The American College of Mental Health Administrators -- you have a copy of their draft final report -- engaged in a process where they brought together all of the various groups that have been developing mental health report cards relating to managed care, including the Institute of Medicine, which has issued a report, the consumer-oriented report card from MHSIP, the trade association AMBHA's report card set called the PERMS, and FACCT, the Foundation for Accountability. I think that's all. There may have been some others.
They said, okay, it is very expensive to tool up for a different set of reporting requirements for every purchaser or to have competing or overlapping. Is there a core set which we can winnow down and say ought to be in any kind of a managed care system? You've got a report that came from that.
Finally, the National Committee for Quality Assurance has responded to the pressure it has received from purchasers and from consumers by creating a behavioral measures advisory committee to its Committee on Performance Measures. The reason that the NASADAD, NASMHPD, APWA project was undertaken, in the words of Lee Partridge, is that HEDIS 3.0 or Medicaid HEDIS is an inadequate measure for contracting for Medicaid managed behavioral health care services.
The NCQA/BMAP just had a meeting at the end of last week, where they were trying to develop a consensus, or at least to identify measures to be added into the next two versions of HEDIS.
What I thought I would do is show you places where there is convergence among all four of those reporting sets, the core performance sets. Each one of these is reporting on or attempting to find a core set. Then if you look across all of them, what is the core set of the core sets?
They all agree on three domains in which there needs to be reporting: access or availability, appropriateness or quality of care, and outcomes or results. In the access-availability area, two in particular have been identified over and over again as being the core: penetration or utilization rates, variously asked for by age, sex, race, adult mental health, child mental health and substance abuse. There is an interest in rates of identification or diagnosis, and then rates of engagement.
Part of this has to do with, in behavioral health, we typically have services provided in managed care through carveouts, but an awful lot of the identification and referral needs to go on in the primary health care setting. So what are the rates of identification?
Now that we have reasonably good data on the epidemiology of mental health and substance abuse problems -- not as good for children, but reasonably decent -- coming from the Center for Substance Abuse Treatment needs assessments, the National Household Survey on Drug Abuse, and Kessler's National Comorbidity Study, we can start comparing identification rates and engagement rates with epidemiological rates, and holding plans accountable for moving toward better matching access to need.
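To make that comparison concrete, here is a minimal sketch in Python, assuming hypothetical file layouts and a placeholder prevalence figure rather than any published SAMHSA or plan specification. It computes a behavioral health penetration rate from enrollment and encounter extracts and compares it against an external epidemiological benchmark:

```python
# Illustrative sketch only: file names, column names, and the benchmark value
# are assumptions, not an actual SAMHSA or plan specification.
import pandas as pd

# Assumed enrollment columns: member_id, age_group, sex.
# Assumed encounter columns: member_id, dx_category ("MH" or "SA").
enroll = pd.read_csv("enrollment.csv")
encounters = pd.read_csv("encounters.csv")

# Members with at least one mental health or substance abuse encounter.
served = encounters.loc[encounters["dx_category"].isin(["MH", "SA"]), "member_id"].unique()
enroll["served"] = enroll["member_id"].isin(served)

# Penetration rate: share of enrollees with any MH/SA service, by age group and sex.
penetration = enroll.groupby(["age_group", "sex"])["served"].mean()

# Compare with an external epidemiological prevalence estimate (placeholder value).
benchmark_prevalence = 0.20
gap = benchmark_prevalence - penetration
print(penetration)
print(gap)
```

The same structure would extend to identification rates (members with a recorded MH/SA diagnosis anywhere in the claims) and engagement rates (identified members who go on to receive services), each benchmarked against survey-based prevalence.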
Also, continuously there is a consensus on consumer perception of accessibility.
The second area or domain of appropriateness or process or quality comes up in part because the relationship between processes and outcomes is not as strong as one would hope. In addition, there are some areas where the process itself is highly valued, whether it has been empirically demonstrated to be associated with outcome.
For example, consumer participation in their own care is a value which is very strongly held in the behavioral health and mental health community. Family or guardian participation in the care of their children is a highly held value; the field would want to know whether consumers and families are involved, whether or not that involvement is specifically or demonstrably associated with outcomes. Similarly, protection of data or protection of medical records is something that is highly valued, whether it is associated with outcomes or not.
The kinds of issues which are identified in all three of them are up here: contact within seven days following discharge from a hospital, family or guardian involvement and consumer perception of the quality or appropriateness of services.
Finally, outcomes or results. Behavioral health, I think, is tracking closer and closer to primary health as perceived in the major commercial sector, which is that what is critically important is not so much reduction of symptoms as functioning. Employment and the ability to engage in meaningful daily activities, for adults and for children, are consistently seen as priorities, along with level of functioning and consumer perception of outcomes. For substance abuse, however, there is a significant concern about symptom reduction -- that is, use reduction.
Now, the subcommittee and the full committee have received numerous reports from SAMHSA and from various SAMHSA components about behavioral health performance measurement activities and plans. Ron Manderscheid has gone to various meetings of this group before, and Ron has arranged a large number of presentations describing what is going on in the field. As well, Dr. Jack Buck made a presentation at, I believe, your last meeting four months ago, and the Institute of Medicine's report on performance partnership indicators was released last year and reported to this group.
Now, I could briefly review the kinds of activities that we have underway. But I think that I have given you paper, if any of you are obsessive enough, on your airplane back to read them. I would like to turn over to -- I think Rhoda is next, and we can have discussion about some of the issues.
DR. IEZZONI: That's great. Can I just ask though if there are any questions of clarification for Eric, areas that people were confused about? Ron?
DR. MANDERSCHEID: In each of the three areas that you identified, access and appropriateness and outcome, there is an issue in behavioral health care that might not be as clear in the other health areas, and that has to do with cultural accessibility, cultural appropriateness, cultural definition of outcome. Can you comment on the SAMHSA work in that area?
DR. GOPLERUD: I'm not sure that it is peculiarly behavioral health, but I think one of the places that it shows is that, because behavioral health services have been chronically underfunded and access to services is already a difficulty, access to behavioral health care for cultural and ethnic minorities, and across gender differences and sexual orientation differences, is a major problem.
The Center for Mental Health Services has really taken the lead in a number of areas in convening work groups. It has a contract with the New York State Research Institute to develop performance indicators, or to look at a set of cultural competency guidelines that have been developed by four panels -- Native American and Alaska Native, Asian-Pacific Islander, Latino, and African-American work groups -- all of which have identified culturally competent performance measures or indicators of culturally competent performance. Now these are being built into performance measures.
But I think that perhaps any of the chronic diseases would have these issues. I think the issue just becomes starker in the behavioral health area because of the underfunding and some of the stigma attached.
DR. IEZZONI: Question of clarification?
DR. AMARO: I was just wondering if there were any measures of capacity that were recommended -- whether services are available, provided by a particular managed care provider. Do they have the different types of behavioral health services available? I am referring primarily to the recommendation from the Institute of Medicine report regarding process, outcome and capacity measures. I was wondering if something similar came out in the recommendations of the groups you have worked with.
DR. GOPLERUD: An anecdote -- and I don't know if others of you have heard it from other managed care executives -- is, I don't care what is in the benefit, as long as I have control over medical necessity and utilization management.
We have focused a lot in the behavioral health area in trying to assure that the comprehensive continuum of care is in the contract and is available. However, medical necessity and utilization review have been the ways that people have been excluded from care.
I think the place that we have to look at is whether people are actually getting the services, so it is much more an access and utilization question, rather than a capacity question.
Now, one of the issues that I think we need to be very, very concerned about in behavioral health is that there have been a number of very well publicized acquisitions and mergers over the last year or two. The firms that have been aggressive in this area have taken on massive amounts of debt. These are publicly held companies with large amounts of debt, and they are going to need to squeeze down on costs in order to keep their stock prices up and to service their debt.
You can get some savings by reducing your marketing staff in half, because maybe you don't need as many of them. But you are going to get most of your savings by decreasing access to the comprehensive array of services. I think that is an issue that we have to be very, very careful about.
DR. IEZZONI: Great. I think these kind of questions are going to recur, so -- yes?
DR. WARD: One last clarifying question. I'm not used to the term engagement rate. Can you redefine that again for me?
DR. GOPLERUD: Yes. Do people actually show up for service? If they are identified, do they actually receive services? This is a major problem, particularly on the substance abuse side, where denial of a need for care is a very great concern. Research consistently shows that engagement and length of engagement are highly correlated with positive outcomes.
DR. WARD: Thank you.
DR. IEZZONI: We'll have more questions for you, I'm sure, when we get to the discussion period for this panel. Rhoda, you're up.
DR. ABRAMS: My name is Rhoda Abrams, and I run the Center for Managed Care in HRSA. I'm not sure everyone knows what HRSA is or who HRSA is. I'm going to start off with a little explanation of it.
Suffice it to say that in HRSA, we view the issue of Medicaid managed care as one that affects the Medicaid population that is being served, and also -- since it is by definition a low income population, and by definition a population that goes on and off Medicaid -- we are equally concerned about the patients when they are on Medicaid and when they are off Medicaid, because they can be on for four or five months and then they're off. So you will see in this presentation a particular emphasis on what is happening to the uninsured, because that is one of the issues that affects the outcomes of the Medicaid services.
The other issue that I wanted to say up front is that, when we look at managed care, and we are looking at health outcomes and what is happening as a result of the delivery system changes, we actually are very concerned about the delivery system, because the availability of the system really directly has impact on access to care and the outcomes of care. So we spend a lot of time looking at what is happening to the delivery system.
Now, I know for this group, that may or may not be directly relevant, but we decided to bring it to the group. We do not collect as an agency the kind of data on all the Medicaid populations that are served through the programs that HRSA funds. We do have different data systems. We have several legislative authorities, which I will describe.
So unlike HCFA, which has access in some programs to direct encounter data, we don't have that for the Medicaid population per se. What we have is various data reporting systems for the programs. We fund grant programs that provide access to care and create delivery systems.
So our data is usually on the populations that are served, Medicaid or not, through the grant programs that are funded, but it does include in some instances identification of the Medicaid populations. And it does relate to the disease entities that we fund or to the population by definition that is authorizing the legislation.
Also, on the other side, I want you to know that we work with HCFA a lot to try to look at what we have in our data systems and what they have in their data systems, to see where we can make a match and where we can take advantage of theirs, and hopefully in some instances they can take advantage of ours.
So with that introduction, what I'm going to talk about is an emphasis on the delivery systems, and I'm also going to address health outcomes and the Medicaid data systems -- the data systems that can produce some information around health outcomes, and the lack thereof.
I'll try my best to get through all of these. HRSA, Health Resources and Services Administration. The mission of our agency is to assure equitable access to quality care for underserved and vulnerable populations which by definition includes the Medicaid population, mostly low income populations, but also populations that have difficulty accessing services.
Here are our programs that have this mission in mind. The maternal and child health program, which I think most people are familiar with. That is the block grant plus the Healthy Start program, emergency medical services for children. It is the major governmental Public Health Service program. Community, migrant and homeless health centers, again, this is a grant program to individual communities rather than to the state to create delivery systems for basically underserved populations that don't have access to particularly primary care.
Then the Ryan White AIDS program is the large funding source of grant dollars to states, cities and local communities to actually provide services: a lot of primary care services, specialty and enabling services, home health care, basically services that allow the patient to access care. That is now up to a billion dollars.
There is a special demonstration program in the Ryan White program for managed care and AIDS, so there are some demos around that issue.
Rural health. Again, these are to create delivery opportunities in rural areas, where people have trouble accessing care. In addition, it includes telemedicine, high technology ways of improving access.
Then the health professions training is not a service program, but does provide funding to actually help redistribute, do a little redistribution of professionals.
Now, what I'm going to talk about for a few minutes is the delivery systems under Medicaid managed care. This is of particular concern in HRSA, and also rather broadly, because it does affect the outcomes of the populations that are served under Medicaid.
What I am talking about is safety net providers. These are the delivery systems that not only we fund, but are available through city and state dollars. Safety net providers around the country -- now, we are concerned about the relationship of Medicaid managed care and the safety net.
Now, safety net providers are essentially providers who by definition provide substantial services to uninsured and otherwise disadvantaged populations. So people have trouble getting access, which of course includes the Medicaid population, but I want to focus also on the issue of the uninsured, which is a related issue.
By definition, we are including public and other hospitals that do a lot of this, community health centers, rural clinics, health departments, AIDS providers, and maternal and child health providers. They serve the uninsured and Medicaid, and these are the populations that are served: low income, children, pregnant women, homeless, mentally ill, substance abusers and the elderly poor.
They provide primary care, hospital and outpatient care -- the full range of services, some of which Eric described -- essentially for the underserved and Medicaid populations.
Now, the problem that we are having these days under Medicaid managed care relates -- if you take it a few steps down the line, relates to health outcomes of the populations, because the safety net is under stress. The availability of care for the uninsured and for some of the Medicaid populations is now changing, and this is one of the issues that we wanted to bring to the committee and relate it to the health outcomes and data issues.
What we are finding is that all those public hospitals and health centers and health departments that have traditionally served these populations are accustomed to serving a significant Medicaid population. But reduced Medicaid revenues under managed care are becoming a major problem. Reimbursement rates are down, and there are the beginnings of reductions in the Medicaid eligibles.
That is fairly consistent around the country. The rates are going down, the providers are having difficulty keeping up and being able to provide services to the Medicaid population, and the uninsured are particularly being hurt by it.
Then there is the increase in the number of uninsured served. There is some data now that says there are more uninsured, and that the uninsured seem to be concentrating in fewer and fewer providers.
In health centers, for example, our data show that the percentage of the population that is uninsured has gone up 46 percent over the last six years, whereas nationally the rate of uninsured has gone up 20 percent. So we think that the uninsured are concentrating in fewer providers.
Part of the reason is that there are mergers, as Eric was describing -- a privatization that is decreasing the capacity of the safety net. Public hospitals are bearing a greater burden and have reduced outpatient capacity; there is some data now that shows that.
I'm going to quickly give you some data, some information, about one safety net provider that we have data on, very specific data, and also relate this to the issues of health outcomes and quality of care.
First, let me just talk quickly about health centers, one of HRSA's funded organizations. These are private, nonprofit organizations, and they are safety net providers. They provide services to the uninsured. They are community-based with local governing boards. They are in underserved areas, and they provide essentially preventive and primary care. There are 685 of these organizations with 3,000 delivery sites, so they are fairly pervasive around the country.
The populations that are served are essentially -- mirror the Medicaid population. So they are mostly below 200 percent of poverty, but by ethnicity, you can see, this is a breakdown of the ethnicity of the patients served, which is very -- it tracks what the Medicaid population is. We have 27 percent African-American, 35 percent white, 31 percent Hispanic. So what you have is a heavily minority population, essentially below 200 percent of poverty.
Now, Medicaid -- the AFDC or TANF users -- are essentially women and children. There are the aged and disabled, and now the Supplemental Security Income program is heavily funding the AIDS populations under Medicaid. So for us, when we see that 60 percent of the AIDS population is paid for under Medicaid, that becomes a major Medicaid issue in terms of the population being served.
Here is the picture for a primary care system of health centers. Forty-two percent are children, 32 percent are women of childbearing age and 65 percent are minority. This is what a safety net provider normally sees, except now for the emerging AIDS population, which is really concentrated in a limited set of providers.
One of the things that is happening with most of the safety net providers is, they are participating in managed care. The public hospitals are, the health centers are and now we see more and more Ryan White providers contracting in the Medicaid managed care. So it is not as if they are being removed from the Medicaid managed care; they are included in it. For health centers, you can see a significant dramatic increase. As Medicaid managed care is being required for Medicaid patients, the safety net providers are increasingly participating.
That is a good sign. I think that in HRSA in particular, we have a major effort to make that happen. The more it happens, the more they become able to serve the populations.
The difficulty, though -- and I think this is what I was trying to say before -- is that the uninsured are certainly taking their toll. This is just health centers alone, and you can see the difference between the health center population and the national trend.
This is data that is tracked in the agency. This is also seen in the AIDS program as well. A lot of this has got to be in maternal and child health, because it is the same population. So what we are having is stress here, where participation in managed care is one side, uninsured on the other side, and the delivery system is really being stressed out.
Now, what we have been doing, given this as the delivery system, is trying to relate it to the health outcomes that are occurring in the programs, tracking changes. We understand this particular committee may be interested in population-based types of data as they come in.
We have had significant discussions around the relationship of the data systems that the managed care organizations actually use with the state Medicaid agencies and the data systems that we have in our agency. There are of course differences.
We have for example tracked population changes, insurance changes, changes in health outcomes in some of our programs. In the maternal and child health they have developed some performance measures for the state maternal and child health block grants. In the Ryan White program, we are looking at major changes to collect more specific information around the populations that are served in the programs. But that is irrespective of whether it is a part of the Medicaid reporting system.
But as a result of changes in the delivery system, we have identified some major issues that we see happening now as some of the safety net providers are beginning to change the range of services that they are offering. We think this has significant implications for the populations, because, one, reduced revenues are a major issue. You cannot continue to provide the same amount of services with reduced revenues for Medicaid. The other sources of funding are not significantly increasing.
So we are beginning to see reductions in the range of services. With the reductions that we see at the beginning, of course they are trying to save the medical services, so what we see is changes in the enabling services -- what we call the transportation, the translation, the health education, the support services -- the services that are not necessarily funded by a Medicaid managed care program.
Those are in these delivery systems among the first to go.
We do not have data on the extent to which the managed care companies have expanded this range of services. We know that in some state programs, they have required essentially as part of their contracting process some of the culturally competent services, the translation services have actually been increased in some places, as a result of the contracts that the Medicaid agencies have let. However, this is not across the board, and there is no actual data that shows to what extent these services are being withdrawn.
In our delivery systems that are funded out of HRSA, they actually have to be cut back because there is just limited funding. So what we are seeing is a change in what we call access to certain kinds of services.
In terms of looking at the total system, we are looking at a small segment of the system, and we think that it is important to look at the total system. As Eric was pointing out, there is less money in the Medicaid delivery system than there was, say, five years ago, and there is a debate as to whether or not those reductions are having any effect. The debate is still going on, because there is not yet much data on it.
But in our limited set of programs, what we want to track is the changes in access to primary care services; specialty services -- and we think there is a big difference between primary access and specialty access; ancillary services; hospital care, which we think is probably still fairly widely available; and enabling services, which we consider absolutely essential for this population.
We think that it is important to look not only at the Medicaid eligibles, but also at the uninsured, because there is a close interrelationship between the health outcomes of what is essentially the same population.
The changes in health outcomes that we have started to look at actually track what the HEDIS measures will look like. We are not confident that HEDIS will be widely implemented. HEDIS is an expensive process. HEDIS is a difficult process, although we consider the measures to be really on target, at least for primary care. We are concerned about wide implementation; I would say it is to be determined. It is not yet fully implemented, and it is not yet widespread, but as it goes along, we are concerned about that.
Now, we have in HRSA a major HEDIS training program, to train all the providers that are funded out of HRSA, on how to participate in the HEDIS activity, which is a fairly challenging task in itself.
Among the problems that we see in getting at the measures are the data sources. This we think is a real major problem, because once we are into managed care, we are going to lose some access, we think, to encounter data.
The states don't all uniformly collect encounter data once the system goes to managed care. As a matter of fact, some of the HMOs don't consistently collect encounter data once they go to managed care. And encounter data are the heart of what we need in order to look at the units of service, the aggregate case mix information.
That is where we think the major difficulty is going to be presented to us. Encounter data is sometimes collected by plans. Providers usually collect and use their own encounter data, but don't necessarily report it out of their system. Some of the managed care companies require aggregate reporting, or reporting along a different line than an individual encounter. It really varies from plan to plan.
So to the extent that, if we lose access to encounter data, we think we are going to lose the ability to monitor and analyze some of the outcomes.
Claims data is also a major source of information, which under the fee for service system was the way we collected all kinds of information. Claims data under managed care is limited, because it is a combination of a capitated and a fee for service system. Claims are not always available at the provider level, let alone from the plan level.
Then of course, we think enrollment data is a major source of information, just in terms of who is enrolled, the demographics, the income level, the age, the sex breakdown. And actually, I would include in that specific diseases that we are most concerned about.
So I know that HCFA has grappled with this, because under 1115 waivers they do require encounter data. Whether they actually get it from the states is a big question. But remember, 1115 waivers are only in maybe a dozen or a dozen and a half states, so whether the rest of the states will be able to come forward with this kind of data, or whether there is a substitute for it, maybe that is something we can look at.
So those are our concerns.
DR. IEZZONI: Great, that was very, very helpful. Questions of clarification, quickly?
DR. HARDING: In the encounter data that isn't there, is that a bookkeeping issue or is that a proprietary information issue?
DR. ABRAMS: It is not a bookkeeping issue. It is a question as to how the managed care companies actually manage internally.
DR. HARDING: Right. Are they reluctant to give that because that is a proprietary --
DR. ABRAMS: Oh, are they reluctant to submit it? Well, they may be reluctant to submit it because it is proprietary. They also may be reluctant to submit it because either they don't collect it, because if they give a capitation, some of them say, I gave you a capitation, I don't need to know anything else. Then they also may find it expensive. It is a more expensive approach, and of course as Eric pointed out, it is not only the behavioral companies that are into this for the bottom line; it is that every extra function adds to the cost of managing the company.
But actually, there are some companies who live and die by their encounter data. So it really depends on the company and how they decide to manage it. Blue Cross will manage it one way, and United Health Care has a different way. U.S. Health Care is probably the data maven of the world.
DR. IEZZONI: Great. A quick question of clarification?
DR. MOR: Clarification. You mentioned that Medicaid rates are dropping, but you didn't say why. Is that because the managed care companies are taking a cut, or because states are actually paying less per person, per capita?
DR. ABRAMS: Our reading as to what is actually going on is that the states are actually reducing their rates to the managed care companies. It is not every state, but if you look across the country, it is a trend, where you find significant reductions. Sometimes the rationale is that the managed care companies have been paid too much money. New York, Michigan, Hawaii, it is just --
DR. MOR: But is it the per capita rates that are dropping? It is the per capita rate?
DR. ABRAMS: Yes, it is the capitation rates, right, per capita.
DR. MOR: But in many of those states, the number of people covered has also gone up?
DR. ABRAMS: That is a different issue, yes. What I was talking about is the per capita rates, which is different than the reduction in the eligibles, which seems to be happening. But that is a different issue.
DR. IEZZONI: Rhoda, that was very helpful. If you can stick around for the discussion, that would be great. Perhaps we could have copies of your overheads.
DR. ABRAMS: Sure.
DR. IEZZONI: Thanks. Gail, you're next.
DR. JANES: I've picked out a couple of the questions, as most of us have done, to speak to specifically. So I'll give you the question first and then give you my prepared comments to that.
In terms of the overarching questions that we at CDC have about Medicaid managed care: as an increasingly large proportion of the Medicaid population, for whom public health traditionally provided a very large portion of care, has moved into managed care, CDC and its partners in the states have become very concerned about several issues. Are Medicaid recipients receiving access to quality, comprehensive care, including the full spectrum of preventive services? Are they being supported and informed in such a way that they know how to utilize that care in appropriate ways and at appropriate times? And are the processes and outcomes of care being collected, analyzed and benchmarked against the population at large to insure parity in both health care access and health status?
CDC is also concerned about the indirect effects of Medicaid managed care on health care for the indigent and the under insured, who continue to be the responsibility of public health as the provider of last resort, but with drastically decreased funding sources for underwriting the cost of this care.
Because this population cannot be followed using traditional administrative encounter and enrollment files, we are concerned about the availability of population-based data systems which will allow us to track the health needs and status of this population, who could become the unwitting victims of the drive to discount.
Lastly, we are concerned that public health, which is charged with responsibility for assessing and assuring the health care and health needs of the population, be an active partner in the development and analysis of evaluative data systems designed to inform the Medicaid purchasing process. The categoric data systems of public health include much information on the outcomes and risk factors which are necessary to complement the detailed information on processes of care offered by administrative data.
Public health and Medicaid should come to the table as equal partners in efforts to insure the health of the Medicaid population.
The second question: does your organization currently collect data about Medicaid managed care? This will be the bulk of my comments. While CDC and our partners in state public health organizations do not support data collection initiatives with the primary goal of explicitly collecting data about Medicaid managed care, we have a number of initiatives that provide insights at various levels about the processes and outcomes of care, as well as the status of Medicaid managed care recipients in the larger population from which they are drawn.
Many of our data systems are based on surveys. These include the state-based behavioral risk factor surveillance system, or BRFSS; the national health interview survey, or NHIS; the national immunization survey, or NIS; and the newly launched state and local area integrated telephone survey, or SLAITS. All but the NHIS are telephone surveys, and the NHIS is a household survey. Of course, I've just picked out a few of the population-based surveys that come out of CDC, both in Atlanta and in Hyattsville, but I think these are some of the surveys that speak particularly to the issues we are talking about today.
BRFSS, which is administered in all 50 states and produces state-specific estimates of a variety of health risk behaviors, as well as general measures of access to utilization of and satisfaction with the health care system, has been piloted in several states as a measure of utilization of preventive services in a Medicaid population.
The sampling frame of Medicaid enrollees was, however, provided by prior arrangement with the Medicaid bureau, since BRFSS does not collect data on health insurance status sufficient to distinguish this subgroup. The sample size under the usual sampling protocol would also be insufficient for this type of subgroup analysis.
However, this proof of principle project demonstrated the usefulness of the BRFSS for tracking the health status, health risk behaviors and health care utilization of Medicaid enrollees in an enriched sample.
In addition, results can be benchmarked against state level estimates for the entire population.
Similarly, the national health interview survey now collects very specific data on health insurance status, as well as on utilization of the health care system in general, and preventive services in particular. This annual national survey is particularly useful for benchmarking use of preventive services as a function of coverage status. However, its measures of general utilization are limited, and none of the estimates can be resolved to the state level.
SLAITS was developed and is currently being piloted to address this last issue. If funded to cover all states, it would provide a very useful state specific measure of health status, health care utilization and access for the Medicaid population, with corresponding benchmarks at the state and the national level.
The national immunization survey is an ongoing telephone survey designed to produce state and selective urban area estimates of vaccination coverage levels among children 19 to 35 months. Because access to and utilization of the immunization services is often a focus of performance measures of the quality of care provided by managed care organizations, including those contracting with Medicaid bureaus, this provides useful population-based benchmarks for those plan-specific measures.
However, because of the difficulty of determining the type of coverage at the time of immunization, this survey is not useful for tracking the immunization status of the Medicaid population, regardless of the type of coverage, fee for service or managed care.
The traditional categoric data systems of public health -- including birth, death and disease registries, which are supported by CDC funding streams and CDC technical assistance -- are also crucial inputs to the Medicaid purchasing process, helping to define the population, its risk status and burden of disease. They are equally critical assessors of the Medicaid product, since they describe the outcomes of care which, when linked with the process data captured by administrative data sources, reflect plan performance under the contractual relationship.
Increasingly, CDC programmatic dollars and technical expertise are being routed to support these types of data linkage initiatives, which facilitate Medicaid purchasing and performance measurement and encourage the development of integrated data systems at the state level.
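As a concrete illustration of that kind of linkage, here is a minimal sketch in Python under assumed file layouts and a shared identifier; in practice these links are usually probabilistic matches on name and date of birth rather than the simple deterministic join shown here:

```python
# Hypothetical sketch: joining Medicaid encounter (process) data to birth
# registry (outcome) data so plan-level prenatal care measures can be paired
# with birth outcomes. File and column names are assumptions for illustration.
import pandas as pd

claims = pd.read_csv("medicaid_prenatal_encounters.csv")  # member_id, plan_id, prenatal_visits
births = pd.read_csv("birth_registry.csv")                # member_id, birth_weight_grams

# Deterministic join on a shared identifier (a simplification of real linkage).
linked = births.merge(claims, on="member_id", how="inner")
linked["low_birth_weight"] = linked["birth_weight_grams"] < 2500
linked["adequate_prenatal"] = linked["prenatal_visits"] >= 10

# Outcome (low birth weight rate) by process measure (adequate prenatal care), by plan.
summary = linked.groupby(["plan_id", "adequate_prenatal"])["low_birth_weight"].mean()
print(summary)
```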
The move to increased standardization of data formats, data sets and coding and classification schemes is obviously an essential part of this process, and CDC is actively supporting these initiatives, both internally and within CDC-supported state based data initiatives.
In addition to CDC-supported surveillance and registry programs, CDC also sponsors a wide variety of one-time funding initiatives designed to support new and innovative data initiatives within the states. These often have a significant impact on the structure and use of existing data systems by exploring new ways of linking existing systems and new settings in which these systems may prove useful.
The ongoing assessment initiative is supporting data integration pilots in six states with several states exploring the usefulness of Medicaid and vital statistic data for support to Medicaid purchasing and performance measurement, as mentioned above.
The Medicaid checklist project, out of CDC's Office of Managed Care, is developing purchasing specifications and contract language for use by states in their Medicaid purchasing contracts. These specifications include sections on quality assurance and performance data reporting requirements, helping to insure that not only are the needs of public health addressed in Medicaid managed care programs, but that states also receive the type of information streams they need to accurately track the quality of the care they purchase.
As a corollary to this project, monies have been made available to support several state initiatives which examine use of public health data sets to assist Medicaid agencies in rate setting and benefit design for their Medicaid managed care programs.
Lastly, I'd like to mention some work sponsored by the BRFSS program in CDC's Center for Chronic Disease Prevention and Health Promotion, examining the efficacy and validity of a variety of telephone survey mechanisms for determining the health insurance coverage status of interviewees, and distinguishing between those covered by traditional fee for service insurance and those in managed care.
Results indicate that telephone surveys can easily and accurately assess whether respondents have insurance coverage or not, and that respondents have no difficulty distinguishing between primary comprehensive health care coverage and other policies or limited insurance programs. However, when a commonly employed mechanism was used to classify insured respondents as members of managed care programs or not, subsequent validity checks indicated severe problems with these algorithms, which resulted in significant misclassification of respondents as to type of coverage. Clearly, use of telephone surveys to distinguish between the managed care and fee for service populations must be done with caution.
This is an issue that I can't emphasize too greatly. This is something that has really bedeviled us, both in Atlanta and in Hyattsville: how to structure a question, particularly in a telephone survey, that will accurately determine whether a person, regardless of the source of their coverage, is covered by a managed care or a fee for service program. It is extremely difficult to get at this information, extremely difficult. Yet it is obviously essential if you're going to design the kind of data collection instruments that we all want, that will look at Medicaid managed care and compare it with fee for service Medicaid.
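To illustrate why this is hard, here is a hypothetical sketch, in Python, of the kind of naive classification rule at issue; it is not the CDC algorithm, and the point of the validity work described above is precisely that rules like this misclassify respondents:

```python
# Hypothetical classification rule based on a few self-reported plan features.
# This is an illustration of the problem, not a validated survey instrument.
def classify_coverage(has_gatekeeper: bool, needs_referral: bool, plan_name: str) -> str:
    """Label a respondent 'managed care' or 'fee for service' from self-reports."""
    if "hmo" in plan_name.lower() or has_gatekeeper or needs_referral:
        return "managed care"
    return "fee for service"

# A loosely managed plan with no gatekeeper gets labeled fee for service, while a
# fee for service plan with a nominal referral rule gets labeled managed care --
# the kind of misclassification the validity checks uncovered.
print(classify_coverage(False, False, "Acme Open Access Plan"))
print(classify_coverage(False, True, "State Indemnity Plan"))
```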
In summary, CDC supports and enables numerous outcome data collection initiatives, both centrally and in the states. While many of these initiatives have and will continue to inform the dialogue around Medicaid managed care and provide the benchmarking data for plan-specific performance data, CDC data systems will not substitute for accurate, standardized individual level encounter and eligibility data collected from the provider.
Our surveys of hospital discharge data, ambulatory and emergency room care and provider patterns are enormously useful for health services researchers and health policy analysts, but do not provide the sample sizes, the longitudinal components or the linkages to outcomes that are required for an in-depth review of Medicaid managed care in a defined setting.
CDC is striving to support and encourage state-level initiatives that will broaden the usefulness of existing data systems through standardization and linkage to encourage collaboration between Medicaid and public health around these issues, and to fund the types of research projects which will guide decision making around development of new data systems, and more particularly, illuminate the functionality of existing systems in new settings.
Numerous state-based organizations are participating in this process, including the National Association of Health Data Organizations, the Association of State and Territorial Health Officers, and the National Association of City and County Health Officials, just to name a few.
CDC is committed to supporting the states in their efforts to construct the type of timely, comprehensive and responsive data systems that will inform the purchasers of Medicaid managed care and assure the health of the populations served.
That is the end of my prepared comments. I would just add to that, and sort of summarize. CDC is enormously concerned about this issue and about this population. In talking to a number of people around CDC in the last couple of days before coming up here to address the subcommittee, again and again the same statements were made. We are concerned about the access to care of this population. We are concerned that they get access to good preventive services. We are concerned that they get the information so that they can utilize those services in an accurate and appropriate way. We are also very concerned that public health be an active partner in determining what those services are going to be, in interacting with Medicaid in drawing up the contracts, and in collecting and sharing the data.
However, in looking at our own data systems, a large majority of the data systems that CDC sponsors are surveys, which means that by definition, they are samples. As I said in my prepared comments, I think they are going to be excellent, have already been very useful and will continue to be very useful in providing the benchmarking data for other data systems. But they are never going to take the place of good individual level data collected at the plan level.
As one of my colleagues at NCHS commented, with our large surveys we are in the process of trying to make certain changes, as we always do, to try to keep up with changes in the health policy and health services world, but these are large surveys with real historical weight; they move slowly. So they are not light on their feet, by definition.
So what we have done, which seems to be much more appropriate, is use our funding sources, our funding streams -- which are not huge, but are reasonable -- to fund some of the activities that are going on in the states. I think that is where the action is at this point. I think the states are where the data are; the states are sitting on the registry data, which unlike surveys are population-based sources of individual level data. As I said, in many cases these data systems -- the vital statistics registries, the death registries, the disease registries -- provide the outcome information that meshes very nicely with the process information that you can get out of administrative data, that you can get out of Medicaid claims data.
So we are trying to see what is going on in the states, identify good projects, try to get some funding support out to them, and then, as they proceed, try to get the word out to other states and share the experiences of some of these states that are perhaps a bit out in front of the others, in terms of what is out there, what you can do with what you've got, and what works.
DR. IEZZONI: Thank you, Gail, for your presentation. That was very insightful. Why don't we take 20 minutes right now before the next panel and ask if there are any questions for the first three speakers. No? Okay, Vince.
DR. MOR: Gail, the efforts in the states to actually engage in these kinds of linkage routines, is it possible -- or do any of them recognize the difficulty -- because I have no idea how to define a managed care plan in a telephone questionnaire, since patients wouldn't have a clue whether it is managed or not, since it goes on sort of behind the back. Where you have that linkage, is it possible to actually then bring in from administrative data whether or not somebody is on a managed care plan, let's say, under the Medicaid agency? Have there been cases where Medicaid has collaborated with registry data or has collaborated with some other kind of health outcomes? And could you describe some of those?
DR. JANES: Yes, there are a lot of -- if I understand -- I certainly understand the basic gist of your question. I'll answer what I think you're asking. Just in terms of these linkages that I'm talking about, the answer is, yes, they are going on. They are going on with enthusiasm out there.
We have a couple of people here on the committee who are actually sitting in states where it is happening. Kathy Coltin, there are some initiatives going on in Massachusetts, in which managed care administrative data, including data from her plan, is being linked with vital statistics data and with hospital discharge rate setting data, case mix data.
There are a number of initiatives going on in Washington State, and Elizabeth has been involved in some of those.
So yes, the linkages can be done. There are lots of barriers, but the primary barriers are barriers of confidentiality. And again, like I said, we have people sitting here who probably know a lot more about this than I do, because they are out there where the rubber meets the road. But my impression is that, even more than confidentiality, there are traditional and historic suspicions and discomfort about working with non-traditional partners. I think that is more of a barrier than some of the more technical aspects of the process.
But yes, it is going on with enthusiasm. There was a project which has just wound up that came out of the Robert Wood Johnson -- we didn't even talk about foundation money, but the foundations are out there also, doing some very good work. Again, this same process of trying to identify states that are moving in this direction and just need a little support, trying to get some funding streams to help them along.
Robert Wood Johnson had a project that just wound up, called Information for State Health Policy. They were working with states like Wisconsin, which is doing a lot of data linkage.
So yes, it is definitely going on. What we would like to do is see more. As I said, I really think that in the absence of the large amounts of money that are needed to develop new data systems -- we have talked about this in the committee; these are enormous investments -- what we really have to do is get more bang for our buck out of what we've got out there. That seems to be the direction in which things are going.
DR. IEZZONI: Eric, did you have a comment on that?
DR. GOPLERUD: Yes. One of the areas that we are very concerned about is the potential for cost shifting, not only cost shifting between the public mental health or substance abuse systems and Medicaid, but also shifting from Medicaid and the managed care carveout into the criminal justice system, in a number of areas.
The Center for Mental Health Services and the Center for Substance Abuse Treatment are jointly funding a pilot effort in three states, looking at integrating databases across Medicaid, mental health and substance abuse to, among other things, be able to track what is happening as managed care comes on line and whether there is in fact cost shifting.
One area where help would be enormously useful for us, at least as I understand it from some of the managed care plans and the researchers, is that the HCFA 1500 and the UB-92 have only a single diagnostic field, and do not require something which we have been told would help to explain a lot of the variance in our findings. What would be helpful is a substance abuse yes-no item, an indication of the presence or absence of substance abuse, which would help to explain some of the variance that we are finding across managed care plans.
Now, I know that there are secondary diagnoses; what I am describing is a second, required, affirmative statement of whether substance abuse was present or absent.
DR. IEZZONI: Richard?
DR. HARDING: Last year when we had a beginning of a similar discussion, we were wondering who is overseeing the managed care carveouts or the Medicaid managed care. Who would you say is looking -- from your statements, if a company comes in and says that we are going to provide these services, who does make sure they do? What is the process by which that happens at the present time?
DR. ABRAMS: In substance abuse and mental health?
DR. HARDING: Anything that is managed, Medicaid managed care of any kind.
DR. ABRAMS: Well, there are two places. One is, the state itself has legal responsibility and does carry out oversight, and there is extensive oversight. It really varies from state to state. But that is definitely their responsibility, and they do do that.
HCFA also has a whole monitoring effort, and they take that quite seriously. But I would say that from HCFA's point of view, given the range of problems, they probably address the most critical ones and the most visible ones. But they have a responsibility to do it, as does the state.
DR. GOPLERUD: One of the areas we are quite concerned about is that we have been watching over the last three years the devolution of responsibility and authority not only from the federal government to the states, but from the states to the counties and local jurisdictions. It scares us silly, the idea that you are going to have 3,000-plus counties doing independent contracting for managed behavioral health care or managed care.
It isn't that there aren't some counties or local jurisdictions that are extremely sophisticated, but there are an awful lot of them that are not, and they are going to be re-inventing this with extremely small staffs. The way that Medicaid is presently structured, the responsibility remains with the states, but a lot of the contract negotiation and monitoring responsibility will be at the county level.
So in the absence of some kind of uniform or consistent reporting requirements, contracting requirements, benefit requirements and some real sophistication in monitoring these systems, we are extremely nervous about what is going to happen to vulnerable populations.
DR. HARDING: Thank you. That is helpful, but scary.
DR. MOR: You made a point which I thought was very important at the outset, when you cynically said that lots of managed care executives might say, we've got everything; you can get split pea soup and everything in the world on the menu of what is in the contract. It is all available, as long as the plan controls the gatekeeping function of deciding what is medically necessary. So the consumption patterns might not be up to what one might expect.
Could you just walk through -- you made some assumptions before about the epidemiological data, to say what the expected rates of consumption might be, and whether those could be used as benchmarks. I think that is a major, major assumption.
Then if you don't want to assume that, how else might you get at the issue of how to monitor whether any level of higher intensity services might be provided, because of the problem of being able to say this is not medically necessary?
DR. GOPLERUD: Yes, I think that is an extremely important and difficult problem. In the commercial sector, it is not uncommon -- and I can only speak from the behavioral health care side -- for a carveout company to go into a company whose use of its mental health and substance abuse benefits has been in the two or three percent per year range, and increase that to anywhere between five and 10 percent of the population while saving money at the same time. They do that primarily by reducing inpatient use and increasing outreach.
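To see how the arithmetic behind that claim can work, here is a toy calculation; every figure is invented for illustration and does not describe any actual plan. The point is simply that if the average cost per user falls enough, the share of members using the benefit can roughly double while total spending still goes down.

    # Toy numbers only -- not from any actual plan or contract.
    members = 100_000
    baseline_rate, baseline_cost_per_user = 0.03, 6_000   # 3% use the benefit, higher-intensity mix
    carveout_rate, carveout_cost_per_user = 0.07, 2_000   # 7% use the benefit, less intensive mix

    baseline_spend = members * baseline_rate * baseline_cost_per_user   # $18,000,000
    carveout_spend = members * carveout_rate * carveout_cost_per_user   # $14,000,000
    print(f"baseline ${baseline_spend:,.0f} vs carveout ${carveout_spend:,.0f}")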
For Medicaid populations, it becomes more difficult to figure out what a base rate ought to be and what appropriate access should be, because it has been so often bound up by whatever is the benefit in the old fee for service.
So the historical -- going back and looking at historical patterns is really problematic. We are beginning to get information from researchers like Barbara Dickey, who are beginning to look at what is happening in Massachusetts when you manage a benefit for severely disturbed, chronically ill population, and what are utilization patterns and how do they shift, and to relate that to changes in health status and functioning status.
I think a good place to look for a survey of that literature is the David Mechanic summary in Mental Health, United States, 1996. But I think that we are not using our epidemiological data. The national household survey on drug abuse is going to be expanded to give you 50-state estimates of the prevalence of substance abuse, and also, with some mental health question stems, of common anxiety and depression. Those would be places to look, and you can begin to do risk adjusting.
DR. IEZZONI: Ron, did you want to comment on that?
DR. MANDERSCHEID: I think it is also important in that work that we have some benchmarks within each state of what happened prior to management of Medicaid. We do in fact have a project where we now have established some benchmarks pre-managed care, and want to go back and look at how that changes in managed care.
So if we understand the penetration rate pre-managed care and the de facto per capita payment pre-managed care, then we can go back post managed care and look at how the population has changed: have we changed the composition in terms of diagnosis, have we changed the capitation rate, have we changed utilization. So we do have a project of that type underway.
But right now, we only have the benchmark, we don't have the follow-on data.
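A minimal sketch of the kind of pre-managed-care benchmark Dr. Manderscheid describes, using invented figures: the penetration rate (users per eligible) and the de facto per capita payment (spending per eligible) are computed the same way for the pre period as they would be for the post period, so the two can be compared once follow-on data exist.

    def penetration_rate(users, eligibles):
        return users / eligibles

    def per_capita_payment(total_spending, eligibles):
        return total_spending / eligibles

    # Invented pre-managed-care benchmark figures for one state.
    pre = {"eligibles": 400_000, "users": 36_000, "spending": 90_000_000}
    print("penetration rate:", penetration_rate(pre["users"], pre["eligibles"]))         # 0.09
    print("per capita payment:", per_capita_payment(pre["spending"], pre["eligibles"]))  # 225.0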
DR. GOPLERUD: There are also some interesting projects out there, system dynamics modelling projects, one in alcohol and Tony Brisowski's work in behavioral health. As we get more experience, you can start adding experiential data into those models and make projections based on population characteristics and service availability in the plan.
DR. ABRAMS: I just wanted to add that the managed care companies have their own ways of tracking their own costs and their own services. Some of them have among the most sophisticated internal data systems. They do profiling of their providers; they track information on costs and rates of services for all their providers. In some places, they actually track it against patient utilization in the same way.
So there are some things to be gained by looking at what kinds of data systems they use, what kinds of data they collect, and how they are used. But the difficulty is the changing availability of some of the basic input data. The states do vary in what they require and what they are willing to actually look at, because they have their own staffing limitations and their own technological limitations.
So one side of it suggests you can look at it in terms of certain kinds of outcomes against the population that is being served. The other is that you can go in and look at the processes that get reflected if you are looking at claims and/or encounter data. Those two sources of data are what traditionally have been used.
The difficulty is that in some places, that may not be as available as it was before, although there are such things as dummy claims, where if you have a capitation system, the state does require that you submit quote dummy claims. In some places, that has turned out to be a fairly decent source of information.
But the thing is that when managed care is being implemented, the data systems are totally different, because there aren't claims being paid for a lot of things. A hospital will just take a total capitation covering all the services, and then there is no exchange of money claim by claim, just one big lump sum that goes out every month.
The same thing with a group of specialists. If you're in an IPA, they just take the whole thing and they don't submit claims. So it is a different kind of challenge.
I hear different things about the Medicaid requirements. I know under 1115, they are required to submit encounter data, and it would be useful when Racial comes tomorrow to find out to what extent that is being submitted.
But the other part of it is that most of managed care is not in 1115 states; it is non-1115 states.
DR. JANES: I'd like to second something Rhoda said as well. This is an issue we talk a lot about at CDC, particularly in the Office of Managed Care: we focus so much on getting the data systems built. Let's also make sure that we interact with the states to help them with the Medicaid contracting process, make sure that the contracts contain requirements for the kind of feedback data they need in order to be good consumers and purchasers of care, and make sure that they get the linked data systems.
But as Rhoda said, in many cases, particularly in some of the smaller states and some of the less information-savvy states -- and I certainly don't mean that in a pejorative way at all -- there is also a real challenge in helping them know what to do with the data systems once they have them, how to use them, and how to keep that circle going.
I know we hear this from the states constantly, when we go out and say, we are the federal government and we're here to help. We're going to show you how to structure this great data set in the sky. After we go back to Atlanta, we hear sub rosa, what do we do with this. Or even if we tell them what we think they ought to do with it, then there are also questions of the manpower to do those sorts of things. So it is an extraordinarily complex situation.
DR. IEZZONI: Can I just ask one final question before we move on? Rhoda, I was particularly struck in your comments about how you focused on how Medicaid beneficiaries go on and off of eligibility, and the huge implications that there are from that fact for measuring how patients do and the frequency of services and so on.
Eric and Gail, I wonder if you could comment from each of your perspectives about the implications of that on-again, off-again eligibility for the kinds of measures that you all are focused on in your various organizations.
DR. GOPLERUD: I spent Thursday and Friday with Kathryn over at NCQA's behavioral measures advisory panel. One of the real frustrations there is, the rapid cycling makes it extremely difficult to collect outcome measures. Everyone says the gold standard is, does care do any good. With Medicaid, you have a real problem with keeping people on long enough to be able to say there is a relationship between what you did and what the outcome is. The financial incentives then all go toward access and process rather than outcomes. It is very problematic.
DR. IEZZONI: That is a big concern, yes.
DR. JANES: And in terms of some of the systems that we have in Atlanta and in Hyattsville, I think again you have hit the nail on the head; that is a critical problem. Your Medicaid population of today is off Medicaid tomorrow and back on again the next day.
That is one of the comments that is always made about looking at administrative data for Medicaid recipients, is that it leaves out the Medicaid population of tomorrow, that population of uninsured.
We think that is where the surveys come back in, particularly some of the state-based surveys, and I think they are very useful. This is where it would be particularly useful if we could use the surveys as they exist. Things like the BRFSS, and like we hope SLAITS will be if it takes off, will be very useful in looking at states and telling us some information about health status, access to care, and some outcomes. Again, they give you some information on what the needs are and what is happening with certain subpopulations, so that you can plan for what you will need tomorrow.
It would be even more useful if we had the money to increase the sample sizes of those surveys. They are generally very useful but are primarily limited by small sample sizes; you cannot resolve down to the county level unless you, as a state, throw some extra money at the BRFSS, something that Illinois has actually done for just that purpose, among others.
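One quick way to see why small samples keep these surveys from resolving down to the county level is the standard margin-of-error formula for an estimated proportion; the sample sizes and the 20 percent prevalence below are invented for illustration.

    import math

    def margin_of_error(p, n, z=1.96):
        """Approximate 95 percent margin of error for a proportion p estimated from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    # A 20 percent prevalence estimate at a state-level versus a county-level sample size.
    for n in (3_000, 150):
        print(f"n={n}: +/- {margin_of_error(0.20, n):.1%}")
    # Roughly +/-1.4% with 3,000 respondents, but +/-6.4% with 150 respondents in one county.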
DR. ABRAMS: There are certain proxies to get at this issue. There are ambulatory care sensitive conditions that show up that people have looked at in emergency rooms, for example. There are certain proxies to look at community wide health status that relates to managed care and the lack thereof, of any kind of access. It is the same population that goes in and out. If there were some way to look at it from that point of view, you might be able to get at that issue.
If you look only at the Medicaid, I think it really creates a body of data, but it doesn't do the whole job.
DR. GOPLERUD: One other point though is that at least in the behavioral health area, we are experiencing some of the problems that Medicaid is going to get into when they get more aggressively into managed care. For a minority of the population, they have chronic needs. They are there because they are SSI eligible, and they are not turning over.
The focus there is, they are high cost people, they will be high cost next year and the year after. For those populations, I think it is extremely important to focus on functioning and outcomes, because they are going to be around and they are going to be a chronic burden.
We know eventually that the New England states are going to come in with their dual eligibles waiver, and none of us are going to know how to measure that. So the extent to which we think through chronic care performance measurement in the behavioral health side, in a sense we are doing some of the pioneering work for the rest of the health care system as it goes into long-term managed care.
DR. IEZZONI: Great, good, thank you for coming and sharing your thoughts with us. We are going to be brutal -- poor Neva has been sitting there -- and not take a break right now, because we are going to hopefully break for lunch in a little bit, an hour or so.
Neva, do you want to share some of your thoughts with us from the state perspective? We've heard a lot about what the states are going to do. Now maybe you can tell us.
DR. KAYE: Before I get into the rest of my talk, I'd like to give you a little bit of background about the National Academy, because again, I'm sure that many of you are not very familiar with who we are and what we do.
We are a nonprofit organization that was founded about 10 years ago with the mission of helping states improve their health care policies, primarily by helping them share information with each other and among the various agencies within each state that work on health policy.
As such, what I am going to talk to you today about is information that we have gathered through a number of different projects we have done, survey work, site visits, telephone interviews about why states are collecting data, how they use it, what data they actually collect, what they have run into when they try and use it, and then just in closing, a couple of things I think they would really appreciate and that would help them to better use the data.
Some of these topics have been touched on in the earlier panel, but I will touch on them again. I also have to tell you that I realized this weekend that I was trying to cram too much information into my allotted time, so I may skim over some of this fairly quickly. But if anyone wants more information, they should feel free to call me, or maybe during the question and answer session.
First of all, why do states collect data? The information we have picked up through our work is that states really do collect data for four specific reasons. These are the ones that they mention most often: for determining payment, for helping consumers select a plan, for program and plan management, and for program evaluation and research.
Of course, states want the data for determining payment, not only because they are concerned about paying too much -- after all, most states did get into this with the idea that they were going to save money -- but also because of concerns that they might pay too little. They realize that if they begin paying too little, it can have an impact on the care provided to the people who are enrolled in the managed care plans, either through the plan deciding to tighten access in an effort to maintain a certain level of profit, or by passing this on to subcontracted providers, who in turn may have severe financial difficulties or perhaps even go out of business in some cases. So states are concerned about this on both ends.
In order to do this, their usual benchmark for determining whether payment is adequate is based pretty much on fee for service. To get that, they usually need different types of fee for service information. They really focus on collecting information about the number of eligibles, about the target population, about the cost of serving that target population under fee for service, and about the units of service that fee for service has provided.
Now, this relates to the earlier thing about reducing payments. What happens is that states try and project what that fee for service cost is going to be in the future to determine the adequacy of the rates.
As health care cost inflation slowed in the last few years, it took awhile for the states to catch that in the way they were forecasting the rates. So in some cases they found that indeed they were paying too much, or that they were not getting the level of savings they had anticipated from Medicaid managed care, so they went back to readjust those rates.
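A stripped-down sketch of the rate-setting logic Ms. Kaye describes: take the historical fee-for-service cost per eligible per month and project it forward with a trend assumption. All figures, and the single trend factor, are invented; actual rate setting involves far more adjustment (eligibility groups, carved-out services, administrative load, and so on).

    # Invented base-period figures for one eligibility group.
    ffs_spending = 54_000_000    # total fee-for-service spending in the base year
    member_months = 600_000      # eligible member months in the base year
    trend = 0.04                 # assumed annual cost trend

    base_pmpm = ffs_spending / member_months          # $90.00 per member per month
    projected_pmpm = base_pmpm * (1 + trend) ** 2     # trended two years out to the contract period
    print(f"projected rate: ${projected_pmpm:.2f} per member per month")   # about $97.34

If the trend assumption overstates actual inflation, as happened when health care cost growth slowed, the projected rate comes out higher than it needs to be, which is the overpayment problem she mentions.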
States are also interested in using this information to support consumer choice. This is an area where they are really struggling. For a number of reasons I'm going to go into later, it is difficult to come up with measures about how a plan is operating that you can give to consumers and that are easy to understand, difficult to misinterpret, statistically valid, and pertinent to the enrollee's needs.
For instance, if you are enrolling all of your Medicaid beneficiaries, women who have two children on AFDC are going to be very interested in different performance measures than a frail elder who has diabetes or hypertension. So it becomes very difficult to figure out how you're going to transmit this information to people to help them select a plan. But states are nonetheless still struggling with this.
Some are making some progress. Massachusetts I know was making an effort to find simple ways of conveying the most pertinent information.
The thing that states cite most frequently as where they use data is program and plan management. Medicaid agencies are very concerned that they be good purchasers, that they get their money's worth, and that the plans live up to what they said they would do in the contract. They have been gathering data to make sure that indeed, that is happening.
They are also, I have to admit, struggling a little in this area, for reasons that I will again go into later. But the idea from a state's perspective is, they want to gather the information that is going to let them find out where the plans need improvement, where things are going wrong. They want a way of finding out where things are going wrong quickly, before they have a chance to go very wrong. They then of course want ways of investigating and fixing those problems.
Most states are very much still at the stage where they are working to collect the data and to make sure that what they are collecting is indeed an accurate picture of what the plans are providing. There are some states who are beginning to go beyond that, but there aren't a lot of them.
You do have some handouts. The first chart I'd like to show you comes from the survey the National Academy does every two years. We did one in 1990, one in 1994, and one in 1996, and we will be doing one this year.
This is the result from the 1996 survey. It is a 24-page survey about each state's Medicaid managed care program policies and operations. We ask them everything from what services are included and what the capitation rate is, to questions about data reporting requirements. The N equals 38 means that as of June 30, 1996, there were 38 states that had signed contracts with risk-based managed care plans. We use signed contracts as the defining line for when they have a program, because that is when their policies are pretty much set in place.
You can see that they all seem to collect quite a bit of information. All 38 of them collect utilization data. Most of them, 27, collected HEDIS. The one thing you should know about the HEDIS number is that this does not mean all of HEDIS; it means that they are collecting at least one HEDIS measure. Even if they may have slightly changed the definition of that HEDIS measure, it is still something that they got out of HEDIS.
A lot of them collect grievances and complaints, satisfaction, disenrollment. But an interesting thing is how few of them as of '96 collected the enrollee health indicators and outcomes. This is something that you may be seeing changing in the next survey, because states will have had longer to work with managed care and longer to work to develop some of these particular indicators.
I'm going to spend a few minutes just talking about some of these different data sources. Complaints and grievances. This is something that as I said most states collect. They have found this actually to be extremely useful in finding how a plan is performing, not because it is a 100 percent sample, but because it is one of the few things that can provide an early warning when things are going astray.
For example, utilization. If you depend on utilization reporting to detect potential problems, many providers don't submit an encounter or a claim for a service provided to an HMO for two or three months. States frequently collect encounter information on a quarterly basis. So you are talking about five to six months before you even see one encounter. By the time you collect enough information to show a pattern that would actually help you identify a problem, you are talking about a year, probably a year and a half, from when the problems started. This makes it very difficult to use a lot of this information to let you know when things are going wrong.
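Her timing example adds up directly; the figures below simply restate it (two to three months for the provider to submit, quarterly collection by the state, and several quarters of accumulated data before a pattern is credible), so they are illustrative rather than measured values.

    provider_submission_lag = 3   # months before a provider submits the encounter
    collection_cycle = 3          # states often collect encounter files quarterly
    quarters_for_pattern = 4      # quarters of data before a trend is believable

    first_encounter_visible = provider_submission_lag + collection_cycle    # about 6 months
    pattern_visible = first_encounter_visible + quarters_for_pattern * 3    # about 18 months
    print(first_encounter_visible, "months to see an encounter;", pattern_visible, "months to see a pattern")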
So states have come to depend quite a bit on the complaints and grievances. They also depend a lot on medical chart reviews, which they use in three ways: focused reviews, random reviews, and reviews to investigate complaints and grievances.
The focused and random reviews are probably the most pertinent to your discussion, because these are the types of chart reviews most likely to result in comparable data between plans. What states usually do is develop a pre-established standard of care, sometimes based also on input from local practitioners. Then they will pull a selection of medical records belonging to enrollees of the various health plans and measure the care documented in the record against the pre-established standard of care. That information can be analyzed to show plan performance.
Obviously, the most useful information for this group would come from the focused reviews. What happens in a focused or random review is that it is set up to determine how a plan or a program is performing in a certain area, say, prenatal care, in which case they would go out and select a sample of records of women who had a birth during the last year.
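A minimal sketch of scoring abstracted charts against a pre-established standard of care, in the spirit of the focused review she describes; the standard elements, the abstraction fields, and the plan names are all hypothetical.

    # Hypothetical prenatal-care standard: elements that should be documented in the chart.
    STANDARD = ["first_visit_in_first_trimester", "blood_pressure_each_visit", "tobacco_use_assessed"]

    # Each abstracted record notes, per element, whether the chart documented it.
    abstracted = [
        {"plan": "Plan A", "first_visit_in_first_trimester": True,  "blood_pressure_each_visit": True,  "tobacco_use_assessed": False},
        {"plan": "Plan A", "first_visit_in_first_trimester": True,  "blood_pressure_each_visit": True,  "tobacco_use_assessed": True},
        {"plan": "Plan B", "first_visit_in_first_trimester": False, "blood_pressure_each_visit": True,  "tobacco_use_assessed": False},
    ]

    scores = {}
    for record in abstracted:
        met = sum(record[element] for element in STANDARD) / len(STANDARD)
        scores.setdefault(record["plan"], []).append(met)

    for plan, values in scores.items():
        print(plan, f"- average share of the standard met: {sum(values) / len(values):.0%}")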
Enrollee and provider surveys. States use these surveys not only to gauge satisfaction, but also to look at access and whether the people who are enrolled actually know how to use the system. New Jersey in particular did a lot of work trying to use these to look at access, by asking questions such as, how long has it been since you have seen a primary care provider, and knowledge of using the system such as, do you know who your primary care provider is, and do you know what to do in the case of an emergency.
Arizona has also used the enrollee surveys in order to find out why people are disenrolling from plans, which can also be very useful when you are trying to figure out what is going on in a particular plan.
The next thing is utilization reporting. I have a couple of charts about utilization reporting, because I figured this is probably what you would be most interested in.
The first one compares what the states were doing in the '94 survey to the most current one in '96. There were 32 states in '94 that had contracts with risk-based plans, and you can see that the number of states collecting utilization information has increased. "Either" means they collect aggregate data, encounter data, or both, and it has increased quite a bit in the last two years, particularly the collection of aggregate data, although we didn't ask a specific question about aggregate data in '94, so those are the seven states that identified that they collected aggregate data without us asking the question. This last time, in '96, we asked, do you collect aggregate data, so that may partially explain the large increase.
But you can see that there is great interest in collecting data among the states. One thing that is also very interesting is that most states collect both; they don't collect only encounter data or only aggregate data. In fact, there are two states that collect only aggregate data, 12 states that collect only encounter data, and 24 states that collect both.
When I say encounter data, it doesn't necessarily mean that they collect encounter data about all services, but about some services. For instance, I believe Oregon doesn't collect encounter data about dental services.
Before I move on, I do want to spend a few minutes talking about encounter data and some of the problems that the states ran into when they tried to collect it.
Very few states are actually using the encounter data yet to judge plan performance. Arizona uses it in rate setting, and Oregon is going to begin using it in rate setting. Several states have reports on utilization in draft. Tennessee actually issued a report on utilization based on encounter data.
The reason that states have been so hesitant about the encounter data is that when they collect it, they run into several problems. First of all, a big one is lack of uniform coding. Now, there are the HCFA 1500 and the UB-92, and there are standard procedure codes and diagnosis codes. But what has really created huge problems is the lack of standard provider codes, provider ID codes, and recipient ID codes.
This is particularly true in states where the HMOs were in existence before they became Medicaid managed care contractors, because what would happen is, each HMO would develop its own system of provider coding, and then when they would report it to the state, they would have to translate those numbers into things that the states could recognize.
That doesn't sound particularly difficult, but it really is, because a provider may use his middle initial when he signed up to be a Medicaid provider, but not when he signed up to be a plan provider. He may have multiple practice sites, so that when you try and match the Medicaid ID number with whatever the plan is using, you just run into a whole host of problems that are very difficult to resolve.
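A sketch of the crosswalk problem she describes: normalizing provider names before trying to match a plan's provider file to the Medicaid provider file. The field values and the normalization rules (uppercasing, dropping punctuation and a middle initial) are illustrative assumptions; real matching typically needs more information, such as practice addresses or license numbers.

    def normalize_name(name: str) -> str:
        """Uppercase, strip periods, and drop a single-letter middle initial."""
        parts = name.upper().replace(".", "").split()
        if len(parts) == 3 and len(parts[1]) == 1:   # "JOHN Q SMITH" -> "JOHN SMITH"
            parts = [parts[0], parts[2]]
        return " ".join(parts)

    # Hypothetical files: Medicaid enrolled him with a middle initial, the plan did not.
    medicaid_providers = {normalize_name("John Q. Smith"): "MED-001"}
    plan_providers = {"John Smith": "PLAN-77"}

    crosswalk = {
        plan_id: medicaid_providers.get(normalize_name(name))   # None would mean manual review
        for name, plan_id in plan_providers.items()
    }
    print(crosswalk)   # {'PLAN-77': 'MED-001'}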
This leads to the second issue in collecting encounter data, which is getting information from subcontractors. When subcontractors signed on for capitation, they went in with the idea that their administrative burden was going to be reduced. So when encounter data is required, they say, now essentially we are still submitting claims, so how has the administrative burden been reduced? So they aren't always particularly cooperative, especially when you consider that the easiest way to get rid of an incorrect claim, or one that you know is not going to get past somebody's edits and audits, is simply not to submit it, if you are not feeling particularly enthusiastic about making sure that the data are complete.
So what the states ran into is data that would be incorrect or data that would be missing. They have been working to resolve that, and several states have been making good progress, but it is still something that has a ways to go.
Care coordination systems. This is a new thing in some states. Particularly for special programs, many states will have requirements about a plan of care, that for all enrollees or enrollees that meet certain requirements, the plan must put together a plan of care and coordinate services.
Some states, particularly Wisconsin and Colorado, have started working with their contractors to develop a system that they could all use, one that would not only help the plan by keeping the plan of care on the system, but would also send up little reminders: you said this person needed this service, but it hasn't been provided yet.
This can be a really good source of information about the types of services provided, because it covers not only the types of services you would see on a claim, but also services such as translation services, and it captures things such as the health status and level of function of the individual enrollee. These states are looking to incorporate those into their care coordination databases.
The last thing I'm going to touch on very briefly is non-Medicaid data sources. Many states have started working with vital statistics to look at birth and death records, to look at immunization registries, because this gives you complete information not just about Medicaid, but about the uninsured, and it gives the ability to compare the commercial populations as well.
In particular, Wisconsin has done a lot of work in this area, Rhode Island has done a lot of work, and Tennessee. Other states are beginning to do more. This has been really very, very useful for looking at issues like prenatal care and low birth weight babies, because that information is carried on the birth certificate. When Medicaid can work with the vital statistics agencies, you can get really good information that lets you look at things such as whether the mother smoked during the pregnancy, how many prenatal care visits there were, things like that, that you simply can't get from a claim or an encounter.
Then, real quickly, I'm going to go through issues. These are the issues that come up all the time when I talk to states: even with the best data in the world, once they have collected it, they still have to figure out how to use it for the four purposes I cited earlier. The first is enrollment churn, that on-again, off-again eligibility. For instance, for the Medicaid HEDIS, there was a study showing that 50 percent of Medicaid beneficiaries were enrolled for more than a year, which means that 50 percent of them were not continuously enrolled for more than a year.
When you start trying to look at health outcomes, or even get a statistically valid sample, this creates major problems.
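A minimal sketch of the continuous-enrollment calculation behind the churn figure she cites, using invented monthly enrollment records; the 12-month threshold mirrors the HEDIS-style continuous enrollment requirement she mentions.

    from collections import defaultdict

    # Invented (beneficiary_id, month) enrollment records for one calendar year.
    enrollment = [("A", f"1996-{m:02d}") for m in range(1, 13)]          # enrolled all 12 months
    enrollment += [("B", f"1996-{m:02d}") for m in (1, 2, 3, 7, 8)]      # churned on and off

    months_enrolled = defaultdict(set)
    for person, month in enrollment:
        months_enrolled[person].add(month)

    continuous = sum(1 for months in months_enrolled.values() if len(months) >= 12)
    share = continuous / len(months_enrolled)
    print(f"{share:.0%} continuously enrolled for the full 12 months")   # 50%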
The other thing is small numbers. Many plans have small numbers of Medicaid beneficiaries. This is particularly true of the ones that you might be most concerned about, the ones that are the specialty programs that serve persons with AIDS, the ones that serve frail elders. A lot of them have enrollments of less than 500 people. Many of them will probably never exceed that. To try and get statistically valid samples that can look at some of the big performance measures is a real problem when you have a population size that small.
Data validation. Plans with the best intentions in the world don't always do everything perfectly. I mentioned the trouble with the encounter data. So one of the big issues for states when they get the data is making sure they have the right data and complete data.
They have really been doing three things to get at that. One, for encounter data, they will run it through a series of edits and audits similar to those they use for fee-for-service claims. Two, they will compare it against other reports they have. That is why so many states collect both the encounter data and the aggregate reports: they can compare the two data sources, and it gives them some sense of whether their data are complete and accurate.
Three, several states have started doing medical record reviews, so they can compare the encounter data to the medical record and make sure that everything reported in the encounter data is there, and nothing more, although the "nothing more" doesn't seem to be as much of a problem as the "not quite all."
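A sketch of the cross-check between encounter data and aggregate plan reports that she describes. The counts, category names, and the 10 percent tolerance are invented; a real comparison would be run by plan, reporting period, and service category.

    # Invented figures: counts derived from encounter records vs. the plan's aggregate report.
    from_encounters = {"office_visits": 9_200, "well_child_screens": 1_450}
    from_aggregate_report = {"office_visits": 9_600, "well_child_screens": 2_100}

    for category, reported in from_aggregate_report.items():
        received = from_encounters.get(category, 0)
        coverage = received / reported
        flag = "INVESTIGATE" if coverage < 0.90 else "ok"
        print(f"{category}: encounters cover {coverage:.0%} of reported volume [{flag}]")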
Resource intensity. I'm not going to go into this in detail, but I did make an overhead, since this was one of the questions you specifically wanted answered. There was one project I worked on specifically on dual eligibles, where we were trying to find out how data collection was working in those programs. I was trying to get at the resource question for programs serving dual eligibles, so you do have to use this information cautiously. But this is the information that I gathered.
You can by glancing through it see that indeed, it takes a lot of resources, even for small programs. These are many of those specialty programs that have very small enrollments.
Then quickly, in conclusion, you can see the states have really been working to get information and to figure out how to use it. But there are a couple of things that would be very helpful. One is standardization of provider numbers and all the other things that still need standardization.
The other area where they could really use some help is figuring out how to get around the issues I raised about trying to use the data. Given that you are always going to have small numbers in some programs, and you are always going to have enrollment churn, how can you still use this information to tell you something about plan performance and to help people select a plan?
Then finally, benchmarks. If there were some way of figuring out what good performance is, they would know how to use the information they see to figure out where they need to focus most of their program management resources to create the most improvement. I'm sure they would find that very helpful.
Thank you. Sorry I ran over.
DR. IEZZONI: Thank you, Neva. Why don't we just move quickly to Grace, so we'll have as much time as possible for questions?
DR. GORENFLO: I do have a handout. It looks like that is going around.
I should start off by saying that a lot of what I'm going to say is not going to be new this morning. I'm glad I had a chance to hear other people talk, so I could at least warn you about that.
What I'll do is start by talking a little bit about local health departments, so you can appreciate the context from which I'm speaking. Then my comments will revolve mostly around the first question, about what questions we would like to have answered. I have enumerated those questions on the handout for you, but I'll go into them in a little greater depth.
Then finally, I'll talk for just a moment about the data that we collect at NACCHO. Beginning with local health departments, our challenge as an association is fairly great: if you've seen one, you've seen one. There are 2,888 across the nation.
We are different from community health centers. I mention that because a lot of people use those terms interchangeably, and it can get a bit confusing. Community health centers were established to provide comprehensive primary care to indigent, underserved populations.
Health departments for the most part were established to provide population-based services to protect and promote the health of everyone in a jurisdiction. Over time, they backed into providing primary care services to varying degrees.
Again, with about 3,000 health departments across the country, it is difficult to characterize or to really summarize what one health department would look like, but I'll tell you that a few years ago, we extrapolated on some figures we had, and found that approximately 40 million people each year receive some personal health service from a local health department.
Now, this is not comprehensive care, but some kind of personal health service. Most health departments provide immunizations, and about three-quarters provide well child care, prenatal care. They offer WIC services, HIV testing and STD testing and treatment, just to give you an idea about that.
Nationwide, seven percent of health department budgets come from Medicaid, which is fairly small, though that number does increase the larger the health department is.
Regardless of the package of services provided by health departments, all of them are charged with assuring that everybody in their jurisdiction has access to quality medical care. They can assure that either by directly providing services or by assuring that systems are in place within the jurisdiction, and that people are getting in to get care.
Now, what we have been seeing with Medicaid managed care for the past couple of years, anecdotally, is that some health departments, like some health centers, are getting out of the business of providing care to Medicaid patients as more and more of those patients become enrolled in managed care arrangements.
Again, regardless of whether or not the health departments are actually providing the care, they still have a vested interest in whether or not care is being received from other entities.
Now, to move on to the questions we would like to have answered, you will see that most of these revolve around cost, quality and access, terms that I'm sure are not unfamiliar. Again, I'll be moving into some areas that have already been covered.
The first is whether or not managed care organizations are doing a good job of educating patients about the system. I think we have all run into this. A lot of people in the room have probably switched at some point from an indemnity plan to a managed care plan. Even coming from a well-educated position, where you're not worried about things like food and housing, it is very confusing, and it is that much more difficult for people who aren't operating from such a base. It is hard to change patterns and also to make sure that they fully understand the system; I have trouble figuring out how mine works, so I know this is not unique to me.
A lot of patients continue to show up at local health departments to receive their care. In 38 states, local health departments are the provider of last resort. Regardless of who walks through the door, they must serve them.
But even in states where this isn't the case, if a Medicaid patient shows up, even if the health department is no longer responsible for serving that patient, there is a real ethical dilemma. This can be the only opportunity, for a variety of reasons, that a mother is going to have to get her kid an immunization, for example. So do you really want to let the kid walk out without immunizing him? Probably not. But that starts a cycle, because the health department keeps providing the care while its resources are dwindling, and financially it is not able to do everything that it needs to do.
So we are concerned that the patients who are enrolled, are enrolled and educated adequately, and we would like to assess the degree that that is happening to the extent possible. This is a bigger question even when patients are auto enrolled, and don't have the opportunity to choose.
The next is quality and health outcomes. Our members use a term a lot, which is probably not unique to them: "managed costs." The concern is that MCOs, just because of the way they are set up, are businesses. We are concerned about the balance between their need to cut costs and remain financially viable, and how that plays out in terms of their attention to quality and health outcomes.
This is not to bash managed care. The managed care organizations vary widely. But it is an important thing to pay attention to.
There are a lot of questions these days about who is responsible for quality. I like Dr. Harding's question about accountability. That is something that we have struggled with, and we think that there might be a role for local health departments in that.
Actually, Elizabeth worked on a project with us to help try to define that. But most importantly, the question that we want to have answered is, is somebody watching quality, and how is that being assessed.
Are sufficient providers available? When managed care organizations are developed, they enter often into areas, geographical areas, where they haven't been serving people before. So there is the question of whether or not there are enough providers who are willing to accept the lower reimbursement rates. Not only that, are the providers culturally competent to help the patients overcome some of the barriers that are put up when you come from a different ethnic or cultural background?
Are Medicaid patients actually getting their care? A big, big access question. I know it has already been covered, but just to emphasize it again, there are a lot of barriers to getting care that Medicaid patients encounter and the mainstream population doesn't. Health departments, health centers and other entities often provide outreach services. They don't wait for the patients to come through the door; they reach out and make sure that people are brought into the system.
Now, traditionally, managed care organizations haven't had to function like that; that really hasn't been their business. But now one of the things we are concerned with is the access question: is the population getting served, are the people getting in to get the care that they need?
Not only that, are Medicaid patients also receiving non-medical health care? This refers to wraparound services, services that address nutrition, substance abuse, mental health, and housing. We know that when Medicaid patients come to a health department, for example, there are good referral systems in place, those needs are assessed, and appropriate referrals are made. Is this happening in a Medicaid managed care organization?
Also, with regards to preventive care, is that being given? We would argue that if managed care is taking on the Medicaid population the way they are, they really need to adopt a population-based approach, and in terms of the broad population make sure that all of the patients are getting immunizations, mammograms, Pap smears, blood pressure screens, that asthma is under control, et cetera. That is something we would like to see assessed as well.
Something that Neva just touched on also is, how is the patient turnover handled? When the patients come in and out and they do so rapidly, are they enrolled in a timely fashion? Do they know again which provider to go to, and how is this being addressed in the various states?
Along these lines, what happens to their medical records? It is obviously important for any provider to have a good patient history, both to avoid duplicating services unnecessarily and to make sure they have the information they need to treat patients the way they need to be treated. We would like to see more statewide immunization registries and that kind of thing to help get at this issue.
Next, are reportable diseases reported in a timely fashion? This really isn't a question that is just specific to managed care providers, because the answer for any provider is, no, they are not. That is just a big problem that we face in public health. But we really think that the proliferation of managed care offers an opportunity to look at a batch of providers and try to assess whether or not they are meeting public health requirements and put in incentives to do that more and better.
Then finally, our last concern revolves around the uninsured. As local health departments lose their Medicaid revenue, they are less able to provide and cover care for uninsured patients. This is not exclusive to health departments, either, but they lose their ability to staff clinics.
They are part of the safety net, the safety net is crumbling, so what is happening to the uninsured? If you are looking at Medicaid managed care, I think it is really important to also keep an eye out for the uninsured and see whether or not their health status is deteriorating, which is what we anticipate, given the current environment.
I'll move on briefly just to talk about the data that we have at NACCHO. We do not collect any beneficiary-level data; we primarily look at health department structure and function. Because there is a dearth of data available on local health departments, we always use primary data collection methods.
We found overall an excellent response rate. We put out a national profile of local health departments every couple of years. This year, our response rate is 88 percent for our first round, which is pretty significant.
For the first time this year, we did ask some questions about Medicaid managed care, and we finished the data collection. We are in the process of analyzing it. I think we'll probably have some figures within the next few months. This is one of those projects where every three months they say, yes, and then another three months; that is where we are now.
But what we have asked health departments is to tell us whether or not they provide or purchase services for both Medicaid patients and non-Medicaid patients in the context of managed care, and whether they use formal or informal agreements. So if this is information that would be useful to the subcommittee as your work continues, we would be happy to share those data with you once they do become available.
There is a lot more we would like to know. I was happy we had any questions in this questionnaire about managed care. But obviously, we would really like to take a better look, a more objective look, at what the trends are for local health departments. We have been hearing a lot of things anecdotally, but we don't have a lot of objective data at this point.
Of course, we are also interested in the effect of managed care on the Medicaid population and the uninsured population, but also on the general population, because not only do health departments have clinics that serve the uninsured, they provide a lot of invisible services that protect the health of the entire community, things like assuring a safe water supply and a safe food supply. If you lived in Milwaukee during the cryptosporidium outbreak, you probably appreciate the need for clean water. If you were in the Pacific Northwest during the E. coli outbreak from the Jack in the Box hamburgers a few years ago, you probably better appreciate the need for a safe food supply.
We are just wondering and watching, anticipating that as health departments serve fewer and fewer Medicaid patients, their ability to provide some of these other population-based services would also be compromised. So we are trying to get a handle on that. The only thing that is holding us back from looking at this more closely, of course, is money. We are looking for funding and hoping to find some funds so we can take a better look at this and get a handle on what kind of trend we are seeing.
I guess that is really all I have to say. I'll be happy to answer any questions. If there are any other questions you would like us specifically to pose to local health departments, we can talk about that as well.
DR. IEZZONI: Thank you. 2,888? If you've seen one, you've seen one, huh? Gosh, okay.
DR. GORENFLO: Yes, never a dull moment.
DR. IEZZONI: Do any of the committee members have any questions for either Neva or Grace? Vince?
DR. MOR: Maybe you can both comment on this, because you both ended up with this notion of what the scope of responsibilities are of the state health departments on the one hand and the county units or the city units on the other.
I had this notion, Neva, as you were describing the issue of timeliness, that encounter data are never sufficiently rapid to avert problems that are already underway, because it is probably nine months to a year before even the best state has a real ability to look at a trend and a new, quote-unquote, data point on that trend, simply because of the passage of time.
States have multiple responsibilities here. In some sense, they are the prudent purchaser, because they are now buying instead of just regulating, but they are also a regulator, and most state constitutions have some responsibility of the state to the health and welfare of the population, and that has evolved out of the counties.
Under all three of those sets of responsibility, there is the need to know what is happening in a relatively rapid way in order to avert certain kinds of crises. I think about this from the point of view of state banking crises or E.coli crises or those kinds of things. If you have something reportable, that will come up quickly.
But the more gradual things, a provider folding or a strike or something like that, may have some slow effect on the population's health. The state then has a real responsibility for knowing enough about those plans to make sure that the people it is paying for don't suffer some kind of negative effect if they get bumped.
So it really is pretty important to have information on the viability, financial and service, on a more rapid basis than merely the encounter data. What other kinds of information do your people have, like a banking regulator, to know whether or not the agency is going to be solvent sufficiently long enough to be able to provide the services? Solvent is just one example.
DR. KAYE: That's a tough question. The best things the states have reported are the complaints and grievances. You are not going to find from those that yes, indeed there is a problem. But it could be the first indication, and you could then bring other resources to bear to take a closer look at something.
They do get financial information, but again, there is usually a lag time. So that really isn't the same kind of quick turnaround that they would be looking for. There really isn't a lot. The approach that a lot of the Medicaid agencies have taken to the problem of a provider folding is really more to make sure that if it happens, when they find out about it, they have a way to protect the continuity of care of the individuals enrolled with that provider or with that plan, rather than to try and find out about it sooner.
For instance, there will be requirements in the contract about what happens when a provider is folding; say they have decided they have had enough of Medicaid managed care, they are losing money and, my God, they are going to get out of it. There will be requirements about the HMO having to notify the state when this is happening, when a subcontract is ending, so many days before it ends, and requirements about provisions for the people receiving services from that individual provider. In other words, you have to figure out some sort of arrangement to keep that continuity of care going. You have to figure out a way of getting people moved to a new provider or plan, and notified and educated about that, within a set amount of time, and you have to put together a whole plan of action and submit it to the state before that subcontract lapses.
But they really haven't found a good way of getting at things pre-emptively, that you are talking about. It has been more to make sure that they can react appropriately.
DR. MOR: Have there been any experiences to your knowledge of this? I know in Medicaid in Florida, there were some pretty unfortunate experiences.
DR. KAYE: Yes, there have been. That is why you will see fairly standard in contract language requirements about what happens when you cancel a subcontract for whatever reason.
I myself, previous to going to work for the National Academy, I worked in Wisconsin's Medicaid program for 10 years. I was there when an HMO went out of business. We very quickly put everybody back on fee for service until we could get them into a new plan. That took us a matter of days after we knew for sure they were going out of business, to get them on fee for service. But it is much more reactive rather than proactive in that area.
DR. IEZZONI: Grace, did you want to comment?
DR. GORENFLO: Yes. That is an excellent question, and it is something at the local level we really struggle with, especially because local health departments unfortunately don't have a lot of teeth with their managed care organization.
In a couple of states, they have tried to work with the state health department to get certain things enacted, that would give them more of a police authority, for lack of a better term at the local level, so they can get the data they need to see whether or not some of these questions could be answered, but they have been unsuccessful so far.
Another piece that is kind of interesting is that some health departments are really in a position where that would present a conflict of interest anyway, if they were somehow checking up on the managed care organizations in their district, because a lot of them are scrambling to get subcontracts or otherwise work with the managed care organizations. So it is an interesting trend to look at.
DR. HARDING: Will you repeat that?
DR. KAYE: The second piece?
DR. HARDING: Just right there at the end. Who is looking to -- ?
DR. KAYE: The local health departments are trying to work with managed care organizations to subcontract -- to get a subcontract from the managed care organizations. So if indeed they end up in that kind of a relationship, they are hardly in a position to regulate or govern.
DR. IEZZONI: What kind of services would they be --
DR. KAYE: A lot of what we have seen is home health services, especially for well child care and some prenatal care, post natal care, that kind of thing.
DR. AMARO: Are addiction services also included?
DR. KAYE: We do very, very little in terms of -- at the association level in terms of mental health and substance abuse. The majority of health departments, I don't think provide those services. There probably are examples, but it is not one that jumps to mind as much as something like home care, and outreach for immunizations is another.
DR. IEZZONI: Marjorie, you are looking like you have a question.
DR. GREENBERG: I wanted some clarification from Neva about the -- according to your survey, all 38 states were collecting encounter data?
DR. KAYE: Utilization data, either aggregate, encounter, or both.
DR. GREENBERG: I think 36 of the 38 were actually collecting encounter data?
DR. KAYE: Yes.
DR. GREENBERG: So did you get any greater detail on whether it was a subset of the patients, or --
DR. KAYE: I have that, but not with me. We did collect information about whether they get -- the division was between some services and all services. And actually, there was a fairly even divide, if I remember correctly, between those that get some or all.
States will, particularly when they are starting, focus in on particular essential services that they figure are most important when they start collecting encounter data, because that lets them concentrate their resources. For instance, as I mentioned, Oregon might not collect dental, not that it is not important ultimately, but they want -- you have to find out about primary care and physician services and that perhaps more immediately.
DR. GREENBERG: I guess I was a little surprised by this, because if there are 38 states that have risk contracts, essentially all of them, all but two, are getting encounter data. Yet, we hear as Rhoda Abrams mentioned before, that the problem once patients get into managed care often is that there is not encounter data, that even the 1115 waivers offer encounter data, but it is not actually being collected, or the states aren't actually getting it. So there seems to be some kind of a disconnect between this result and the other. I'm just trying to figure out what is missing here.
DR. KAYE: The disconnect is the completeness and the validity of the data. You are collecting encounter data -- a state can be collecting encounter data for years, but it may be missing things that it expects to get. For instance, it may not be getting every single well child screen that was provided, because one particular sub-capitated provider has not quite figured out the systems and how to communicate to the plan, or even really how they should code these things to make it okay for the plan in the state.
Then you also have the completeness, because as I said, once you get to the issue of not being quite sure how to code something, you know that if you submit it to the plan and the plan submits it to the state, it is going to fail the edits. I have run into that on occasion. So the states are collecting encounter data, but most of them are not yet at the point where they are using encounter data. Arizona seems to be quite well along there, and Tennessee is actually quite well along in collecting encounter data that they can actually use. But most of the other states are still at the point where they are trying to make sure that their data is accurate and complete.
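A minimal sketch, in Python, of the kind of completeness check a state might run on incoming encounter data before trusting it for analysis; the record layout, plan identifiers, service categories, and expected counts below are hypothetical, not taken from any state's actual system.

    from collections import Counter

    def completeness_by_plan(encounters, expected_counts):
        """Compare encounter records received from each plan against the volume
        the state expects (e.g., derived from eligibility files and periodicity
        schedules), flagging plans whose submissions look incomplete."""
        received = Counter((e["plan_id"], e["service_category"]) for e in encounters)
        report = []
        for (plan_id, category), expected in expected_counts.items():
            got = received.get((plan_id, category), 0)
            pct = round(100.0 * got / expected, 1) if expected else None
            report.append({"plan": plan_id, "service": category,
                           "received": got, "expected": expected,
                           "pct_complete": pct})
        return report

    # Hypothetical data: two plans submitting well-child screen encounters.
    encounters = [
        {"plan_id": "HMO-A", "service_category": "well_child_screen"},
        {"plan_id": "HMO-A", "service_category": "well_child_screen"},
        {"plan_id": "HMO-B", "service_category": "well_child_screen"},
    ]
    expected = {("HMO-A", "well_child_screen"): 4, ("HMO-B", "well_child_screen"): 3}
    for row in completeness_by_plan(encounters, expected):
        print(row)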
DR. GREENBERG: Thanks.
DR. IEZZONI: We heard from Dr. Janes during her presentation that at the state level, the behavioral risk factor survey and so on is used widely. Have either of you from your perspective seen the use of these big federal surveys or federal, state surveys for looking at Medicaid managed care?
DR. KAYE: No. What I have seen is more the link with the vital statistics for birth, death certificates, and also some of the surveys that the state -- actually, I believe there was one use of the -- what was that household survey? I forget. I don't think of them in terms of acronyms, I use my own set of acronyms. No, not health and nutrition. It was a state-based survey. They would call up the households and survey them about, did you see a doctor recently, when was the last time your kids -- what would that be?
PARTICIPANT: (Comments off mike.)
DR. KAYE: In that case, I have heard of that being used.
DR. IEZZONI: For looking at managed care issues?
DR. KAYE: Yes. As a matter of fact, it was very interesting, because that was extremely useful. With Medicaid and vital statistics working together in public health, you could develop a report that would show you not just how the Medicaid managed care program was performing in the plans, but also how the uninsured are faring on these same measures and how the commercially insured populations are faring on these same measures, and it can give you really good comparisons and some very good information. Those are excellent sources of information.
DR. JANES: A lot of the projects that I was talking about, the BRFSS projects, are CDC proof of principle projects. This is fairly new, I'd say a year, two years old, and really have been initiated by CDC with state partnership; it is a package deal. But we are in the process now of trying to get the word out to the states. So I think it is still at that level.
DR. IEZZONI: Please go to a microphone, thank you, and introduce yourself again.
DR. BREWER: Sure. I'm Bob Brewer, CDC in Nebraska. I just wanted to comment on the behavioral risk factor survey.
A couple of things about that. First of all, there was a Morbidity and Mortality Weekly Report article that was published recently, based on use of behavioral risk factor survey data in Michigan, that actually did a comparison of the Medicaid population to commercial, to the state as a whole.
That was actually done in collaboration with NCQA. In our state, we are also looking to do a pilot project in the Medicaid population in the Omaha area, and hope to get behavioral risk factor information that we can compare against the state.
The other final comment I wanted to make has to do with member satisfaction surveys. There is a lot of interest now in how to incorporate more behavioral risk factor information in the member satisfaction surveys. NCQA has been working very closely with CDC, continues to.
So I think that there is a real interest in collecting that information as part of the member satisfaction process, and then be able to compare it to the state information.
DR. IEZZONI: Thank you, that is very helpful. Hortensia?
DR. AMARO: One of the things that I am wondering about is what happens when the federal government cuts back on certain programs -- I am most familiar with the substance abuse treatment programs, the demonstration projects and so forth -- that have provided services that have generally not been covered through Medicaid or other managed care or other kinds of providers. I'm thinking about residential treatment, or even out-patient treatment for substance abuse after you have reached the number of visits that you're allotted.
Usually, programs at the local level that were funded by the state or the local health department or the federal government have filled in. I'm just wondering whether there is anybody attempting -- so there's a whole set of services out there that are being provided, not through the managed care provider, but by the state, local, city department or by federally funded projects that are locally based. Yet, this is an important source of care for certain groups, especially the group we are most interested in. I am wondering if there is any attempt that you know of to try to capture that, because as that goes down or up, we should be seeing this population affected.
I am just wondering. I am seeing this whole piece that is not being measured, if we only look at measures through the provider.
DR. KAYE: The only place where I have seen people starting to get at that a little bit, although not those specific topics you suggested, were in the care coordination databases. There, the Medicaid agency and the plan are trying to track all the care that an individual receives, because a lot of these are very, very ill, they have got multiple health problems, so they will use other services such as the ones you're talking about, or where I have seen it is something provided by the area agency, the Meals on Wheels, that the database would track that those kinds of things were being provided to the individual enrolled in the managed care plan, and to make sure that if it was identified as something the individual needed in the plan of care, that somebody was responsible for following up and making sure that that was used.
But that is the only place where I have really seen people starting to get at that type of information.
DR. GORENFLO: We will have -- once we analyze the data from the current profile and also start the next round of collection, we will have an idea of the trend, to see whether or not clinics are closing. We won't have the information to answer your question directly, because we don't ask it in that manner. But what we do ask health departments to do is identify those services which they provide, so they are not tied to a particular clinic.
But this is also going to be the third profile that we have done. The first one had data from 1990. So we will have some basis of comparison between '90 and -- '94 and '95 are the years of the data that are being collected now. So again, it won't be a clean answer, but it will give us a gross idea of the trend.
DR. IEZZONI: Any other comments from the committee? Are there any questions or comments from the audience? Yes.
DR. MELMAN: I'm Mike Melman from HRSA. I just wanted to make a point, and Rhoda didn't have a chance because of time limits to talk about the community health center survey. One of the assumptions in most of the federal surveys is that they are household population-based surveys, in which it is then difficult to find out what kind of delivery system people are in, not only managed care or not managed care, but what type of managed care.
What we did though is start from the other direction, which was to do a probability sample of delivery system in the community health center survey, using the population-based surveys, the health interview survey and the national ambulatory medical care survey, using those same questions and selecting a sample of grantees and then a sample of sites within those grantees.
So we know something about the structure of those places, but we also have a comparison sample, because we have the national HIS and NAMCS. So I think it would be worthwhile for the committee in the long run to revisit this premise, that the best way to collect data about the relationship between the inputs, the structures and the processes and the characteristics of the delivery system and outcomes is on a population sample.
I actually really think though that there is some utility to going back and forth between those two modes, and having them complement each other. So I just want to lay that on the table. Then ultimately, I think the desired goal would be to link that to administrative data systems. So we link that up to the encounter.
The advantage of surveying people, clients, is that you can ask questions about why they behave the way they did. But you need to know something about the structure of delivery systems, so you know what can be the targets of your public policy.
DR. IEZZONI: One final quick question before lunch, and then we'll break. The National Committee advises Secretary Shalala. Are there any particular pieces of advice that we could give to the Secretary that would particularly help you as you are thinking about improving the information available about Medicaid managed care?
DR. KAYE: I actually pretty much said that at the end of my talk. The big thing is standardization, particularly provider numbers, and then help with trying to figure out, because there is the time lag, because there is the enrollment churn, because of all the issues I described, how you can use this information to not only tell you what is going on in the plans, but to help people select a plan. It is very difficult for an individual state to grapple with those things. It is, how can you make this information you are collecting really useful to people. I'm sure they would like some help to figure out some way of moving that forward. It would be greatly appreciated.
DR. IEZZONI: Thank you.
DR. GORENFLO: I think our biggest issue is probably around health outcomes. There is just so much going on right now. There is just not that much in the news about the health status of the uninsured and the health status of the Medicaid population. So I think that is what we would really like her to focus on and have on the radar screen.
DR. IEZZONI: To kind of highlight how these folks are doing in terms of their health and well-being.
DR. GORENFLO: Yes.
DR. IEZZONI: Okay, thank you. Why don't we take a break for lunch, and we'll reconvene at 1:30.
(The meeting adjourned for lunch at 12:36 p.m., to reconvene at 1:30 p.m.)
DR. IEZZONI: Welcome back. We're going to get started. Tomorrow the subcommittee is going to start at 9 o'clock. George Van Amburg has been kind enough to bring to our attention an issue about population standards for age adjustment of statistics. We are going to ask George to chair a little session tomorrow morning about that, with Harry Rosenberg from the National Center for Health Statistics. Then we'll also hear about the possible effects of the new OMB standards on race and ethnicity classifications for vital and health statistics. Then we'll reconvene at 10 o'clock for the Medicaid managed care activity. So I hope the subcommittee members and whoever else wants to come can join us for that at 9 o'clock tomorrow morning.
I welcome the folks who are on the panel for the afternoon. Why don't we just go ahead and have you each introduce yourselves, since I don't think all of you were here for the morning introductions? Then we'll get started.
DR. GRIST: My name is Bob Grist. I'm the director of the Center on Disability and Health, here in Washington, D.C.
DR. IEZZONI: Good, thank you.
DR. GREENFIELD: I'm Lee Greenfield. I'm a member of the Minnesota House of Representatives, where I chair the health and human services finance division.
DR. IEZZONI: Thank you for coming.
DR. FISH-PARCHAM: I'm Cheryl Fish-Parcham. I'm associate director of health policy of Families USA.
DR. CLARK: I'm Nancy Clark. I'm managed care coordinator at the Oregon Health Division.
DR. BREWER: I'm Bob Brewer. I'm with CDC and I'm serving as the state chronic disease epidemiologist in Nebraska.
DR. IEZZONI: Good, great, a really widely diverse panel. I think we'll learn a lot this afternoon. Bob, do you want to start us off?
DR. GRIST: Thank you very much. I'd like to take a slightly different tack than people started with this morning, since the agenda seemed to be fairly broad. I thought it was appropriate for me to share with you some of the dilemmas that I'm facing as a health policy researcher and advocate, and possibly get some suggestions as to how to proceed.
First of all, Medicaid is an important program for people with disabilities, but it is not where most people with disabilities are. In fact, only 10 percent of children with disabilities are likely to be in Medicaid, and the others are likely to be in commercial insurance or uninsured.
The percentage of working age people with disabilities in Medicaid or Medicare is probably one third. That means two thirds are not in Medicare or Medicaid; they are mostly likely to be in commercial insurance or uninsured. In fact, the percentage uninsured is lower for people with disabilities than for the general population because of the existence of public programs like Medicaid and Medicare.
So how managed care deals with people with disabilities is critical to the total disabled population, and I suggest is actually an important litmus test for the quality of health care that the temporarily able-bodied population is also going to experience when they develop chronic health conditions.
In other words, instead of saying how can the state efficiently ration Medicaid resources in ways that improve quality at the same time that you are trying to limit the resources, I'm thinking we need to examine the experience of the Medicaid population and more importantly, use the Medicaid program as a vehicle for introducing public accountability into the larger health care system. I think that is really the challenge that we face, far more important than the rationing challenge of how to squeeze the Medicaid program in ways that will save states money.
With that in mind, I have had an opportunity to look at a Medicaid managed care pilot in a Midwestern state, and then a couple of commercial plans in that same Midwestern state. I have not actually shared with them the final report yet, and all this was done anonymously in order to protect their confidentiality. So I would like to not divulge information that really would pertain to a particular company or a particular state.
But I did get to look at an HMO, a staff group model HMO, and an IPA, not typical ones, ones that were willing to answer 46 pages of very probing questions, and ones that were willing to not only give me official positions, but let me look at the decision making process in the plan as it is carried out by at least 10 different staff positions from the CEO and financial officer, all the way down to the PCP and the specialists that they contract with, and the utilization review, grievance and appeals people, enrollment, anyway, lots of people who I thought had critical roles to play in the way managed care plans function.
What I was amazed at is how little disability is on the radar screen of the commercial managed care plans. Mind you, the capitation payment that they collect is far lower than what they collect from Medicare. They collect easily $400 in this particular state for Medicare capitation, and more like $100 to $125 for capitated payments for the commercial plan.
So it is not like they are really interested in being more generous with the disabled population. So I was interested, how do they make decisions, what kind of quality assurance mechanisms exist in state policy that would hold them accountable, and what kind of recommendations could I make to the states for public accountability and managed care?
The reason I am raising these issues in this particular forum is because I think it is very important that we look at the Medicaid program as a model in many ways. EPSDT, for example, is a very interesting benefit that we still have in federal policy -- not adequately implemented, not adequately implemented by most states -- but it shows the way an effective benefit could be provided to low income children, or really all children, where medical necessity is the driver, not medical necessity only when it is cost effective for the plan.
That to me is a very important principle, not only in the Medicaid program, but for the commercial population as well. The reason this is critical now is because states have these children's health insurance programs to experiment with, and to design them after benchmark plans in the commercial sector, where they don't have the quality assurance standards or the comprehensive benefit package that you often have in Medicaid.
So I think it is very important that we look at the Medicaid experience not only in terms of the states' and the feds' desire to contain costs, but also in terms of its implications for quality assurance. By linking the issues of quality assurance and coverage of the uninsured -- many of the problems identified here for the total uninsured population -- to the quality issues in the commercial plans, you start developing a constituency that is willing to publicly fight for an improvement in generic health policy. We are not isolating this particular program for low income people and expecting politicians to make the right decision, because it is not politically necessary for them to do that.
I was amazed that there are no needs assessment processes in plans at the point of enrollment, which would alert the plan to the health care needs of enrollees. So the plan would see whether they actually have the capacity to provide the care that they claim to provide.
DR. IEZZONI: Was that in all the plans that you looked at, Bob, or just the private or just the -- because you were looking at three plans in the state.
DR. GRIST: Yes. The commercial plans also serve Medicaid and Medicare populations. For those populations, there was a self-administered screener, something that you could complete in five minutes in a doctor's office, and that was extremely useful for the Medicaid and Medicare populations. They actually use that data, although not as effectively as I thought they could have, in terms of getting it to the PCP in time for them to use it.
But none of this exists for the commercial enrollees. Yet, that information is so important for the main reason that I want to talk here right now, which is to measure changes in health status.
One of the things that we know about disability is that there aren't practice guidelines for most low prevalence conditions. I know HEDIS was able to find some for the major conditions that they focused on, but for most disabilities, there are not practice guidelines.
So that affects what you can use for quality assurance purposes. If you don't know which services are absolutely essential, and if many people with disabilities have comorbidities anyway, which would complicate the application of practice guidelines, one of the ways that we could measure quality is by simply looking at changes in health status from the point of enrollment to subsequent periods in the individual's experience in the plan.
I think this is a very important measure of quality, not because I think that there is a perfect correlation between process and outcome measures. Everybody says that it is much more embarrassing than that. We can't really tell often what the outcomes are attributable to the services the plan provides.
But what is even more interesting is that many of the non-clinical interventions are the ones that are most likely to improve health status, and the plans do not have an incentive to provide those non-clinical interventions now. They are much more likely to say, well, that's not in the benefit package, therefore it is not going to be provided. If you go for an appeal, the appeal gets settled on coverage grounds, not on medical necessity grounds.
So I think that there is real reason for creating incentives for plans to be concerned with improving health status, even though that may not be directly attributable to the interventions of the plan or to the services that are covered.
Now, I recognize from the health services literature that you need to risk adjust the -- and I know this is your specialty -- risk adjust the outcome measures in order to be able to measure quality. But I am suggesting that if we could somehow get benchmarks -- and I see benchmarks are hard to come by in the general population -- if we could get benchmarks for people with disabilities for specific conditions, that would enable us to see to what extent the observed outcomes in the plan are different from expected outcomes. We would be able to have the plans report to the state in a way that would give them credit or deficit for their experience in serving people with particular conditions.
I think that the experience of changes in health status for all people with disabilities can be aggregated. We often talk about the small numbers problem, which is insurmountable, and all the reliability problems associated with that, in terms of the kind of measures that we see in HEDIS.
But if we could aggregate the experience of plans improving health status for people with specific conditions in the plan, and then link those outcome measures with payment policy, so that plans are rewarded for improvements in health status and penalized through withholds for the opposite, I think that we could begin to create an incentive for all plans to start improving health status, and that would be an important quality assurance position.
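A minimal sketch, in Python, of the observed-versus-expected comparison described above, assuming hypothetical condition-specific benchmarks for change in a health status score; a plan's results are aggregated across conditions and weighted by enrollment, so that small condition groups still contribute.

    def observed_vs_expected(plan_results, benchmarks):
        """plan_results: {condition: (enrollees, mean observed change in score)}
        benchmarks:      {condition: expected mean change in score}
        Returns an enrollment-weighted gap; positive means better than benchmark."""
        total_n = sum(n for n, _ in plan_results.values())
        gap = 0.0
        for condition, (n, observed) in plan_results.items():
            gap += (n / total_n) * (observed - benchmarks[condition])
        return gap

    # Hypothetical plan experience and benchmarks (scores are illustrative).
    plan_results = {
        "multiple_sclerosis": (40, -0.2),   # small average decline in function
        "spinal_cord_injury": (25, 0.1),
    }
    benchmarks = {"multiple_sclerosis": -0.4, "spinal_cord_injury": 0.0}
    print(round(observed_vs_expected(plan_results, benchmarks), 3))  # about 0.162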
Now, I know risk adjustment in Medicaid is easy to do, at least if you have the money it is easy to do. But in the commercial world, the state doesn't pay plans and the employers don't risk adjust, either.
So what I am suggesting is that perhaps there ought to be a way of collecting a certain percentage of money from all the plans licensed in the state, maybe five or 10 percent, based on the enrollees that they have, and that this pool of money would be used for the purposes of rewarding improvements in health status or penalizing the opposite.
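A minimal sketch of that pooled-withhold arithmetic, with made-up enrollment, revenue, and performance numbers; the payout rule here (shares proportional to a positive risk-adjusted health status score times enrollment) is only one of many ways such a pool could be divided.

    def redistribute_pool(plans, withhold_rate=0.05):
        """Collect a fixed share of per-member revenue from every plan and pay it
        back out in proportion to enrollment-weighted health status scores."""
        pool = sum(p["enrollees"] * p["monthly_revenue"] * withhold_rate for p in plans)
        weights = {p["name"]: max(p["score"], 0.0) * p["enrollees"] for p in plans}
        total_weight = sum(weights.values())
        for p in plans:
            share = weights[p["name"]] / total_weight if total_weight else 0.0
            p["payout"] = round(pool * share, 2)
        return plans

    plans = [
        {"name": "Plan A", "enrollees": 10000, "monthly_revenue": 110.0, "score": 0.16},
        {"name": "Plan B", "enrollees": 6000, "monthly_revenue": 110.0, "score": -0.05},
    ]
    for p in redistribute_pool(plans):
        print(p["name"], p["payout"])  # Plan A receives the pool; Plan B receives 0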
Now, the second point I really want to make is that these issues of access to quality care are civil rights issues. These are not just issues about what is desirable or what -- the plans provide what people choose. If people choose the wrong plan, that is their own fault.
I know how the marketplace operates. But the question is, can we create a civil rights context for the cost containment process? I think the Americans with Disabilities Act actually provides an opportunity to do that, because when a plan enrolls people, they have an obligation to provide equal quality care for all enrollees.
Now, I know the plans get to develop their own administrative mechanisms for the managed care process. Not only do they choose which services they are going to sell, and what the limits are going to be in the benefit package, although that is often blamed on the employer who gets to choose really what ultimately the premiums are going to be, but they also get to choose the medical necessity definition, as was stated earlier, the utilization review criteria, the economic profiling criteria for whether the providers will be retained.
My feeling is that we need to start looking closely at the administrative structures within the plan, not what the consumer knows about them, not what the employer knows about them, not even what the regulator is not looking at when they are regulating the managed care plans, but start looking at whether there are particular administrative mechanisms which are inherently discriminatory.
HCFA has begun to do this in the way they have developed some standards for financial incentives in Medicare managed care plans. I won't describe them right now, but I think we need to start developing those kinds of mechanisms so that we can start applying non-discriminatory criteria standards to the administrative structures in these plans.
I think this is the way we can begin to really benefit from the Medicaid experience with managed care around the needs of people with disability.
Thank you.
DR. IEZZONI: Great, Bob. I love provocative statements. That was very interesting. Why don't we pause quickly, clarification questions only at this point. We are going to want to come back and talk to Bob more about his ideas in a second. Are there any questions of clarification? No? Great. Lee?
DR. GREENFIELD: Thank you, and thank you for the opportunity today.
Let me just give you a little bit more about me, so you know who is testifying in front of you. I am beginning my 20th year in the Minnesota House. Since 1987, I have chaired the finance division for Health and Human Services, which funds the Department of Health and Human Services, including the Medicaid budget.
I was also one of the seven Minnesota legislators who developed what is now called Minnesota Care -- we had fancier names then, but somebody else had already reserved them all, any ones we could think of -- which is Minnesota's health care reform and also our insurance program for the working poor.
Let me talk a little about what Minnesota historically has done to put into context my answers. We began using prepaid managed care plans for our Medicaid population when the original waivers were available, so for more than a decade, we have been using prepaid managed care. Very rapidly, after attempting the broad populations, we got down to just doing families with children, until at this point we only have families with children in the prepaid plans. Until the last two years, they were all in the metropolitan area, Twin Cities, Minneapolis and St. Paul, which has a very heavy penetration of HMOs. Over 60 percent of all health care is provided through our HMOs in that area, and we are using those same HMOs for our prepaid plan. We only use regulated insurance plans for our Medicaid managed care plans.
There is something fairly unique about those plans. Our HMOs all by law are nonprofits, and all therefore are Minnesota-based companies, which makes them very easy to deal with in terms of the legislature. We regulate them, we license them, and we use them, and they are close by.
Which does lead me to a question that many states have asked me about in the past, and that is cost saving by going to Medicaid managed care. We did experience in the metropolitan area, the urban area of the state, a cost saving of about five percent for using managed care for families with children.
As we have expanded into rural areas, that has not worked that way, even though everybody thought it was going to, including the department of human services that manages the programs and the plans. They gave us bids based on fee for service costs minus five percent.
They have all come back for more money. It is an interesting question. In rural areas of Minnesota, and I suspect in almost all rural areas throughout the United States, getting providers to accept Medicaid rates is not the easiest thing in the world -- although technically probably all providers are in the Medicaid program, what they do or don't do with clients who attempt to access services is another question.
What has happened as we have expanded into the rural areas is that the amount of service being used is going up fairly significantly, which I think is a good thing if the services are needed, and since they are managed, I am assuming these are needed services. But it raises a very significant question about whether there is any cost saving involved. There may be more access to services, which is great, but even in this relatively easy population to care for, families with children, there is a serious question whether there will be any cost saving in rural areas in the future -- if anything, increased cost, along with increased access.
Noting that, we'll get into some other -- and I'll try to base these on Minnesota experience and try to answer your questions, and I'll probably get off on a few and not get through. But I'll try to hit the important ones.
In terms of the questions that I have and my members of the Minnesota House particularly have about the use of Medicaid and managed care, the first one is the obvious financial one of, how do these health care results compare to what we used to get in fee for service for the clients.
The more interesting one however is, how do the medical results we are getting compare to what other populations who are covered by health insurance get. There is in the data available, at least in Minnesota, no easy answer to that one at all. We hope to have it.
In doing Minnesota Care, we gave the department of health the authority to collect health care data -- encounter data and cost data, and any other data it wanted -- for all health care in Minnesota. We have since created a public-private partnership, the Minnesota Data Institute, between the various major players in health care and the department of health, to gather such data and provide information where necessary. That is slowly developing, but we have done some very good surveys on satisfaction rates, which are very interesting in and of themselves.
All this and what has happened has raised the significant question to us of what happens when you move into populations with disabilities. We did a little bit at the very beginning and gave it up. There didn't seem to be cost savings.
We are now allowing counties to develop pilot projects to serve populations with disabilities. We have two being set up; we will probably eventually do one or two more. We want to find out what the data tells us, and monitor it very carefully to be sure we are providing services, adequate services. I'm not convinced at all that there is any savings, but maybe better coordination of services.
The kind of data we need: encounter data, satisfaction surveys, outcome measures. On outcome measures particularly, for those of us with legislative responsibilities, in order to really compare plans we need more than very sophisticated bases of encounter data which some people can interpret or perhaps do research on. The world would be much more comfortable if we had the kind of data we could go back to constituents with and say, look, the outcomes are good. They compare in a certain way.
The only way we're going to get that is to collect outcome data, and do it in a systematic way. I also happen to think that is also very important for -- I hate to use the term, though I have been -- what loosely we call managed care, because it has become a negative term in most political lexicons. It is under constant attack in all our media. But in fact, if we are going to defend the move to prepaid health plans, HMOs and other versions that people come up with, and if they are good -- and personally I have been a member of one since 1965 for my own health care, and I'm quite satisfied with it, and have other options, but am quite satisfied with it -- I believe it is necessary to get outcome data to show us what is happening, and show us in such a way that more than just experts can be comfortable with the results, and that we have something to go back to our constituents with.
Minnesota, for our Medicaid managed care, collects HCFA's MIC data; the data was in that proposal. We collect complaints throughout the system, both internally within the HMOs and externally to the department of health and the department of human services; they both have complaint mechanisms for these populations.
The data we collect does include diagnosis, not necessarily very complete. We are attempting to develop risk adjustment for the Medicaid population and need diagnosis and other data in order to do it. We hope to implement that in the next couple of years.
We have done satisfaction surveys. Two years ago we did it for all health plans, separating out those that are in public health plans. Interestingly enough, our managed care plans for the general population did better in terms of satisfaction than our indemnity health plans, and our public health plans did better than our regular plans in terms of satisfaction rates, particularly the Minnesota Care program, which is a program for working poor people up to 275 percent of poverty. It received exceptionally high ratings from the people being served, perhaps because they realized the bargain they had, and for the first time had insurance available that was affordable.
We are now this year going back and doing satisfaction surveys just in Medicaid, because we're not doing what everybody else is doing, the statewide program. We are doing it again to see what kind of results we have.
How does it differ from what we used to collect in the past? Whatever we collected in the past, the only thing that was really available in public policy, was what we paid out, how much we paid out, very broad categories, the hospital or whatever. It was not useful for any public purpose other than tracking finances.
We were promised at various times we could turn around and get data from that, and nobody ever achieved it. We are attempting to do that with the data we collect on managed care, however, and that seems to be coming along. That is the emphasis there, which raises the whole question in terms of saving money again. You do not get to lay off a large number of people from your state agency. You probably hire more expensive people, because you now have to track data and determine what it means, rather than just pay bills and send out checks. So you have people performing different kinds of functions, at least as many of them, and they are probably more expensive. You can probably retrain some of the folks. But it does not save you money in administration if you're going to do it right.
I mentioned that we are moving to risk adjustment; we will also need functional analysis in doing that, along with the need to achieve outcome measures.
In terms of data collected and the source of the data, I indicated we gave the department of health authority over all encounter data and cost data in 1990. They are slowly phasing that in, beginning with the largest providers and going down to smaller providers -- they have not gotten down to individual practitioners yet, but they are getting there -- requiring that data to be available, the same way we are collecting the data for Medicaid managed care. Where it is not yet required to be collected by the department of health, the department of human services, which has the Medicaid program, requires that it get the data directly.
One of the other ways of not making that a very costly problem is to require it as part of doing business in health care in general. Then it gets built into the entire system and spreads in the entire system. If you leave it just for Medicaid, it is going to appear to be very expensive to collect this kind of data, I would think.
We hired organizations to do quality of care studies this year. We are focusing on three HEDIS 3.0 effectiveness of care measures: childhood immunizations, prenatal care and breast cancer screening.
I should point out that when we did the reform, about a year later we required that the insurers in the service and in the department, the last people by the way to agree with the Medicaid folks, come out with uniform forms and definitions. We do do that now in Minnesota, though we had to drag our good friends in administration along. They were the most reluctant to give up any change in form, but we did, and we do have those. That makes a lot of this much easier to accomplish.
Which gets me to perhaps one of the sticking points which you raise in your questions. That is confidentiality. The Minnesota statute requires all encounter data and cost data to be collected, and any other condition-specific data the department of health asks for. It requires that when that data is collected by any state agency or any agent for the state, the individuals be identified by an encrypted number, based on the Social Security number, unless the individual objects; we might provide an alternative. That hasn't been decided; it depends on its cost. But that encrypted number must be such that it cannot be broken by anybody.
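One common way to produce an identifier of that kind is a keyed one-way hash of the Social Security number: the same person always yields the same number, records can be linked across files, and without the key the number cannot be reversed. The Python sketch below is illustrative only; it is not a description of Minnesota's actual scheme, and the key is a placeholder.

    import hashlib
    import hmac

    # Placeholder key; in practice this would be held only by the data agency.
    SECRET_KEY = b"replace-with-a-key-held-only-by-the-data-agency"

    def encrypted_id(ssn: str) -> str:
        """Return a stable, non-reversible identifier suitable for record linkage."""
        normalized = ssn.replace("-", "").encode("utf-8")
        return hmac.new(SECRET_KEY, normalized, hashlib.sha256).hexdigest()

    print(encrypted_id("123-45-6789"))  # the same SSN always yields the same value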
If that provision were not part of our statute, I would have lost that provision and the right to collect data a couple of years ago. This has been attacked a number of times; only because we continue to be able to claim to provide privacy are we able to hold it. I believe that to be true for every state in the Union. If folks believe that their health data -- what services they use -- can be found out by anybody, even theoretically, they are going to object in large numbers to their legislature about collecting it.

An example of that came up with immunization registries this year. The department of health innocently came in to get registries funded in a number of communities that wanted them, to encourage other communities. My colleagues in the Senate put in a provision that it was voluntary on the part of the parents. Wonderful, useless. They could not understand why that would make the registries useless, but that's beside the point. In order to block it, I had to remove the funding in the House. Happily, the people that wanted to do it couldn't figure out where to put their amendment, so the House didn't have a position, so those who can fund their own registries locally in their local health departments can go ahead.
But there is that concern about data being collected to which anyone might have access. What we have asked the department of health to do is come back with regulations about who can have access, under what conditions, for what purposes -- which I'm sure is what they intended to do in the first place. However, they didn't spell it out, and the legislature was not about to allow the data to be collected that way unless people gave their permission.
I think we would find the same issue come up with any health care matters in the general public. This is very vital. I know there have been proposals to allow police agencies access, so they could get down to individuals, and I don't think we'll have any data collected. I think that would be a crime.
I believe, looking at Minnesota's example and a number of other states, that managed care of one sort or another will dominate American health, as it already is in Minnesota. Everybody will be getting their care through some version of managed care in the future. The only way to make managed care work is to be able to monitor it for quality and access. You can't do that without the data. So it becomes vital to this whole direction we're going in and what is going on, both for Medicaid and for the rest of the population.
Just in terms of advice, it goes back to the same thing. We must have a system that allows us to collect data on all health care, in particular all managed health care, public and private, so that we can monitor it for access, quality and outcomes, if we are going to have a health care system that is based on quality and provision of needed care.
Thank you.
DR. IEZZONI: Thank you. Any quick questions of clarification? Yes, George.
DR. VAN AMBURG: How much is Minnesota spending, or does it anticipate spending, on its data collection and analysis efforts?
DR. GREENFIELD: Right now, a good deal of those costs are built into the contracts at all levels, so it is real hard to pull out amounts, because a good deal of the payment is being done by the plans.
DR. VAN AMBURG: I'm more interested at the state.
DR. GREENFIELD: At the data institute level, it is not huge amounts. It is on the order of probably a little over a million dollars a year, which compared to what we're spending on health care is pretty minor stuff.
DR. VAN AMBURG: Right.
DR. GREENFIELD: Our Medicaid costs, public costs, I should say, for our Medicaid population and Minnesota Care and the other populations that the state covers is about $2.9 billion a year, in comparison.
DR. IEZZONI: Jim, quick question.
DR. SCANLON: (Comments off mike.) Does the confidentiality allow some sort of a safe haven, so that the commercial plans or others who don't want to reveal proprietary information, is there some way that you can --
DR. GREENFIELD: We do not make it public. The data collected is private until all the patients' names are taken out. Even then, it is not public data. It is -- let's see if I can remember its status, what it is called -- it is data the commissioner controls, and allows to be used for valid research purposes, as defined by the department of health, universities, et cetera. It is not generally available. It is available, however, to buyers, to benefit managers of our companies and the like, so they can use it. But it is not generally available. Reports on it are made available to the public, but not the data itself.
DR. IEZZONI: I'm sure this issue will come back in the discussion section. Thank you.
DR. GREENFIELD: Pleasure.
DR. IEZZONI: Cheryl, we just passed out your handout, and we're all prepped for you.
DR. FISH-PARCHAM: Just to give you an introduction to what Families USA does, we provide technical assistance to consumer organizations in a number of states on Medicaid managed care and other health care issues.
What I want to especially talk about today is a pilot project that we have been doing for the last year in Washington, D.C. with a group of Head Start parents and a group of community organizations to monitor Medicaid managed care in the District.
It is a very quick and dirty and crude research project that we undertook for two purposes. We wanted to get some quick data that would help policy makers in the District make decisions as to how managed care could best work here in the District, as they went from a voluntary to a mandatory system. We very much wanted to have consumers educated and involved in the policy debates.
So to that end, we did a survey with Head Start parents. What we did was, we recruited 12 parents who would form a survey team. They then talked about their problems that they had already experienced in Medicaid managed care, and using a combination of that and existing survey instruments that we took from a number of states, we came up with a consumer survey that met their needs.
They then went out and surveyed their peers face to face as to the problems that they had faced in the last year under Medicaid managed care. They surveyed a total of 100 households, and these are the results that they came up with.
I should also say that although I'm focusing on the District and talking about some problems that are specific to the District here, they also are problems that are similar in many, many states. It is just the variations, the nitty-gritty of why they occur that is different from state to state.
First of all, we found that discontinuity of care is a very big problem in managed care. This is too bad, given that creating better continuous access to primary care is one of the goals of managed care. Of the families that we surveyed, 46 percent had a doctor before enrolling in managed care, and of the total population, 37 percent had to change doctors upon enrolling in managed care. This was in a system where Medicaid managed care enrollment was voluntary in the District at the time of our survey, but if people didn't select a plan within 15 days, they were default assigned to a plan. The District has one of the worst default assignment rates in the country, or it did until recently: 80 percent of people offered the choice did not make a selection within 15 days, and were default assigned to managed care plans.
Another thing that was bad about the District's system was that when people recertified for Medicaid, they had to again tell the District what managed care plan they were choosing, and if they didn't make a choice within that same time frame, again they were default assigned. So we ended up with households who switched plans during the recertification process.
The second problem was with regard to access to care. One-fifth of the people we surveyed said that they waited more than an hour to see their doctor at their last appointment. This was both for the parent's and for the child's last visit. Interestingly, the District has an access standard that says the average Medicaid beneficiary should wait no more than an hour to be seen. Well, the average Medicaid beneficiary does meet that standard, but that doesn't prevent one-fifth from waiting more than an hour to be seen.
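A small illustration, with made-up waiting times, of how a plan can meet an average-wait standard of under an hour while one-fifth of patients still wait longer than an hour; the average is pulled down by the many short waits, so it says little about the tail.

    waits = [15, 20, 25, 30, 35, 40, 45, 55, 75, 90]  # minutes, hypothetical

    average = sum(waits) / len(waits)
    share_over_hour = sum(w > 60 for w in waits) / len(waits)

    print(f"average wait: {average:.0f} minutes")         # 43 minutes -- standard met
    print(f"waited over an hour: {share_over_hour:.0%}")  # 20% of patients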
One quarter of families reported a problem with access for a family member in the last year, and appointments for urgent sick care and getting return calls in an emergency were some of the biggest problems that were reported.
The District provided in its capitation rates -- the plans were charged with providing most services to a TANF-related population. But the District managed care plans were not supposed to provide mental health and substance abuse services. Rather, people had the right to go to the public health system for those services, or to any provider that would accept Medicaid on a fee for service basis. But there was really little knowledge of how this was supposed to work. In fact, half the households that we surveyed thought that they needed referrals from primary care physicians to get mental health or substance abuse treatment. Similarly, few knew that they could go to any provider for family planning services.
We collected reports of some very serious problems in the mental health/substance abuse arena, where people nearly lost detoxification beds as they waited for provider referrals that were not necessary.
Then finally, we found that less than half of the respondents had any idea how to handle problems with managed care plans. Few knew how to file complaints, few knew how to file requests for fair hearings. More knew how to dis-enroll, but still, a third of the people surveyed didn't know how to dis-enroll from plans.
Now, since we reported those findings, the District has made every effort to correct the problems. But I just want to give you a very quick flavor of problems that were reported to us last month, which show that the problems haven't all been fixed.
One is, with regard to access to mental health/substance abuse services, the District really did step up educational efforts about how to get those services. But the concrete problem that remains for beneficiaries is that they now don't have Medicaid cards that identify their Medicaid numbers; they have managed care cards that give their managed care numbers instead. When they try to go to a fee for service provider, they can't be seen, because the fee for service provider doesn't easily know that this is indeed a Medicaid recipient that they are seeing.
A second problem in the last month was with regard to dis-enrollments. The District did a much better job of educating people about their dis-enrollment options, but we had three reported instances where parents learned after the fact that their children had been dis-enrolled from managed care plans, and when they called to find out why, the enrollment contractor for the District said, oh, those three children requested dis-enrollment. Well, the three children were ages one, four and 11, not a very likely prospect. Although we are hoping that we can have very savvy consumers, we don't think it reaches the childhood years.
Now, the third problem that we have come across in the District this past month is that the hospitals are routinely requiring patients to sign financial responsibility statements when they go into a hospital for treatment. This is not just for Medicaid, but for everyone. But the hospitals don't say, oh, you're on Medicaid, you don't have to sign this.
So we find that when people are in a dispute with their managed care plan, the managed care plan doesn't authorize the service, but Medicaid will still pay for it. The beneficiary is still being left with the bill, and in some cases really serious liability problems and judgments against them for not paying those bills.
Now that I've given you a flavor of the nitty-gritty, I want to tell you about what data states are collecting, and what we see as some of the weaknesses in it. This partly comes from a study in which we gathered the publicly available data on Medicaid managed care in Maryland, Virginia and the District. We don't have everything yet, and we are only beginning to analyze it, but here are some preliminary results.
First of all, consumer satisfaction surveys often fail because they are left to the plans to administer. The plans don't have a consistent methodology for administering consumer satisfaction surveys, nor do they have consistent questions to ask, nor do they have good benchmarks.
For example, we found in the survey that I reported earlier that one out of four beneficiaries we surveyed had an access problem within the last year. Well, a plan could turn that around and say 75 percent of households were satisfied with access. Which is the right way to report it, and should you be concerned about that figure?
Second, voluntary dis-enrollment rates are not consistently collected by states. These seem to be a fairly good indicator that consumers are dissatisfied with their plans, but the states don't always separate voluntary dis-enrollment from other sorts of dis-enrollment, like losses of Medicaid eligibility.
This is draft information, but I'll just share it with you. These are dis-enrollment rates across some of the D.C. Medicaid managed care plans. If you look only on a monthly basis, the differences in percentages are so small that you can't really tell anything.
If you look at yearly rates, you will see some fairly startling differences. The dis-enrollments ranged from eight percent in one plan to 38 percent in another. So it does seem to be a useful indicator.
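A minimal sketch of why monthly dis-enrollment rates that look nearly identical can diverge sharply once annualized; the monthly rates below are illustrative figures chosen to land near the eight percent and 38 percent annual rates mentioned, not actual plan data.

    def annualized(monthly_rate: float) -> float:
        """Chance an enrollee voluntarily dis-enrolls at some point during a year,
        assuming a constant monthly rate and continuous enrollment."""
        return 1.0 - (1.0 - monthly_rate) ** 12

    for name, monthly in [("Plan A", 0.007), ("Plan B", 0.039)]:
        print(f"{name}: {monthly:.1%} per month -> {annualized(monthly):.0%} per year")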
Another weakness is that states all use different ways of defining auto assignments. So if you want to look at auto assignments state by state as an indicator of how well states are educating beneficiaries, states will immediately scream that you're comparing apples to oranges.
States are contracting for external quality reviews, but the scope of those reviews varies, and we do think that this can again be a useful measure. It is just that we would like to see more consistency in what is obtained.
Just to again show you an example of what these show, these are again District plans. In a medical records review, we looked at what some of the top 10 problems reported on medical records were. Again, there were no benchmarks given as to what to expect. But you will see some things, like medication lists not being maintained well and physical exams not being given, that seem to really raise areas of concern.
But the District does not do any focused studies. Here is an example from Maryland; I think this is in the handout as well. Maryland did do some focused studies. You can look at hypertension, immunizations, other things that were not available in the District.
In D.C., although focused reviews aren't part of the EQRO scope of work, recently the District asked plans to each conduct focused studies. Consumers really worked with the District to get them to look at problems that were real health care problems of consumers, hypertension, diabetes, HIV, as well as the childhood measures that most plans are looking at already.
But we were not able to achieve -- even there, the District is not requiring the same focused reviews by each plan, so the plans that are accredited by the NCQA don't have to meet the same standards.
Across states, there are difficulties interpreting complaint data, as on the previous slide that I showed you, since people don't know how to complain. The lack of complaints doesn't indicate the lack of problems, and states really haven't come up with good ways of aggregating this and looking for patterns. It seems that the states that really look at lots of individual complaints get much better information.
Access measures are inconsistent, despite contracts that set ratios of primary care providers to patients, for example. States haven't consistently collected that data or analyzed it in a consistent way. As Neva Kaye mentioned earlier this morning, there really are no benchmarks to help states figure out when they have a good result and when they have a bad result.
So with that, I'd like to kind of leave you with my recommendations for questions. We would like to know about continuity of care, both during initial enrollment of Medicaid populations and ongoing. We would like to know whether the general population has good continuity of care, and we would like to know about populations with disabilities, where there have been really serious interruptions in treatment documented in many states as people enrolled in managed care plans.
Second, we would like to know about typical voluntary dis-enrollment rates under states that do monthly, six month and annual dis-enrollment. We would like to see standard instruments for assessing consumer satisfaction with additional questions specific to state programs. I think that a lot of the meaning of our work with Head Start had to do with not only just collecting the results, but also in probing for the reasons that the problems existed, and I would really urge others to do that as well.
We would like to know about treatment outcomes for health problems typical among TANF-related beneficiaries. At least in the District, these are the most common problems that we see. I expect that they vary somewhat in other states, but states seem to have a pretty good record of looking at least at some pregnancy and immunization records. They don't look so well at measures for low income women, and we think that is a tremendous gap. Hypertension, diabetes, such things should be looked at.
Even when they do look at children's screens like EPSDT, often they only look at whether screens as a whole have been done, and not where the gaps are in screening, not whether it is hearing, vision, dental, whether all of those things have been done.
We would like to see best practice monitoring instruments for chronically ill people. We have had a lot of talk about behavioral health throughout the day. We are particularly interested in things that would help states make decisions about how to handle behavioral health: should it be a separate carveout, a specialized managed care plan, a fee for service option, or part of a general managed care plan? I suspect that all of that varies tremendously, depending on what the public mental health system is in the state, and we would really like to see the data broken down that way.
Then finally, do any aggregations of complaint data meaningfully reveal patterns, and in the expansion states, when does cost sharing reduce participation and access.
Families USA has a number of guides, some published and some forthcoming, on various Medicaid managed care issues, which take consumer experiences in a variety of states and try to portray what seem to be best state practices. So we have a guide out on marketing and enrollment and one on cost sharing, which you are welcome to; it is useful. Shortly forthcoming we have ones on complaints, grievances and hearings, meeting the needs of the chronically ill, and access to providers.
Thanks.
DR. IEZZONI: Great. This has been a great panel, really informative. Do subcommittee members have any questions or comments? George? Vince?
DR. MOR: I have a question for Robert Grist. To your knowledge, has anyone attempted to use the civil rights and ADA argument as an equal access under the law argument, in terms of health care?
DR. GRIST: Sure, definitely, but not on equal quality or equal access to the covered services. As a quality issue, the ADA has not been used. As an access issue, it has been used.
First, in the Oregon Medicaid waiver program, and both Clinton and Bush recognized its applicability there, and frankly, the disability advocates there are pleased that that awareness was created and used effectively in raising issues. There has been a lot of sensitivity to those kinds of issues in Oregon.
DR. MOR: But is the quality issue framed in terms of the same definitions of quality, even though there are different types of services about which quality might be compared? What is pertinent for somebody with a disability might be quite different from what is pertinent for a mammogram for a 45- or 50-year-old woman.
DR. IEZZONI: Women with disabilities need mammograms, too. And they need Pap smears, even though a lot of doctors don't know how to get them up on the table.
DR. GRIST: Vince, HCFA has used ADA issues in challenging Medicaid waivers.
DR. MOR: For New York?
DR. GRIST: For New York, and around the capacity of plans to provide AIDS treatments, as you know. So I think there is some reason for thinking this is a relevant issue. The question is, how do we expand its use? Frankly, the Office for Civil Rights has expressed some interest in these kinds of issues.
DR. IEZZONI: I think the problem is going to be how you measure quality. What a brilliant statement I just made.
DR. GRIST: But when it gets to the legal domain, it actually doesn't matter anymore. It is not a matter of measuring, right?
DR. IEZZONI: I really think that, Bob, I would probably take exception to your statement that you could aggregate together all of the experience of people with a full range of what would qualify as disabilities, because I think that when you talk about people who are disabled due to dyslexia or some other cognitive situation, that it might be very different than somebody who is disabled due to Parkinson's disease, due to a neurologic defect.
So I'm not sure that the small N problem is going to disappear. Have you looked at that? Are different managed care programs, maybe carveouts or in-services for people with disabilities, specializing in particular types of disabilities, so they would have larger samples for a more homogeneous population?
DR. GRIST: For getting a benchmark, I think that is the appropriate way to go. You don't want to hold the plan accountable for a quality measure that isn't obtainable anyplace.
DR. IEZZONI: Right.
DR. GRIST: But if it is attainable in a specialized program someplace, I think all plans, public and private, ought to be held accountable to that standard, on the grounds that you are providing medically necessary services to a low risk person, so you have that same obligation for a higher risk person, even if it is not cost effective for the plan. They have developed all these mechanisms to make sure that decisions that are not cost effective don't get made in the consumer's interest.
I think that is ultimately the accountability issue that we have to settle in law. But it seems to me ADA is a very useful tool for two reasons. One is that it is a civil rights law, and two, people with disabilities are more vulnerable to the cost containment pressures, for many reasons, in managed care plans.
So as a litmus test for whether civil rights are being violated by the types of administrative structures used, I think it is a strategy that should be pursued at this point.
DR. IEZZONI: George, did I see you heading towards the microphone?
DR. VAN AMBURG: You said that the state health agency had the authority to collect data from all health care providers.
DR. GRIST: Yes.
DR. VAN AMBURG: What is the mandate on the providers to provide that information, and what are the penalties? And has there been a problem with this?
DR. GRIST: There has been no problem to date. The mandate is part of licensing.
DR. MOR: Mr. Greenfield, with the encryption structure in the Minnesota law, does that mean that record linkage is still possible?
DR. GREENFIELD: Because the encrypted number is based on the social security number, and the encryption should be the same, yes. It is one of the things we want to do, is to track who is receiving services from where. If they get it from more than their health plan, we would like to know that, in looking at outcomes.
There is an exception possible, which would create a complication. There are folks who object to having their social security number used. If we accommodate that, and it all depends on what the cost will be, that would create a problem in that area for those people. I'm not sure we will accommodate that.
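To make the kind of linkage described above concrete, here is a minimal sketch in Python of deterministic, keyed hashing of an identifier. It is illustrative only, not Minnesota's actual implementation; the key and function names are hypothetical.

    # A possible approach to deterministic, keyed encryption of an identifier for
    # record linkage (illustrative only; not Minnesota's actual implementation).
    import hashlib
    import hmac

    SECRET_KEY = b"state-held-secret"  # hypothetical key held by the state agency

    def linkage_token(ssn: str) -> str:
        """Return a stable token for an SSN; the same input always yields the same token."""
        normalized = ssn.replace("-", "").strip()
        return hmac.new(SECRET_KEY, normalized.encode(), hashlib.sha256).hexdigest()

    # Because the token is deterministic, records from different sources (health
    # plans, other publicly funded services) can be joined on the token without
    # ever sharing the raw social security number.
    assert linkage_token("123-45-6789") == linkage_token("123456789")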
DR. IEZZONI: Yes, Hortensia?
DR. AMARO: I was wondering if in your state, the state health department collects the information on people who utilize services that aren't provided say through the managed care company, but through a state service that is funded through federal dollars, through the block grant dollars.
DR. GREENFIELD: We would be collecting all of that. In the encryption, if it is based on social security number, we would be able to correlate more than we do now.
DR. AMARO: So you would be able to get a complete picture of the services used by someone through managed care or through other publicly funded --
DR. GREENFIELD: Fee for service, yes. The place at the end are the providers that we don't license, which people are now interested in, what is euphemistically called alternative care. We don't necessarily collect data for most of those. Some we do, some that are licensed.
DR. IEZZONI: Richard?
DR. HARDING: Two things. One is, I predict you will have much more -- if you are collecting all of that data, you'll have much more alternative care, because that is kind of an underground --
DR. GREENFIELD: Absolutely.
DR. HARDING: That wasn't my question. You brought up something interesting. You were saying that the debate goes on, whether managed care becomes a cost saver or not.
DR. GREENFIELD: Yes.
DR. HARDING: Is that one of the reasons, or is it to improve the quality, or both?
DR. GREENFIELD: Well, the original emphasis probably was on cost saving and predictability of state costs, which managed care achieves because you are pre-paying it on an annual basis, and you will eventually build a track record that will give you a good indication, even if you don't know all the forces that caused the changes.
There were those interested in cost savings. I think the savings may be a lot less than people thought, and right now I'm even wondering whether, when we finish, they will exist at all. But we will get better quality. It is costing us more in rural areas, even for families with children, which are fairly predictable, easy clients to deal with, with a lot of experience. But they are having much better access to health care, which I think should lead to an improvement.
The worst example in Minnesota is dentistry. Outside of the metropolitan area, the ability of an AFDC family to get dental care is almost negligible, unless they were previously employed and had a dentist who is willing to keep them now that they are on Medicaid, which occasionally occurs.
The contracts provide dental care, and they have an absolute guarantee that it will be available. It would be helpful for the children.
DR. HARDING: But the question I had was, you had said that you put out contracts for the fee for service of the last year minus five percent. I'm a not for profit organization and I pick that up, I have 95 percent of last year's payments. But I don't have an infrastructure like you had. So you get rid of your infrastructure and I'll hire it to take care of my -- and then you rehire a group of higher expense people, so we are doubling the infrastructure cost, won't we?
DR. GREENFIELD: In Minnesota we are using HMOs and Blue Cross/Blue Shield -- part of which is an HMO, part of which is not -- as our providers in these managed care contracts. They have a large enough infrastructure not to need additions for the populations we're talking about adding, in terms of paying bills.
We, however, need -- instead of people processing bills and sending out checks -- a staff that can go over data and determine whether quality and appropriate services are being delivered, and all kinds of other things.
At the very least, we're not going to save any money; it may cost us money to manage these contracts appropriately. But I don't think it is a large increase in cost for the size of managed care plans that we are contracting with. Most of them are around 700,000 people enrolled already. Adding another 50,000 does not usually raise their costs very greatly in terms of paying their providers. Some of them do pay on a fee for service basis, some of them are empaneled HMOs in the rural areas. They tend to all be contracts on some version of a fee for service or a capitated plan to primary providers.
DR. IEZZONI: I was struck in all three presentations by how you are getting data directly from beneficiaries or clients, or however you choose to describe them, Bob. In your case, you wanted to ask people about their functional performance. You want to ask people about their satisfaction.
How do you deal with the issues of linguistic minorities?
DR. GREENFIELD: The plans are required to have people who can handle linguistics in their provision of services.
DR. IEZZONI: But for the surveys?
DR. GREENFIELD: For the surveys, we have to do the same thing. We have to find people. Calling somebody in a language they don't understand is a waste of everybody's time.
DR. IEZZONI: That's right. It may not be as complicated in Minnesota, though, as it might be in California.
DR. GREENFIELD: We have a large number of Southeast Asians, as our churches have gone out of the way to attract them, and somehow they have put up with our climate, which is utterly amazing, but we do have a large number of Southeast Asians and other immigrant groups because of active work by churches.
DR. IEZZONI: I'm just wondering also about the notion of what makes somebody feel that they are functioning well -- how much study there needs to be of the various constructs that are evaluated through these patient or client responses, the surveys. There are various ethnic groups that tend to have very different attitudes about whether they are satisfied, or whether they feel pain and so on.
I just wonder how you might think about thinking about that as you go about evaluating Medicaid managed care.
DR. FISH-PARCHAM: In our case, we weren't able to survey all the linguistic groups in the District. We did have one Hispanic surveyor who surveyed within the Hispanic community.
We used a combination of types of questions. Some of them were, how long did you have to wait for your last appointment, multiple choice options. Some of them were narrative, did you experience a problem in the last year, what was it, and then followed by some specific questions about, did you go to the emergency room and still receive a bill, things of that sort.
I think it is important to use a combination. We did face to face interviews. Interestingly, and this wasn't something I could have predicted, the Head Start parents told us that in many cases when they surveyed people who did have a problem, they did not want to disclose it to their peer, even with confidentiality. They instead would flag that there was a problem and ask that the staff call back with the problem information, which we did.
DR. GRIST: My work did not involve consumer surveys at all. I was looking at how plans make decisions about health care. I think a needs assessment process is a critical one, and frankly, that needs to deal with the accessibility issues, including linguistic or cognitive accessibility.
DR. IEZZONI: But Bob, your proposal of evaluating performance based on patients' self-reports of their functioning certainly will ultimately require patients to report their functioning. There may be differences across different subpopulations.
DR. GRIST: Yes. I think the focus ought to be on the outcomes that are most meaningful to the consumer, and not necessarily just the medical criteria that are often used in acute care.
So I agree, this issue that you're raising is one that would have to be addressed. But in my view, the plan actually does the before and after or the changes in health status over time, and the consumer gets to sign off on the assessment, and is empowered through that process.
DR. IEZZONI: Are there any other questions? Anybody from the audience? Any of the panelists for each other? What other permutations can I think of? I kind of see the need for caffeine in a lot of people's faces. So what I think I'm going to do is take the prerogative of the Chair and break right now.
The three panelists from 1:30 are welcome to stay. However, if you have planes to catch and so on, we really thank you for your interesting and important and informative contributions. Otherwise, why don't we return at 3:05 to start the final panel for the day?
(Brief recess.)
DR. IEZZONI: It is a little bit before when I said we were going to start, but I notice almost everybody here, having looked at the audience. So why don't we get started, because that way we can finish earlier?
Thank you to the 1:30 panelists who are leaving. Nancy from Oregon.
DR. CLARK: Thank you. It is a pleasure to be here. I will start as the other panelists by saying who I am and what our setting is.
I have spent seven years working in a managed care organization and another 11 in public health. My colleagues in managed care think I've gone to work for a bureaucracy of imbeciles who do nothing but get in the way of good patient care. My friends in public health think I am an apologist for the evil empire. So the fact that nobody likes me is why I was qualified to come and be here.
PARTICIPANT: Welcome to the group of evil --
DR. CLARK: In Oregon in 1993, we passed the Oregon Health Plan. That was a series of policy and legislative reforms that included the famous list for which we were vilified in the press at the time, for rationing care. It also included a pool for small employers and high risk individuals and an employer mandate, which was not enacted. The important piece, and the one of relevance today, is the conversion to managed care for 85 percent of our Medicaid population. That was phased in, starting with the AFDC population and our new eligibles, which was everyone up to 100 percent of the poverty level. Then we rolled in the blind, disabled and chronically ill in 1996, and then we added behavioral health and dental health, and also raised the income eligibility.
So it has been a series of reforms that have rolled in more and more people with access to the minimum benefit package. We have several agencies involved. I want to explain this, just so you will know what perspective I'm bringing to this.
We have the Office of Policy and Research, which reports directly to the governor, and they are responsible for the overall policy direction of the health reform. Then within the department of human resources, which is a huge umbrella agency, we have the Medicaid administrative office, which does all the day to day administration. Then we have these other agencies, which includes me. I am with the public health department, also the mental health department, alcohol and drugs, senior, disabled, we all collaborate with our Medicaid agencies. So I'm not directly with Medicaid, but participate in a lot of the planning.
There is another agency that I will also mention, because some of the earlier panelists alluded to this issue, which is our Insurance Commission. It is very heavily involved with the commercial managed care. We find it is very important to keep Medicaid and commercial and Medicare all somewhat in sync, so that we don't create administrative nightmares. So we do also work with our insurance division a lot.
We have 15 plans, nearly statewide coverage. I think one of the things I need to mention about Oregon is that we do seem to have unprecedented collaboration. We meet monthly with all the medical directors, quarterly with QI managers, monthly with contractors. If we have a meeting they will come, so that helps.
So now I will try to address some of your questions. I am going to assume that you are as interested in our failures as our successes, so if I drag out all of our dirty laundry, I hope you will be gentle.
The questions that we want to address are related -- I'm talking particularly from the public health standpoint, but we have an array of questions that have to do with simply, do people in our state have access to medical care, especially disadvantaged populations. In that we include Medicaid, but also rural populations, working poor families and certainly diverse ethnic groups, even though they do have insurance.
When we actually operationalized that question, we asked the question a little bit differently than I seem to see in some of your material that I was reading in preparation for this. We asked, are they getting the care they need, particularly from a prevention standpoint, and are they getting the care that is comparable to commercially insured managed care. We don't really have fee for service anymore as a comparison group; it is not representative of anything parallel, so that is not really our measure.
Another question that we tried to address, or a series of questions from the glass half full standpoint, is, how can we use our managed care organizations as a lever to improve the provision of preventive care to all Oregonians. That is, how can we help our managed care organizations learn to do population-based practice and truly manage care for quality intervention, rather than simply for cost. And from the negative side, how do we keep the shift to managed care, which has greatly altered the income stream for a lot of our safety net clinics, how do we keep this from ripping a giant hole in our safety net for people in Oregon.
There is a fourth series of questions that are just now emerging, and I can barely articulate them. I'm going to try to present them to granting agencies. They look at me, so I'm not doing a very good job of explaining it. Maybe you can help. But we are having to deal with the issue of, where is the variation right now in practice. Is it really at the managed care level, or is it more at the provider network level? As we work with our providers and try to get change down at the level of the interaction between the provider and the patient, where is the best place to implement data collection, so that you are really building systems that will improve care.
So those are the questions that we're working on.
Quickly I will talk about the data that we collect, both now and in the future, and how we do that and what our gaps are.
The first type of data is just addressing our main purpose, which was to increase the availability of insurance. So we do population telephone surveys routinely to see who is insured and how they are getting that insurance.
I would mention that the recent cuts from CDC regarding BRFS are very distressing to us, because we use the BRFS extensively in tracking this. We have been fairly successful. The uninsured rate in Oregon is down below 11 percent now, less than eight percent for children. We will continue to collect this for quite a while. The new areas that we are trying to get into are attempts to understand the underinsured, which would be surveys of the managed care plans around copays for specific services. It is much harder to get this, and we are clueless about our self-insuring companies. We don't have any data from them about what they are doing.
The second type of data we collect would go in the category of satisfaction surveys, which is interesting. This is a new area. I guess we didn't care if people were satisfied before. We collect these annually through the mail, stratified at the managed care level, but they are administered singly through the state. There are different ones for adults, children and what we call exceptional needs individuals, and also separate for dental and mental.
The exciting thing is that we are now moving toward collaborative one-time administration of surveys for all of our Medicaid plans and most of our commercial plans, all at the same time, so that we'll have comparable data for everyone in Oregon. We are also moving toward CAHPS. We will be using the CAHPS survey this year.
As the field has evolved, we have learned that what patients think is useful information and what we the experts thought was useful information don't really overlap very well, and we think CAHPS has done a better job than some of the other surveys that are out there in terms of reporting. They do a better job of getting the data back to people in ways that they can actually use to choose plans. So we are jumping on that bandwagon.
We also have -- as the other speakers said, we have very high response rates, particularly for the people who had no insurance before. They are quite satisfied now that they have something. Whether you report that as 85 percent satisfied or 15 percent less than satisfied is obviously a political issue.
The third category of data is eligibility and enrollment, which is old data but with a new purpose. We have a lot of debates in our state about whether there really is this turnover, enrolling, dis-enrolling, whether it is between plans. We tried to take the point of view that we're going to use that to design interventions. We know that it is not uniform, that some populations roll a lot, churn a lot, and others don't. So if we can understand which ones those are, what their health problems are, then we can begin to do some things around data for continuity of care, if we can figure out how to do that without making the lawyers too nervous.
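As an illustration of the churn analysis described above, here is a minimal sketch (hypothetical people and dates) of counting enrollment spans and coverage gaps per person from eligibility span data.

    # Sketch of measuring enrollment churn from eligibility spans
    # (hypothetical people and dates).
    from datetime import date

    spans = [  # (person_id, span_start, span_end)
        ("A", date(1998, 1, 1), date(1998, 3, 31)),
        ("A", date(1998, 6, 1), date(1998, 12, 31)),
        ("B", date(1998, 1, 1), date(1998, 12, 31)),
    ]

    by_person = {}
    for pid, start, end in spans:
        by_person.setdefault(pid, []).append((start, end))

    for pid, person_spans in by_person.items():
        person_spans.sort()
        gaps = sum(
            1
            for (_, prev_end), (next_start, _) in zip(person_spans, person_spans[1:])
            if (next_start - prev_end).days > 1  # a break in coverage of more than a day
        )
        print(pid, "spans:", len(person_spans), "coverage gaps:", gaps)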
The fourth category would be the health status survey. We used the SF-36, administered annually, both as snapshots and then as a cohort study over time. We dutifully report the indexes, and frankly, we are kind of scratching our heads as to what we do next with that data. It seems very rich, but I'm looking for the answer as to exactly what we should do with it. We're not going to give it up, but it is definitely new, and we are looking for ways to use that usefully.
I won't go into provider network adequacy or financial stability of plans. These are not particularly hot issues in our state, but we do have to collect a lot of that. We also do the external quality review contracts with a PRO, and I'm going to talk about those a little later. Mostly we do chart audits to validate our encounter data, and we also do focused studies.
I would also put on the table extensive qualitative data that we collect as a part of site reviews. The Medicaid agency goes to each of the plans every year, looks intensely at their QI processes, their exceptional needs care coordination process for people with special needs.
Increasingly, that process is nearing NCQA's accreditation, at least the process, if not the actual specifics. That is where we look at things like, how long does it take to get an appointment, the friendliness of the brochure, the helpfulness of the membership services, ombudsman complaints and that sort of thing. So we don't do a lot of compilation of that across plans, but we do use it as a quality improvement technique with individual plans.
I would also mention plan initiated data. There are plans, some of them are quite sophisticated at bringing forward data particularly around formulary issues. Hepatitis C was the latest one that we were needing to look at, the need for risk adjusting. Clearly, when they bring data, it is self serving, but they are bringing up problems which do collectively need to be solved, and they are more sophisticated and savvy than we are, and the challenge for us is to try to be a savvy user of that data, rather than a parallel collector of that data, and that is a new challenge for us.
Now to our failures. The failures in data collection, the big fat one is encounter data. Yes, we collect it. We collect patient, provider, location, service, diagnosis, procedures. We don't get drugs or lab data.
The problem is that it looked very much like a claim form; it looked like the UB-92. In 1993, there was enthusiasm for using the savings from managed care to extend coverage to additional people, and reluctance to go to the legislature for infrastructure development, which legislators are very reluctant to fund.
That naivete, combined with the notion that we could take a 12-year-old MMIS system, just tweak it a little bit, and be able to answer managed care questions, was seriously wrong. When you change the incentives for submission of data, you need a totally new quality process. When plans are no longer reporting data because they want to get paid, but because in the year 2000 you're going to make capitation rate adjustments on it, it is just not there in the same way. The data is submitted differently, the quality is different, and so you need a whole new system for that.
When you change the field use, you need totally new cleaning and editing programs. I called just before I left and said, give me today's example. They said, today's example is that with the dental procedures you need the tooth number, and we were sticking the tooth number in, and this and that. There are a thousand of those things, and when you switch the way you're using the data, you need to genuinely and seriously and completely overhaul it. Particularly when you change the questions that you're asking, you're changing the relationships between the data and among the fields. To do that, you need a sophisticated database with flexible front-end decision support systems, with call routines that will routinely in standardized ways pull up your populations and feed them to a good biomedical analysis package, and you need to be able to do this for huge quantities of data.
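The field-level editing described here can be illustrated with a small sketch; the rules below, including the tooth-number check, are hypothetical simplifications rather than Oregon's actual edit programs.

    # Sketch of field-level edit checks on encounter records; the rules are
    # hypothetical simplifications, not Oregon's actual edit programs.
    def edit_check(encounter: dict) -> list:
        """Return a list of edit errors for a single encounter record."""
        errors = []
        if not encounter.get("recipient_id"):
            errors.append("missing recipient ID")
        if not encounter.get("diagnosis_codes"):
            errors.append("missing diagnosis code")
        # The example from the testimony: dental procedures need a tooth number.
        if encounter.get("claim_type") == "dental" and not encounter.get("tooth_number"):
            errors.append("dental procedure without tooth number")
        return errors

    print(edit_check({"recipient_id": "X1", "claim_type": "dental",
                      "diagnosis_codes": ["521.00"]}))
    # -> ['dental procedure without tooth number']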
I can give you a 40-page report on how it went wrong. I think the important thing to remember is that if you had visited us in 1996, we would have looked great. I think that is something to keep in mind as you make site visits; it is not just the data fields and what you plan to do, it is whether or not you really have a robust system that you can hammer on to ask these new kinds of questions and to work across fields that is critical and difficult and more expensive than we ever realized.
The second area of data that I think we failed on is the impact on our safety net clinics. We did not adequately see this coming, what the change in financing was doing. If you don't do the baseline, you can't do the follow-up. So we just don't have the kinds of accounting and auditing and utilization systems in our local health departments and in our safety net clinics, so we are relying primarily on anecdote, which is not a very good place to be on such an important issue.
Assessing outcomes and integration. I'll just go really quickly. Some examples that we have done, a mammogram study. In the absence of encounter data we have had to always do specific targeted collection, either through follow-up surveys or chart audits. We have done a phone survey of mammograms and found that the new eligibles doubled their rates of mammography and now just about equal the general population.
I would say that study was quite successful. Our EQRO focus studies have been around diabetes, childhood asthma, adult depression and neonatal care. These are chart audits, primarily looking for compliance with guidelines. I would say these have been mildly successful.
They worked when we had good agreement in advance about what the measure was going to be. In the ones where we didn't have good agreement, you get to the final result and all you're doing is arguing over methodology, and nobody is really listening to what you found. We certainly can all agree that documentation is very poor, and so that is where we always start. At least we want to improve the documentation, even if you think the care is great, and we know it probably is not, really.
We increasingly are deferring to standards, to HEDIS standards or any other standard that we can find that has been tested. We rely a lot on the Foundation for Accountability, because we are not experts in developing and testing measures. So when we can defer to a standard, we do so.
I would mention something called Project Prevention -- exclamation point -- which is the way we have our 15 plans work together on a single issue; so far it has been a two-year stint. There are a lot of projects that we think are very important that are not cost beneficial in the short run for a plan, but they are cost beneficial for everybody in the long range.
Our first one was immunizations. We had our plans work together on building a statewide immunization registry and getting data ready for that. It is nearly completed. We have just started on tobacco cessation, and we will be using a statewide approach to improving, we hope, providers' assessment of and advice to quit, and putting some meat on that notion of helping smokers and other tobacco users quit.
Plans also work from a menu of five other areas. They work on those alone and then have to report best practices back to us. But we have not done a very good job with data on those.
The last one I would mention is our assessment initiative, which is a new CDC grant that we are very excited about. It will help us fund some joint positions in both the health department and the Medicaid agency, jointly reporting to a committee of those two agencies, to work on the encounter data. Also, we will be giving a behavioral risk factor survey to our Medicaid population and doing birth and death matching, which we should have done under fee for service but hadn't done.
There is one point I would like to make about what we have learned from doing these studies, that is, the parallel to what you have to do in terms of your data system. Not only do you have to overhaul the information system, you have to overhaul the analysts.
The plans are very good at using encounter data for cost analysis, but the number of plans that are really using it for population management -- I can count them on one hand. The Medicaid agencies are well intentioned, smart folks, very skilled at auditing, but only one of them has actually been trained in epidemiology. Our public health analysts are very good at vital stats and very good at surveys, but they just get glazed eyes when we tell them that the first thing you've got to do is create an eligibility-based denominator, and by the way, not a single variable you want to analyze exists; you've got to create the whole file.
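As a small illustration of what an eligibility-based denominator involves, here is a sketch in Python using hypothetical enrollment spans; it counts member-months of enrollment rather than simply counting people.

    # Sketch of an eligibility-based denominator built from enrollment spans
    # (hypothetical data): members count only for the months they were enrolled.
    from datetime import date

    spans = [
        ("A", date(1998, 1, 1), date(1998, 6, 30)),
        ("B", date(1998, 4, 1), date(1998, 12, 31)),
    ]

    def member_months(start: date, end: date) -> int:
        return (end.year - start.year) * 12 + (end.month - start.month) + 1

    denominator = sum(member_months(start, end) for _, start, end in spans)
    print("member-months in the denominator:", denominator)  # 6 + 9 = 15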
There is only one corner of this triangle that is paying a salary that can attract the people who can do all three things, and it is not state government. So we feel like we really do need to retrain some of our existing analysts and help them acquire these skills, rather than trying to compete on the market for these folks, who are in very high demand. So if there is anything in your efforts you can do to help us with the training for this, we would be grateful.
I think I've used up all my time, so I'll stop there.
DR. IEZZONI: This was really very informative. Are there any quick questions to clarify points? No? Okay, moving east.
DR. RANBOM: I think Nancy made my presentation.
DR. IEZZONI: Well, it is always good to have reinforcement.
DR. RANBOM: Well, I'll try to do that, and as quickly as possible.
DR. IEZZONI: Because there is convergence of ideas.
DR. RANBOM: I'll focus on some of the things that we're doing that neither Neva nor Nancy has already talked about.
This is an overview of what we do to evaluate and monitor managed care plans. I think you have heard something about just about every one of these things, except for a few: consumer satisfaction surveys, encounter data, and clinical quality of care studies. That is what other people have called focused review or external quality review.
We rely on those top three things to do most of our monitoring and evaluation of managed care plans. Then some of these other things are also very valuable, and I'll try to go through at least some examples of every one of them.
Before I get really deep into this, I wanted to give you a little bit of background. We are a Medicaid program which enrolls almost 400,000 people in managed care plans. They are TANF eligible and/or SOBRA eligible, which we call Healthy Start in Ohio. So it is mostly a maternal, infant and child population.
We have contracts with 14 managed care plans in the state. Every major metropolitan county in the state has mandatory managed care enrollment, and we cover about 60 percent of the TANF and Healthy Start population in managed care.
As far as data collection goes, we typically have a lot of information about enrollees. It is a regular part of our enrollment process in the Medicaid program, enrollment spans, dis-enrollment, information about eligibility category, demographic information. We create diagnostic categories of patients by analyzing the information from encounter and claims data and attaching it to their enrollment information, so that we can use that to do clinical studies about those populations.
We do geocoding of our enrollees, so that we can do geographic mapping and analysis of that data. I'll show you a little bit of that in a moment. For managed care plans, we have a primary care physician registry, and that allows us to look at the capacity of managed care plans, how many enrollees they can logically handle.
We also have a problem that Neva talked about. That is that a lot of the managed care plans that we have in our state have providers that don't want to be Medicaid providers. They don't want to be associated with a Medicaid program. They will not accept a Medicaid provider number. So it is really difficult.
What we do is assign them reporting numbers, which they never see. The reporting numbers are internal to the plans themselves, so the plans translate whatever their internal provider number is to the Medicaid reporting number. We have tried to do this to get around some of the problems that some of the other states have had.
Here is an example of how we use some of our data to do geographic data analysis. This is Cleveland, Ohio, and this is one particular health plan called Qual-Choice. What we have done is mapped the location of all of the enrollees in the Qual-Choice plan, and coded them by how close they are to their primary care provider.
You will see that the green shows people who are within three miles of their primary care provider. In this case, this plan, 33.9 percent were within three miles, and 45.6 percent were within five miles.
You compare that to another plan. This is Personal Physician Care, and 94 percent of that plan's enrollees were within three miles of their primary care provider. This happens to be a plan which at its core is the community health center system in Cleveland.
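The distance-to-PCP mapping described here can be sketched as follows; the coordinates are hypothetical, and the haversine calculation is one possible way to compute straight-line distance, not necessarily the method Ohio used.

    # Sketch of the distance-to-PCP analysis: straight-line (haversine) distance
    # between geocoded enrollees and their assigned primary care providers.
    # Coordinates are hypothetical; this is not necessarily Ohio's method.
    from math import asin, cos, radians, sin, sqrt

    def miles_between(lat1, lon1, lat2, lon2):
        """Great-circle distance in miles."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 3959 * 2 * asin(sqrt(a))

    pairs = [  # (enrollee lat/lon, assigned PCP lat/lon)
        ((41.50, -81.70), (41.51, -81.69)),
        ((41.45, -81.80), (41.52, -81.60)),
    ]
    distances = [miles_between(*enrollee, *pcp) for enrollee, pcp in pairs]
    print("share within 3 miles:", sum(d <= 3 for d in distances) / len(distances))
    print("share within 5 miles:", sum(d <= 5 for d in distances) / len(distances))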
I'll run quickly through some of this information. We have information on voluntary dis-enrollment. I'm just trying to take you through quickly a very quick look at a Medicaid program at ground level and what data we use to monitor our health plan.
This is a PCP capacity report. We also have been doing consumer surveys for quite a long time. Right now, the Medicaid consumer assessment of health plans survey is our core data set. We have additional domains that we have added; health status and behavioral risk factor questions are also in there. Our CAHPS survey is currently in the field. It takes about 15 minutes to administer, and the vendor reports that most Medicaid recipients are really eager to respond to this. Once we get hold of a Medicaid recipient, they are very happy to provide us with the information.
One interesting thing is that we've got a lot of Russian and Romanian immigrants in the Ohio area, so they have hired an interpreter to administer this questionnaire in Romanian and Russian. I assume that there are a lot of different Russian dialects, and it would be difficult to do that.
One state's major failure is maybe either another state's success or another state waiting to fail. I'm not really sure where we are with this, but we like to think that we have done pretty well with encounter data, particularly since we have only been doing it for a year. We are basing performance measures on some of the encounter data that is being collected.
Early on, we had about a six-month process, where we negotiated with the managed care plans, what information we were going to collect with encounter data. We tried to work through all the problems. There are still some that we haven't solved yet, but that is kind of a day to day process that we work on.
We collect information in the UB-92, HCFA 1500 and NCPDP formats. NCPDP is the proprietary prescription drug reporting format. We don't require everything, just selected data elements, and I don't want to go through that list. We also do these clinical quality of care studies, using both encounter data and medical record reviews in tandem.
Some of the areas that we have looked at are very similar to other states: prenatal care, childhood asthma, comprehensive exams, dental care, antibiotic use, case management. There is more, but we change what we do from year to year, because managed care plans get good at the things that we look at, and then if we keep looking at those same things, we really haven't done anything, we haven't improved care. So we work with plans to decide what areas we are going to focus in on.
We have also very recently started to do case mix analysis with our encounter data. We have been using a very traditional age/gender analysis to do capitation of managed care plans. Now we are looking at encounter data to see what kind of variation there is in case mix, using their own encounter data.
This happens to be using the disability payment system model that Rich Kronick has put together for the disabled population in managed care. Now, this isn't necessarily a disabled population, although you will find that there are quite a few children and adults in the TANF and Healthy Start programs who are disabled, and this reflects a lot of that variation.
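To illustrate the idea of diagnosis-based case mix comparison, here is a simplified sketch; the weights and plan rosters are invented for illustration and are not the actual disability payment system values.

    # Simplified sketch of diagnosis-based case mix comparison; the weights and
    # plan rosters are invented, not actual disability payment system values.
    WEIGHTS = {"asthma": 1.4, "diabetes": 1.8, "healthy": 1.0}  # hypothetical

    plan_members = {
        "Plan A": ["healthy", "healthy", "asthma"],
        "Plan B": ["diabetes", "asthma", "healthy"],
    }

    all_scores = [WEIGHTS[c] for members in plan_members.values() for c in members]
    overall_mean = sum(all_scores) / len(all_scores)

    for plan, members in plan_members.items():
        plan_mean = sum(WEIGHTS[c] for c in members) / len(members)
        print(plan, "relative case mix:", round(plan_mean / overall_mean, 2))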
Incidentally, we had one plan that decided they didn't want to contract with us anymore, because we didn't offer them enough money. Very interesting, it turns out that it is the plan that had the highest case mix.
We also have scads of performance measures that we use, and we have been benchmarking them against our fee for service delivery system. This is initiation of prenatal care for an HMO. It is within 42 days of HMO enrollment or within a month of conception. If you look at the HMOs versus our fee for service system, most of the HMOs are doing better than fee for service. There is kind of a mix. In a lot of cases, the fee for service system does a little better and some HMOs do better.
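The HMO versus fee for service benchmarking described here amounts to computing the same timeliness measure within each delivery system. A minimal sketch, with invented records and the 42-day window mentioned above:

    # Sketch of the benchmark comparison: rate of timely initiation of prenatal
    # care (within 42 days of enrollment) computed separately for each delivery
    # system. Records are invented for illustration.
    from datetime import date

    records = [  # (delivery_system, enrollment_date, first_prenatal_visit)
        ("HMO-A", date(1998, 1, 1), date(1998, 1, 20)),
        ("HMO-A", date(1998, 2, 1), date(1998, 5, 1)),
        ("FFS", date(1998, 1, 1), date(1998, 2, 5)),
    ]

    by_system = {}
    for system, enrolled, first_visit in records:
        timely = (first_visit - enrolled).days <= 42
        by_system.setdefault(system, []).append(timely)

    for system, flags in by_system.items():
        print(system, "timely initiation rate:", sum(flags) / len(flags))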
I'm going to flip through a couple of these, and look at the low birth weight results. I wanted to emphasize that this information comes from encounter data merged with birth certificates and our recipient master file. We use all three sources of data to make these calculations, both with our fee for service system and our encounter data for managed care plans.
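A rough sketch of that three-way merge, with hypothetical record IDs and fields standing in for the encounter, birth certificate, and recipient master files:

    # Rough sketch of the three-way merge behind the perinatal measures;
    # record IDs and fields are hypothetical stand-ins for the real files.
    encounters = {"R1": {"prenatal_visits": 8}, "R2": {"prenatal_visits": 2}}
    births = {"R1": {"birth_weight_g": 3300}, "R2": {"birth_weight_g": 2300}}
    master = {"R1": {"delivery_system": "HMO-A"}, "R2": {"delivery_system": "FFS"}}

    merged = {
        rid: {**master[rid], **encounters.get(rid, {}), **births.get(rid, {})}
        for rid in master
    }
    low_bw = [r for r in merged.values() if r.get("birth_weight_g", 9999) < 2500]
    print("low birth weight rate:", len(low_bw) / len(merged))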
Some results from our external quality review study. There is some information that you can't get from encounter data. Here is one: documentation of risk assessment during prenatal care among patients enrolled in a managed care plan. It shows considerable variation among plans.
Here is percent of asthmatic children ages one to 19 with an emergency room visit or a hospital admission. This comes from encounter data. Then we have on the other hand a measure of consistent lung assessment among asthma patients enrolled in a Medicaid managed care plan. You can't get that from encounter data, it comes from medical record review.
We use, as I said, encounter data and medical record review together. Here is an example of that, with respect to well child visits and health checks. Here is a situation where we started with an encounter data analysis, and we asked the question, did the enrollee have a code for a well child visit, including a health check, in any visit during the study period. We found that only 40 percent had documentation on encounter data of a health check visit.
We then went back and looked in the medical records of those where the encounter data said no, and found evidence in the medical record of a complete health check during the study period. So here is a situation where encounter data wasn't very valuable; going back, we found that an additional 54 percent of the population had health check visits. On the other hand, we saw plans where there was documentation of a health check visit, but in fact no health check actually occurred. So we have been able to compute an adjusted rate of health check visits by combining that information.
This data shows the adjusted rate of well child visits among enrollees.
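The adjustment described above can be sketched as follows; the 40 percent encounter rate and 54 percent chart-review recovery rate echo the figures mentioned in the testimony, while the confirmation rate is invented for illustration.

    # Sketch of the adjusted well child visit rate: encounter data corrected in
    # both directions with a chart-review sample.
    n = 1000                 # enrollees in the study
    encounter_yes = 400      # encounter data shows a health check visit (40 percent)
    encounter_no = n - encounter_yes

    confirm_rate = 0.90      # hypothetical: share of encounter "yes" confirmed in charts
    recovery_rate = 0.54     # share of encounter "no" with a visit found in charts

    adjusted_visits = encounter_yes * confirm_rate + encounter_no * recovery_rate
    print("raw encounter rate:", encounter_yes / n)
    print("adjusted rate:", adjusted_visits / n)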
I can make a few points about encounter data versus medical record extraction. Encounter data in our mind is less costly to collect, although it has a higher rate of coding errors and omission errors, as has been talked about.
We think that there is a problem with managed care plan administrative system capacity. We have had a large variation in that. We have some plans that when we started, couldn't provide us with CPT codes, and could only provide us with one diagnosis code, even though the hospital or the physician provider was coding to the full specifications that we had in the Medicaid program. They just weren't able to accept all that information.
So a lot of the managed care plans that we do business with had to make major system changes in order to comply with our requirements. Encounter data is reliable for some measures, for some it isn't. Don't try to do immunization rates with encounter data; it is a futile endeavor. It is good for looking at resource utilization measures for capitation rate analysis. You saw what we have been doing with that. Medical record extraction is expensive to collect.
There is a high degree of inter-rater reliability, especially if you work at that, and it provides you some real valuable information. On the other hand, we have had a lot of problems with plans failing to produce records. In a lot of cases, that is a situation where they have a contract with a hospital that has multiple clinic sites, and they don't know which site to pull the medical records from. As I talked about earlier, medical record extraction is good for measuring the clinical processes of care.
You asked questions about what we do with penalties and sanctions. This is our policy for non-compliance with reporting requirements. We have standards for timely submission of encounter data and for minimum claim volume. For medical records, we have standards for timely submission and standards for failure to provide.
What we do if the plans don't meet the standards is issue what is known as a corrective action plan. While the corrective action plan is in force, we deduct five percent of the managed care plan's capitation payment during the period that the managed care plan is non-compliant with the corrective action plan. We hold it in an escrow account, and when they meet the corrective action plan, we give them their money back. It is a pretty effective sanction.
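A minimal sketch of the withhold-and-escrow mechanic just described, with hypothetical payment amounts:

    # Sketch of the withhold-and-escrow mechanic; amounts are hypothetical.
    monthly_capitation = 500_000.00   # plan's monthly capitation payment
    withhold_rate = 0.05              # five percent deducted while non-compliant
    months_noncompliant = 3

    escrow = monthly_capitation * withhold_rate * months_noncompliant
    paid = monthly_capitation * (1 - withhold_rate) * months_noncompliant
    print("held in escrow:", escrow)   # returned once the corrective action plan is met
    print("paid while non-compliant:", paid)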
You will see that some of the measures in the data analysis slides that I showed included some perinatal outcome measures. Because it is primarily a maternal, infant and child managed care program, it is difficult to look at chronic disease outcomes. So we concentrate mostly on perinatal outcomes: low birth weight, preterm delivery rate. We have a risk adjusted C-section rate. We look at the rate of fetal alcohol syndrome, because we have a program which requires patients who are assessed with alcohol problems during pregnancy to be referred to an alcohol and drug addiction program. We look at neonatal readmissions. We also look at hospital admissions for ambulatory care sensitive conditions.
I'll talk for a minute about our decision process for data collection. It takes a lot of leadership and vision to be able to do this work correctly. I think our leadership in the Office of Medicaid was extremely forward looking. We hired about nine people from outside of the agency to come in and manage this data collection and analysis process. We created new job descriptions with higher pay levels. We attracted people with clinical experience, service provision experience and managed care experience, and then we taught them how to do data analysis with extremely large data sets.
That has been a really successful program for us. It also involved our staff understanding some of the technical innovations in medical informatics. We go to the AHSR conferences, we read all the journals, we are looking at all the latest products that are out there on the marketplace, and we try to implement them as soon as we can. We try to be part of some of the analytical studies that are being done. We include consumer advocates and providers in managed care plans in all of our decision making processes.
You asked a question about data collection in managed care versus our other programs. Our approach is to have equivalent measurement processes for both managed care and fee for service data collection. Everything that we do in managed care we are also doing in our fee for service delivery system: consumer surveys, targeted clinical studies, and, as you saw, encounter data versus fee for service comparisons.
The last two slides are logistical impediments. I mentioned before the variations in the capacity of managed care plans' administrative and clinical data systems. That has been a real problem for us. There are also limitations in the UB-92 and HCFA 1500 electronic formats for transmitting data elements. We found that certain data elements that reflect what a managed care plan has done, versus what a provider has done, are difficult to transmit in these formats.
For example, how much a plan paid and what day they paid it, whether they denied a claim, those kinds of things can't be -- there are no places to transmit that information on these formats. What was mentioned earlier today is the length of enrollment of Medicaid managed care enrollees.
I was trying to think up some recommendations for this group, and I came up with three. One, consider a national review of the ICD-9 and HCPCS/CPT-4 coding systems with the intention of making changes that will enhance our ability to measure delivery system performance. How is that for a recommendation?
Well, honestly, there are some codes in the CPT-4 coding system which are really very difficult to use. There are prenatal care codes which span eight to 12 visits. I don't know how to evaluate that when I see it in an encounter data report. And we see those even though we have told plans not to use those codes.
Review the state of the art of managed care plan administrative and clinical data systems and make recommendations for their improvement. Consider the idea of standards and system certification, just as MMIS systems were certified and, I think, continue to be certified.
Consider changes to the electronic format for health care transactions that will accommodate data transmitted from managed care plans to purchasers.
I think I'm done.
DR. IEZZONI: Thank you. Any very quick questions to clarify any confusion?
DR. AMARO: I just had a question. You mentioned that women are referred who are identified as having an alcohol-related problem, and they are referred to a treatment program. Is that a treatment program that is paid for by the managed care companies or that is outside that system?
DR. RANBOM: We have an interesting --
DR. IEZZONI: Hold on, you need to answer on the mike.
DR. RANBOM: We have an interesting bifurcation of our mental health and substance abuse programs in the state. We have managed care plans that are responsible for delivering those services, but we also have community mental health and community alcohol delivery systems which are outside of the managed care plans.
So they can be referred either within the managed care plan or into the community alcohol and drug addiction system. It is one of those examples that -- I can't remember, it was the first speaker -- talked about around cost shifting and behavioral health.
DR. IEZZONI: Any other really quick questions? All right.
DR. BREWER: Ready for the cleanup hitter?
DR. IEZZONI: We are.
DR. BREWER: Well, I am delighted to be here this afternoon. I admire your perseverance in sitting through all of these presentations. I hope you are up for one more here.
DR. IEZZONI: They have been great.
DR. BREWER: Yes, I think they have been excellent presentations. I really enjoyed them myself.
Well, all the members of the subcommittee should have a nice red folder in front of you. If any of you are college football fans, you might appreciate why I would have selected a red folder. Nebraska's colors are red and white.
DR. IEZZONI: Will you enlighten the rest of us?
DR. BREWER: That's right.
DR. GREENBERG: As executive secretary I had requested blue.
DR. BREWER: That's right, but we know that if Nebraska had played Michigan, we would have won. But anyway, that's another story.
Inside the folders, hopefully we have copies of the handouts. Barbara, were you able to make those? Included in your folders, first of all, the outline on the right side is basically what I am going to be speaking from. So I'm going to skip around a little bit and basically follow the outline, but I'm not going to cover every line. But that is the information I'm going to be discussing with you today.
On the left side of your folder -- I'm talking to the subcommittee members here, and hopefully the audience has these as well -- are the organizational chart for the Nebraska health and human services system, a report to the legislature, which includes a general description of the Nebraska Medicaid managed care program, and then a list of the data elements that we have in our encounter data system.
I actually hadn't originally planned to say too much about myself, and I won't, but just by way of introduction, I'm an epidemiologist. I actually work for the Centers for Disease Control. I am on assignment to Nebraska, and have been there for all of seven months now as the state chronic disease epidemiologist. Much to my surprise, I have ended up spending a lot more time on Medicaid managed care issues than I had anticipated that I would be spending, and it actually has been really a very fascinating involvement for me.
I'd like to start out by just sharing with you a little bit about the Nebraska Medicaid managed care program. Then after that, I'm going to go into discussion that hopefully will address most of the questions that you had asked panel members to address.
By way of introduction then, let's just take a quick look if you will at the organization chart. I'm not going to spend much time on this. All I really wanted to show you is that the Nebraska health and human services system includes three what we term agencies. One is a service agency, the second is finance and support, the third is regulation and licensure, which is actually where the epidemiology area is located.
The Nebraska health and human services system actually just came into being in January 1 of last year. It was essentially formed as a super agency, by consolidating five agencies down to three. I think that is important to mention. In our case, it actually has facilitated collaboration between the former health department and the Medicaid program, and that is not necessarily true in all states, although certainly Ohio and Oregon are good examples of states where public health and Medicaid have been collaborating well. But I think in our case it has worked that way.
There are a total of a little over 103,000 Medicaid managed care enrollees in Nebraska. The budget for the Medicaid managed care program is approximately $76 million. Personnel and staffing, there are only five fulltime staff assigned to the Medicaid managed care program.
I think that is important to realize, because when we start talking about doing data manipulations and so forth, you have to realize the small capacity that you're dealing with, not just in Nebraska, but in a lot of states. It is a real problem, in my opinion.
We have an enrollment broker, which is the Lincoln-Lancaster County Health Department. Although there are a lot of similarities in issues that we are facing in Nebraska to other states, there is one important difference that I would highlight here. We don't have a strong local health department structure. There basically are two full service health departments in the state. One is for the health department that serves Omaha, which is Douglas County, the other is Lincoln-Lancaster, the enrollment broker. Other than that, we really don't have a local health department structure in the rest of the state. I think that is important in terms of some of the discussions we were having on the changes in local health departments and their involvement in direct clinical services.
The data manager that we use for the Medicaid managed care program is the Medstat Group. There are two main components to the Medicaid managed care program in Nebraska. The first focuses on medical and surgical services, and that began on July 1 of 1995. That is specifically focused on the Medicaid populations in Omaha and in Lincoln. There are actually two counties that comprise the Omaha metropolitan area, and then one in Lincoln. That actually constitutes about half the population of the state, and a similar proportion of Medicaid enrollees. So it is about half of Medicaid enrollees that are potentially covered by the medical/surgical component.
We have what I guess is called a mixed model of managed care plans. We have both capitated plans -- there actually are two capitated plans that Medicaid managed care enrollees can select from -- and one primary care case manager plan. The two capitated plans are Mutual of Omaha, surprisingly enough, and United Health Care. Blue Cross/Blue Shield is the primary care case manager.
As you can see, the enrollment for the medical/surgical component is about 33,000 total, and it is about a 50-50 split between the PCCM model and the capitated plans.
On the mental health/substance abuse side, that began actually a couple of weeks after the medical/surgical component. I think it was supposed to roll out simultaneously. That is statewide. We have one managed behavioral health plan, Options, that provides managed behavioral health services.
The enrollment figure there, the 103,000, basically is inclusive of all the Medicaid enrollees who are eligible for managed care statewide.
Our covered population -- and on this, I draw your attention to the report to the legislature, because that has some specific details on the criteria for determining whether or not somebody is what is termed Medicaid managed care mandatory, since not all Medicaid enrollees are required to participate in managed care in Nebraska.
The three main groups that are Medicaid managed care mandatory are similar to those in other states: Aid to Families with Dependent Children, the aged, blind and disabled, and then state wards. I've been told that the inclusion of state wards in the managed care population is different from what exists in a number of states.
Part of the Medicaid eligible population is specifically excluded from managed care. That includes Medicare recipients, nursing home residents and certain children with disabilities. There are others as well, and that is listed in that report to the legislature.
I think that is significant, by the way, because in Nebraska, nursing home care is a substantial proportion of the Medicaid budget, and they are not covered by the managed care program.
Moving on then to the specific questions that you wanted us to address. You asked what questions we have about our managed care program. We have several, and I'll just highlight a few of them. They are similar again to some of the things that have been mentioned by other states.
First of all, we would like to know, has the managed care program provided better access to services? For example, are there more participating physicians in managed care than there were under fee for service?
The other thing of course is, we would like to know if it is cost effective; that is, are aggregated capitation payments under managed care less than fee for service? And of course, we would like to know, has managed care improved the quality of care for Medicaid enrollees? We would like to know if providers are satisfied with managed care plans, and we would like to know if clients with special needs are being appropriately served.
Two very important issues that we are concerned about as Medicaid managed care expands in Nebraska are first of all, access to participating providers. This is kind of an interesting one, I think, in Nebraska, and probably in a number of other predominantly rural states, because access here really means different things in different areas of the state.
In the case of Omaha and Lincoln, the issue of access revolves largely around the willingness of providers to participate in the program. In the case of the rural and frontier western end of the state, it has to do with whether a physician is available at all.
That is really a major problem. We have 19 counties out of 93 that don't have a full-time primary care physician. So in those areas, issues of managed care, as we start talking about expanding it, really run head on into issues of availability of providers and access to primary care services.
The other thing related to the access issue in certain respects is whether there is technical expertise at the local level to do data collection. That would include things like coding, computerized billing, a variety of different activities along those lines.
The data that we feel are required to answer the questions are basically the kinds of information that we are trying to collect with varying degrees of success now, including encounter data, client satisfaction surveys, provider surveys and enrollment surveys.
As far as data collection activity within the program -- and one of your handouts here provides a list of minimum data elements for the encounter data system -- we have first and foremost encounter data. Similar to what Lorin said about Ohio, we certainly perceive the encounter data system to be an important cornerstone of our quality assurance activities. However, I would have to say we are just getting started in capturing that information, and we have had quite a few problems getting plans to comply in reporting it, as I'll talk about in a minute.
Actually, listing demographic information under encounter data isn't technically correct. The demographic information actually comes from a separate file, a client file. But I was considering it as part of the claims information that we get on an encounter.
Other encounter data, as you can see from the list that you have, include diagnostic and procedure codes. The data flow here is perhaps a little different than in other states. The system, even though it contains a capitated component, still operates more or less on a fee for service basis from the provider perspective. So the plans, Mutual of Omaha and United in the case of medical/surgical services, have contracted with the state and are at risk for providing a range of medical/surgical services. They in turn are working with the providers, or have contracted with the providers, but the providers by and large still bill on a fee for service basis.
There is really only one exception to that, which is a staff model HMO that exists in the Lincoln area. But other than that, the arrangement between the providers and the managed care plans is fee for service, and I think that that has some real implications for data collection.
So in essence then, the plans get the claims forms from the providers, and then they package that as encounter data, which in turn is sent on to the state.
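To make the claims-to-encounter flow described above concrete, here is a minimal Python sketch. It is illustrative only: the field names, the mapping, and the monthly packaging step are assumptions for the sake of the example, not the actual file layout used by the plans or the state.

```python
# Illustrative sketch only: plans receive fee-for-service style claims from
# providers and repackage them as encounter records submitted to the state.
# Field names and the mapping are hypothetical.

def claim_to_encounter(claim, plan_id):
    """Repackage one adjudicated provider claim as a state encounter record."""
    return {
        "plan_id": plan_id,
        "member_id": claim["member_id"],
        "provider_id": claim["provider_id"],
        "dos": claim["date_of_service"],
        "dx": claim["diagnosis_codes"],
        "proc": claim["procedure_codes"],
        # Financial fields are often dropped or zero-filled on encounters;
        # kept here only to show where they would go.
        "paid_amount": claim.get("paid_amount", 0.0),
    }

def monthly_submission(claims, plan_id):
    """Package a month of adjudicated claims as the plan's encounter file."""
    return [claim_to_encounter(c, plan_id) for c in claims]
```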
In addition to the encounter data, we also have member satisfaction surveys, as were mentioned before. Our external quality review provider is the Iowa Foundation. They use the CAHPS survey, which has already been discussed a bit, or will be, I should say. That includes questions on access to care, availability of providers, the time it takes to make an appointment, et cetera.
Member satisfaction surveys are also done by plans. That of course is part of the NCQA accreditation process. We have not seen any of that information from plans so far. So hopefully we will in the future, but we have not received the results of those surveys.
In addition to the member satisfaction surveys, we do enrollment surveys. Those are administered by the enrollment broker, which as I mentioned is the local health department in Lincoln-Lancaster. It includes a core assessment and health status questionnaire. It is used to obtain information on functional status, which I know is one of the things that you are particularly interested in. There is a real effort as part of the enrollment process to match clients with services based on the needs that are identified during that enrollment process, which I think is very good.
As far as provider data, we do have a provider survey that collects information on provider characteristics and their experience working with health plans, including handling of claims, the credentialing process, the referral process, that kind of thing.
We are interested in doing comparisons to fee for service. The data manager, which as I mentioned is Medstat, has claims data from the Medicaid program from before managed care. These data then are compared to encounter data -- or will be compared to encounter data, I should say -- in the three counties where we have the medical/surgical component, and then statewide for the behavioral managed care.
Plan modifications in data collection. The major issue that we have had to deal with in Nebraska with respect to encounter data in particular has been getting compliance with reporting, and that is something that other states have certainly mentioned before as well.
We will consider adding some additional data elements as we get a little more experience using the encounter data system. But the big problem has been getting plans to report the information.
As far as data decision making and data strategy, you asked about the methods used to decide what data will be collected. We had a work group that decided what claims data they felt were critical. That group included the Medicaid administrator, medical director, claims specialist and others.
We do have some good news to report, actually, in the encounter data arena. The first two and a half years of encounter data have been submitted by the two capitated plans. As I mentioned, those are Mutual of Omaha and United Health Care. Those data have been run against edits that the Medicaid program uses. While I won't say which plan is which, approximately 90 percent of the data from one of the plans passed these edits, and about 80 percent of the data from the other plan passed. The plans are responsible for correcting any problems with the data.
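As an illustration of the kind of edit checks described above, the following Python sketch applies a few simple claims-style edits to encounter records and computes a pass rate. The field names and the specific checks are hypothetical; the Medicaid program's actual edit logic is not described in the testimony.

```python
# Illustrative sketch only: simple claims-style edits applied to encounter
# records, producing a pass rate comparable to the 90 and 80 percent figures
# mentioned above. Field names and checks are hypothetical.
from datetime import date

REQUIRED_FIELDS = ("member_id", "dos", "dx", "proc", "provider_id")

def passes_edits(encounter, eligible_ids, valid_dx, valid_proc):
    """Return True if an encounter record passes all basic edits."""
    # 1. All required fields present and non-empty.
    if any(not encounter.get(f) for f in REQUIRED_FIELDS):
        return False
    # 2. Member must appear in the eligibility (client) file.
    if encounter["member_id"] not in eligible_ids:
        return False
    # 3. Diagnosis and procedure codes must be on the valid code lists.
    if encounter["dx"] not in valid_dx or encounter["proc"] not in valid_proc:
        return False
    # 4. Date of service cannot be in the future.
    if encounter["dos"] > date.today():
        return False
    return True

def edit_pass_rate(encounters, eligible_ids, valid_dx, valid_proc):
    """Share of submitted encounters that pass all edits."""
    if not encounters:
        return 0.0
    passed = sum(passes_edits(e, eligible_ids, valid_dx, valid_proc)
                 for e in encounters)
    return passed / len(encounters)
```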
We have not yet received any encounter data from the managed behavioral health provider, and that has been a problem for us. As you may recall, that program went into effect two and a half years ago, and we really don't have any information on that side.
The Medicaid program sends Medstat a monthly data tape, which includes encounter data, claims data and data on providers. They warehouse that. I mentioned 30 months there in the handout; there are 30 months of data that are readily available for analysis, or are in an analysis file. The data beyond that actually are archived; it isn't as though that information is lost. They also provide us with what is called decision support software that we can use for making queries against that database.
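The warehousing arrangement described here, a rolling 30-month analysis file with older data archived and simple query access, can be sketched as follows. This is a hedged illustration only; the record structure, field names, and query interface are assumptions, not the data manager's actual decision support software.

```python
# Illustrative sketch only: a rolling 30-month "analysis file" built from
# monthly data loads, with older months archived rather than lost, plus a
# minimal decision-support style query. Structures and field names are
# hypothetical.
from datetime import date

ANALYSIS_WINDOW_MONTHS = 30

def month_index(d: date) -> int:
    return d.year * 12 + d.month

def split_analysis_and_archive(records, as_of: date):
    """Keep the most recent 30 months for analysis; archive the rest."""
    cutoff = month_index(as_of) - ANALYSIS_WINDOW_MONTHS
    analysis = [r for r in records if month_index(r["dos"]) > cutoff]
    archive = [r for r in records if month_index(r["dos"]) <= cutoff]
    return analysis, archive

def query(analysis, plan=None, dx_prefix=None):
    """Filter the analysis file by plan and/or leading diagnosis code digits."""
    out = analysis
    if plan is not None:
        out = [r for r in out if r["plan"] == plan]
    if dx_prefix is not None:
        out = [r for r in out if r["dx"].startswith(dx_prefix)]
    return out
```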
You can see the three surveys where we do primary data collection. As far as gaps in currently available information, I think, as one of the speakers this morning mentioned, the biggest concern that we have has to do with the completeness of reporting of encounter data. I think it is very possible that plans don't submit claims, or potentially that providers don't submit claims, if they don't have complete data. We really don't know how complete the reporting of that information is, and it is something that we really feel we need to look at more closely.
We also have very little data on what I would term cognitive services. I think that has been alluded to a little bit before -- things like diabetes education, for example, which is a very important part of the diabetes treatment guidelines that we have been involved in developing, and information on counseling and smoking cessation, which is another example of something that we really would like to get a better handle on. We also don't collect information on external causes of injuries.
How are these gaps going to be filled? We plan to evaluate the completeness of reporting into the encounter data system; I'll talk about that in a minute. We plan to expand the member satisfaction surveys to assess the delivery of some of these cognitive services. In fact, as you may know, NCQA has already moved to do that. In their member satisfaction surveys they have information on counseling about quitting smoking, for example. I think that may be a good place to collect information on diabetes education as a way of getting a handle on that, just to give another example.
Outcomes of care. I'm going to jump ahead a little bit here. Readmissions for the same condition, delivery of preventive services like immunizations, and the rate of low birth weight babies are three areas that we are particularly interested in.
The calculation of performance goals. We do have specific performance goals related to things like access to covered services -- some of the ones that I think Lorin and Nancy both talked about -- availability of primary care physicians, staff training and the establishment of a quality assurance program. These performance goals are all detailed in the contracts that we have with managed care plans.
The measurement of these performance goals is largely based on self reports, but we are going to be doing audits of the plans. Some of those audits will be done by the external quality review organization, and some of them will be done by QA staff within the managed care plans.
We also will be calculating HEDIS measures; in fact, we have already moved to do that using encounter data.
You asked about the value of data standardization, focusing specifically on encounter data. Certainly, it is our feeling that that would be very valuable. We feel it would permit state-to-state comparisons of data, and, very importantly, it would also ensure that encounter data meet a variety of different needs.
I'm coming at this from a public health perspective. In reviewing some data guidelines put out by, for example, the American Public Welfare Association, I was rather amazed by the fact that there was really very limited what I would term health information -- very limited diagnostic information, very limited demographic information. I think it would be very helpful to have data standards that take into account the needs of multiple users. I think this encounter data, as other speakers have mentioned, is going to become increasingly important for quality assurance and public health practice in the future.
Logistical impediments to collection of data. A number of these have been touched on before, so I'm not going to spend a lot of time on this. But I think an important one that may or may not have been mentioned is this first one. Quite frankly, collection of encounter data has been a relatively low priority for plans.
In certain respects, I don't think that is terribly surprising, because in a state like Nebraska and many states, Medicaid managed care is new. The primary focus has been getting services up and running for people, and data collection activities have been a relatively lower priority.
Collection and reporting of encounter data is expensive, and member satisfaction surveys can be quite difficult to administer in the Medicaid population. Some of this has been alluded to before. You've got a population that is quite transient, and many of them don't have phone numbers, if you're talking about telephone surveys, so it is not easy to do surveys in this population. As we are anticipating doing behavioral risk factor surveys -- and Nancy, this applies to you -- your response rate is inevitably going to be a lot lower. So it is a challenge to do that.
There are another couple of points that I would like to mention as far as logistical impediments that I'm not sure have been touched on before. I think there is an issue of orientation here. Public health and Medicaid programs in general have not worked that closely together. I think in certain respects, perhaps the other two states on the panel are an exception to that. But in many states, there hasn't been a lot of interaction between the Medicaid program and public health.
As a result of that, I can say that in our state, public health data systems haven't really been designed with the goal in mind of being able to link with Medicaid data, so we don't get source of payment information, for example, on birth certificates. So when you go about trying to do studies to look at completeness of reporting into the encounter data -- linking hospital discharge data with encounter data, for example, which is something that we are looking to do -- you have to fish around to try to find what variables you're going to use to do that linkage.
So I think there is an issue of orientation there. I think that is something that the committee might really be able to help us with.
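As a concrete illustration of the linkage problem just described, the sketch below performs a crude deterministic match between hospital discharge records and encounter records using fields both files might share, since no payer field or Medicaid ID is available. The linkage keys and field names are hypothetical examples of the "fishing around" mentioned above.

```python
# Illustrative sketch only: a crude deterministic link between hospital
# discharge records and Medicaid encounter records when no payer field or
# Medicaid ID is available. The keys (last name, date of birth, sex, ZIP)
# and field names are hypothetical.

def link_key(record):
    """Build a deterministic linkage key from fields both files might share."""
    return (
        record["last_name"].strip().upper(),
        record["dob"],
        record["sex"].upper(),
        str(record["zip"])[:5],
    )

def link(discharges, encounters):
    """Return (discharge, encounter) pairs that share a linkage key."""
    index = {}
    for e in encounters:
        index.setdefault(link_key(e), []).append(e)
    matches = []
    for d in discharges:
        for e in index.get(link_key(d), []):
            matches.append((d, e))
    return matches
```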
I'm going to skip over cost of data collection; you can see that. One thing I would like to touch on is confidentiality, which has been mentioned a couple of times before. On confidentiality issues, we haven't encountered much resistance in our state to reporting data based on concerns about confidentiality. I don't know if Nebraska is just different than other states in that regard, but that hasn't been a really up-front concern.
The Nebraska Medicaid managed care program will not report individual level data, and the plans are restricted by their contracts from reporting that information. But we haven't heard that confidentiality issues are a major impediment to collecting encounter information.
I think the area of collaboration, the next point here, has been a success story for us. Part of it, as I mentioned before, has been aided by the reorganization that we went through as a state. There is some new leadership in the Medicaid program. The person who is the Medicaid administrator actually comes out of a public health background, which I think helps quite a bit.
We have established a Medicaid managed care work group, which includes representation from the Nebraska Medicaid managed care program, as well as public health. The managed care program also has been involved in collaborating with other programs in the health department, notably the diabetes control program. We have had a very successful effort going on now to develop diabetes treatment guidelines.
In fact, collaboration with the diabetes control program is written into the contract for managed care plans, so they are expected to collaborate with the diabetes program, they are expected to collaborate with the immunization program, and in fact, they have, at least on the diabetes side, and I think quite successfully.
The identification and resolution of gaps and inconsistencies in data collection. As I mentioned, we have performed data edits to identify errors in reporting of data. We are moving forward on planning a study, as I mentioned before, to try to look at this issue of completeness of reporting, which is a major concern for us.
Enforcement of data reporting requirements. We do have sanctions in the contracts with the plans for non-reporting of encounter data. Plans are supposed to report data to the managed care program on a monthly basis. By contract, the managed care program can withhold two percent of the capitation payment per month for failure to report the data, and that money is not refundable; that is lost.
In extreme cases of non-compliance, the managed care program can terminate the contract with the managed care provider. Now, that is what is in the contract. The reality is that those sanctions haven't been used. I do think that we are getting close to starting to use them. There has been leniency on the front end because it has been a startup program. But they have not been enforced so far.
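The contract terms described above translate into very simple arithmetic. The sketch below assumes a hypothetical monthly capitation payment purely for illustration; only the two percent, non-refundable, per-month withhold comes from the testimony.

```python
# Illustrative arithmetic only. The two percent, non-refundable, per-month
# withhold comes from the contract terms described above; the $500,000
# monthly capitation payment is a made-up figure.
WITHHOLD_RATE = 0.02  # 2 percent per month, not refundable

def monthly_withhold(capitation_payment, reported):
    """Amount withheld from one month's capitation payment."""
    return 0.0 if reported else capitation_payment * WITHHOLD_RATE

# Example: three months of non-reporting on a hypothetical $500,000 payment
# forfeits $10,000 per month, $30,000 in total.
total = sum(monthly_withhold(500_000, reported=False) for _ in range(3))
print(total)  # 30000.0
```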
Are they suitable? Yes, I would say they are suitable if they are used, but they haven't been used. I think another important point, though, that I'm not sure has been touched on, is that in addition to the stick approach, another very important issue is looking at how we can more effectively collaborate with plans.
One of the ways that we have attempted to do this in Nebraska is actually by having our data manager, Medstat, go out and meet with the plans -- basically do a consulting project -- review their data collection procedures, try to see if we can provide some assistance to them, and see how we can improve those data collection procedures.
I think another thing that we can do is provide reports back to the plans, provide quality indicators back to the plans, and other information to show that this information isn't just going to a black hole.
Recommendations. First and foremost, and this has been touched on before, Marjorie will probably argue with me about this, but I think we need to develop consensus standards for encounter data.
DR. GREENBERG: You're kidding.
DR. BREWER: No, I'm kidding. The standards should address the needs of multiple users, and I have already touched on this before. I think very importantly, though, even if we move forward in developing standards, there has to be flexibility to allow states to add specific data elements that they find useful.
I think frankly that is one of the beautiful things about the behavioral risk factor surveillance system: you've got a core data set, and you've got the flexibility to add additional information. I think the same analogy could be applied to standards for encounter data.
I think we really need to study and report on methods to promote linkage of public health and Medicaid managed care data. I touched on that before. I think that is something that the committee might be able to provide some assistance on.
I think we also need guidance on methods to assess data quality. I think that is a huge issue, and we have already talked about that a little bit.
I think I'll stop there.
DR. IEZZONI: That's a lot. The whole panel was a lot. Are there any comments? Elizabeth?
DR. WARD: A question I think to Mr. Ranbom, but if any of you could comment. You showed us a lot of charts that compared plans. Do you get that information back to the plans, and in what form? And do you ever do any of that publicly in terms of how they compare to each other?
DR. RANBOM: Well, you will see that some of the information that we had there compared plans by alphabet. That is because it is in a pre-publication stage right now. It is back with the plans for them to review and decide whether there are any problems with their encounter data. At some point in the future, probably within the next couple of months, when they have had the opportunity to resubmit data that they have had problems with, the information becomes public.
We won't be running around the legislature with the information, but there are advocacy groups and community groups out there that have a strong desire towards holding these managed care plans accountable in their local areas. They are really relying on us to provide information to them.
DR. WARD: So you won't do a report, but it was available for people to request?
DR. RANBOM: No, we will do a report, yes.
DR. WARD: But you won't go out and publish that?
DR. RANBOM: Well, we'll probably publish it, but I think at this point, some of the data is suspect even when the plans give us information back, and so we will probably put a lot of caveats on the information, at least in the first year, and probably won't be relying too heavily on the encounter data. But I think the plans will know that the encounter data is out there, that the performance measures are out there.
We do take it seriously, and we expect from this baseline year to see considerable improvements, both in the accuracy of the encounter data and the actual performance, too.
DR. WARD: Do you do any public release of your satisfaction surveys?
DR. RANBOM: Yes. Yes, we do. The external quality review report is also released publicly, and that kind of thing.
DR. IEZZONI: George, then Vince.
DR. VAN AMBURG: When a provider resubmits data, because they don't like what is in the report, and it radically changes the report, what do you do? Do you just accept it? Do you go back out and audit the records?
DR. RANBOM: Well, usually what we have found is the reason for the change. For example, a managed care plan will be -- some managed care plans will be using local coding for a certain kind of visit. There will be a reason for it, so we will track back what that reason is. A lot of these changes in the data will reflect discussions that we have had with the managed care plan about their coding practices.
DR. IEZZONI: Do the other two panelists want to comment on that?
DR. CLARK: Our experience is very similar. The satisfaction data is pretty clear, since it is a uniform instrument, and we do publish that by plan. But the quality data, we have far more arguments over the quality of the measure, and those have not been released to date by plan. We will be publishing the first HEDIS data for our commercial and Medicaid populations next year.
DR. MOR: You led right into my question. Throughout the day, people have periodically said, we are using a HEDIS standard or taking a measure that has been tested. We know it is quote-unquote reliable. At least there is some consensus that it is -- people agree on what it means, and people are out there rating plans.
Lorin, your data on the proportion of encounter data, of the noes found to be yeses, and presumably that is only the tip of the iceberg, as it were, speaks to the issue of basic validity of the data on which all of these HEDIS measures are therefore constructed, particularly to the extent that they are actually an aggregation of aggregations, because the plans then aggregate amongst and across different subunits.
How do you think about working with that, or publishing these data when you haven't actually analyzed the validity of each one of those measures? What is your standard of validity?
DR. RANBOM: Well, for each one of our measures that we are using, actually all of the measures that you see there are not part of the managed care plans' contracts. There are some that are exploratory and some that we feel really good about.
The ones that we feel really good about, we have gone through the process of encounter data validation for all of the plans. We have taken a sample of 400 records, or 400 eligibles, in that population; in some cases it is less, because the number of enrollees in the plan is like 200 or what have you. So we validate each one of the measures that we use.
When we feel comfortable about the measures that we are using, then we will publish them. We don't publish unvalidated measures.
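As an illustration of the validation approach just described, the sketch below draws a sample of roughly 400 eligibles per plan (or the whole plan when enrollment is smaller) and summarizes agreement between the chart review and the encounter data. The function names and the agreement summary are assumptions, not the actual validation protocol.

```python
# Illustrative sketch only: drawing a validation sample of roughly 400
# eligibles per plan (or everyone, for small plans), then summarizing how
# often the chart review agrees with the encounter data. The functions and
# the agreement summary are assumptions, not the actual protocol.
import random

def validation_sample(eligibles, target=400, seed=0):
    """Sample up to `target` eligibles from one plan for validation."""
    eligibles = list(eligibles)
    if len(eligibles) <= target:
        return eligibles          # small plans: take everyone
    return random.Random(seed).sample(eligibles, target)

def agreement_rate(review_results):
    """Share of sampled records where the chart supports the encounter data.
    `review_results` is a list of booleans from the re-abstraction review."""
    if not review_results:
        return 0.0
    return sum(review_results) / len(review_results)
```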
DR. BREWER: I just wanted to comment on the calculation of HEDIS measures, because I think that is a really serious problem. And actually, we are hoping to do some work in collaboration with NCQA to try to assess the impact of using different enrollment periods on our ability to calculate HEDIS measures.
A lot of the problem that has already been alluded to here, to put it in epi terms, is a denominator problem. There are validity problems with the community data elements that I think need to be assessed by re-abstraction surveys, but at the heart of it I think is a denominator problem.
The problem I think is more than just the obvious, which is that there is turnover, because the Medicaid population has some heterogeneity. That was one of the reasons I went through the enumeration of who is included in Medicaid managed care. We have got the aged, blind and disabled population as well as state wards, who don't turn over a lot. So my feeling is that if you start to lengthen the enrollment period, you will be left with a population you can calculate some measures on.
The question is, how representative is that population of the entire population? I think that is what you really have to look at. It isn't just strictly the question of turnover. That is a serious problem.
On the other hand, the plans are increasingly having to calculate HEDIS measures as part of the accreditation process. So if you are going to determine a set of measures that are going to be useful to them, I think it is worthwhile to look at some of the benchmarks and try to see what we can do to modify HEDIS measures so that they can serve a valuable purpose in the Medicaid population. But it is a big problem.
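The denominator problem discussed here can be made concrete with a short sketch: lengthening the continuous enrollment requirement shrinks a HEDIS-style denominator, and the composition of who remains can be compared with the full population. The field names and aid categories below are hypothetical.

```python
# Illustrative sketch only: how lengthening the continuous enrollment
# requirement shrinks a HEDIS-style denominator, and how the composition of
# the remaining members (by aid category) can be compared with the full
# population. Field names and categories are hypothetical.
from collections import Counter

def continuously_enrolled(members, required_months):
    """Members whose continuous enrollment meets the requirement."""
    return [m for m in members if m["continuous_months"] >= required_months]

def composition(members):
    """Share of members by aid category (AFDC, aged/blind/disabled, ward)."""
    counts = Counter(m["aid_category"] for m in members)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()} if total else {}

def denominator_report(members, month_options=(6, 12, 24)):
    """Print denominator size and composition at each enrollment requirement."""
    for months in month_options:
        kept = continuously_enrolled(members, months)
        print(months, "months:", len(kept), "members", composition(kept))
```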
DR. CLARK: I would just say, as bad as they are, they are better than the ones we made up ourselves. As a state relying on committee volunteers, we are just not experts in measurement. So the more that bodies such as this can be at the table creating those -- if that is what you mean by developing standards -- then we are thoroughly behind it.
I think we get around it a little bit, and we sleep at night, by saying that our job is not to do report cards. That is really not what our approach is in Oregon. Our approach is helping everybody raise the bar constantly with quality improvement. So we always say, the data are bad, so how are we fixing them, and what is the next step.
I would just like to throw out that this sanction issue in Oregon is very different from Ohio. The downside of collaboration is that everybody will take their marbles and go home if you try to enforce anything. So it is how do you just keep the improvement moving without having everybody leave, because it is going to be the same providers and the same patients, no matter how we pay for it, whether we just cost shift it or whether we pay for it through managed care.
DR. RANBOM: We have a different situation, where most of the managed care plans in Ohio are Medicaid only managed care plans, so they can't take their marbles and go home. They have to work with us.
DR. CARTER-POKRAS: Earlier, Bob Grist had mentioned the civil rights laws, and I was just wondering what each of the states is doing to use the eligibility determination data -- you mentioned a client file, for instance -- on race/ethnicity, to see if there do appear to be differences in the quality of care that is received, or in access to services. So I was hoping to hear about that.
Also, you had mentioned collecting information on the providers, whether that also includes the race/ethnicity of the provider and whether they can speak another language and where they are geographically located, and whether that also can be used.
So the first question is whether you are using the information on race/ethnicity of the client, and the second is whether you are using race/ethnicity and other information about the provider to look at how well that provider can provide services to a very diverse population.
DR. CLARK: Oregon has a very small non-white population, so we don't give it as much attention as we would in other states. That doesn't mean it is good, that just means that is what we do. And certainly, on providers, the providers that are serving these populations tend to be clustered in our single managed care plan that was created among the federally qualified health centers. So the data are collected, but not extensively used.
DR. RANBOM: We have a relatively large African-American population, but extremely small Hispanic, American Indian and Asian-Pacific Islander populations. To the extent that we do, we do a lot of analysis which can be cut by each of those groups.
I think that the problem is that a lot of people don't feel that the ethnicity data we collect at enrollment are very reliable.
DR. BREWER: In Nebraska, first of all, the total population in Nebraska is 1.5 million, and well over 90 percent of that population is white. So one of the problems that we run into in dealing with some of the minority populations is again this small numbers problem that we have talked about before.
That said, there is certainly an effort to get information on race/ethnicity as part of the client file, as I mentioned before. I'm not sure about the provider survey, honestly. I don't know if there is information on race/ethnicity of the providers.
As part of the enrollment process though, there is an effort as I understand it to try to match clients with physicians who are able to communicate with them, or to assure that there are going to be translators available, if that is needed.
We do actually -- and I think this was mentioned in the case of Minnesota -- we do have a Southeast Asian population, actually a Vietnamese population, that has moved into the state, and there is actually a Latino population in a portion of Omaha, south Omaha, which is where a lot of the meat packing plants are. So although it is a small population, it is not insignificant.
There are some efforts, particularly at the enrollment phase, to try to appropriately match providers and patients. I have no doubt though that there are some problems with that, and I don't know to what extent there are Vietnamese providers, for example, who are really able to adequately serve that population.
DR. CARTER-POKRAS: If I could ask just a third question, I had forgotten it was on my list.
DR. IEZZONI: Quickly.
DR. CARTER-POKRAS: We haven't heard much about the children's health insurance initiative, but I was just wondering what kind of changes in your data systems you anticipate if your state has decided to expand the Medicaid program as a way of dealing with the children's health insurance initiative.
DR. RANBOM: One of the things that we have had to change, or that we have had to add -- because we expect to have about 130,000 new children enrolled in a children's health insurance plan, which is an expansion of Medicaid -- is that we are going to be collecting significantly more data on source of health insurance and insurance status.
We find, when we move up into that population that is above 133 percent of the poverty level, up to 150 and 200 percent of the poverty level, that a large number of them already have health insurance. So we have to be able to assess the adequacy of that health insurance, or make them eligible for either our expansion program or the children's health insurance program.
Otherwise, I think one of our biggest problems is going to be moving that population into managed care, because we don't have enough data to determine what an adequate capitation rate for that population is going to be. So they will probably be in fee for service at least for a year before we move them into managed care.
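A rough sketch of the eligibility sorting described above follows. The 133, 150 and 200 percent of poverty thresholds come from the testimony; the classification labels and the treatment of existing insurance are simplified assumptions for illustration.

```python
# Illustrative sketch only: sorting child applicants by percent of the
# federal poverty level and existing coverage. The 133, 150 and 200 percent
# thresholds come from the testimony; the labels and the handling of other
# insurance are simplified assumptions.

def classify_child(fpl_percent, has_other_insurance):
    if fpl_percent <= 133:
        return "traditional Medicaid"
    if fpl_percent <= 200:
        if has_other_insurance:
            return "assess adequacy of existing coverage"
        return "Medicaid expansion / children's health insurance"
    return "not eligible for the expansion"

# Example: a child at 160 percent of poverty with no other coverage.
print(classify_child(160, has_other_insurance=False))
```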
DR. BREWER: In Nebraska, there is a lot of planning going on for the child health initiative. I will be very candid with you and say that I haven't been actively involved in that. I do know that we are supposed to be bringing on about 20,000 kids.
As I understand it, the population is going to be skewed more toward adolescents, because that is where a major gap in coverage is in Nebraska. I'm not aware of any plan changes, but again, I have to couch this in the fact that I have not been involved in these discussions. But I have not heard of any specific changes in our data collection system in anticipation of that change.
The biggest concern I have heard expressed relates to this issue of availability of providers. Again, I think that is going to be a particularly big problem as you move into rural areas in the state. So that is the big issue that I have heard a lot of people talk about, that and having sufficient staff in house to manage the program, quite frankly.
DR. CLARK: In Oregon we'll be doing what we have done all along. Whenever there is a new pot of money, we try to make it -- from the client's point of view and the provider's point of view -- seamless with the existing program. My understanding is that there are some very creative funding streams that will be used to make the child initiative money fit that, and they are going to succeed in doing that.
I think the big issue is that we wish there were a greater proportion of it available for some infrastructure and core sorts of things. When we have got fewer than eight percent of our children uninsured, we would like to do some core outreach sorts of things, and the money is not very flexible for that.
But I would like to mention one thing about capacity, if I may. I am just green with envy of Lorin's nine FTE hired from outside at higher than state rates. I suspect that is not scalable. We are half the size of Ohio, but I don't think that means we need 4.5. I'm sure Bob couldn't get by with 2.8 or whatever, I can't do the arithmetic fast enough.
For small states, this capacity is -- when it was claims, and you had clerks processing claims, it was somehow related to the size of your program. But capacity to do data analysis doesn't fit that same model.
DR. BREWER: If I can make just one comment, you reminded me of something when you were talking about the idea of seamless services for the Medicaid managed care population. From a data perspective, I actually see that as a two-edged sword. This has been alluded to in some other discussions.
One of the things that we are really concerned about as we talk about doing data linkage with other data systems is that we are going to find that a lot of the Medicaid managed care population is not going to be listed, in for example hospital discharge data, as being Medicaid recipients; they will be listed by the particular product that they are being served under.
Now, these products in the case of Mutual of Omaha or United Health Care, are Medicaid specific. So presumably we would be able to separate that out. But it all depends on how that is listed in whatever database we are trying to link to.
So again, I would submit that to the committee as an issue that I think needs further consideration. How do we deal with that? How do we classify these people?
DR. RANBOM: And a lot of time, when people are interviewed, they won't say they are Medicaid. They say what health plan they are with. They don't consider themselves as being in Medicaid.
DR. BREWER: Exactly, and there is a stigma attached to that as well.
DR. RANBOM: It is actually something that we are trying to build off of, actually, get people to participate more.
DR. IEZZONI: Cathy, did you have a comment?
DR. COLTIN: Yes, I actually had a comment-question. Warren had mentioned that there really needed to be improvements in some of the coding systems, CPT being one of them. I suspect that some of the inconsistencies that were observed in your data -- around the positive flag on an administrative system not being supported when you went to the medical record -- are not only due to unique coding systems in plans, but also to some of the inadequacies in CPT for actually representing the kind of detail that you want to look for when you do the medical record reviews.
So a code for a comprehensive child exam, when you go to the medical record, may not have captured every element that you were looking for, but may have been actually correct in terms of how CPT would code that particular visit.
We have observed this a lot in trying to develop performance measures that were limited sometimes by the coding systems themselves and what you can collect, given the coding systems. Health plans haven't typically been a very influential constituency in going to the AMA and trying to advocate for changes in coding. I think that to the extent that states are payors, they are likely to be more influential.
I wondered if the other states have also encountered these kinds of problems, and the extent to which you have communicated any of your concerns about the inadequacies of the coding systems, not just to the AMA, but to those who are responsible for ICD-10 and so forth.
DR. CLARK: In Oregon, yes. We have an organization called ARC, to which a number of the plans pooled their money for chart abstraction, so the trip to the doctor's office only happens one time for all of their different needs. That agency has been a very good advocate with the AMA for creating some dummy codes around things like diabetes and things that we needed to have created.
DR. BREWER: I just have to say, in Nebraska I haven't heard specifically about problems with coding. I'm sure that they are there, and I probably just haven't heard about them. But I think your point is very well taken.
DR. RANBOM: A lot of people don't understand who is responsible for making those changes. It seems to me like it is a shared responsibility between the AMA and HCFA and NCVHS. So it is hard to determine who to go to.
DR. IEZZONI: We have been tilting at this particular windmill as a committee for a very long time, and we share your bafflement.
DR. MOR: Actually, Marjorie, you might want to comment, but I think this committee and HCFA, all three of those recommendations that you have made have been addressed for relatively --
DR. GREENBERG: Have been on the table.
DR. MOR: Have been on the table, right.
DR. RANBOM: I didn't know that, by the way.
DR. GREENBERG: I might just mention, on the whole issue of procedure coding and involvement in standards, et cetera, that NCHS at CDC is actually holding a planning meeting a week from Friday with people within the department, people from state organizations and a few state health departments who put together planning committees, and health services researchers and other public health groups, to talk about what the implications of HIPAA are for public health and health services research, with the goal of having a workshop later in the year to bring in the different constituencies who obviously use these data and see them as a cornerstone of a lot of their efforts, and yet have had no input into what elements are collected, how they are coded, anything of that type.
Our initial focus is on the National Uniform Billing Committee, which maintains the UB-92, and the National Uniform Claim Committee, which maintains the HCFA 1500; the chairs of both of those committees are going to be at the meeting. Basically, in the spirit of HIPAA, I think both recognize -- I can't speak for them, but I think they have said this publicly and privately -- that those groups are going to have to be expanded.
But it goes beyond expanding them. It is really a huge educational activity as well. But if you are interested -- I know we have talked with Bob about this, and we have mentioned it to Lorin at the NADO meeting; NADO is going to be participating -- see me afterwards, because although this is, as I said, a small planning group, we are looking toward a much bigger effort. Dan Friedman of the National Committee will also be participating.
DR. IEZZONI: It is 4:35 or so. Are there any burning questions remaining in this room somewhere? Can we just talk as a committee for just a few seconds? This was a great day and a very good panel, and thank you, you have been extremely informative. Once again, thanks to Carolyn and Jason for putting together a great day.
For the committee, I don't for a minute want to hold us here any longer. But I think that we do have a couple of housekeeping things that we need to deal with. I think, Carolyn, at some point, maybe tomorrow, it would be good for the committee to just get a quick sense of what the subcontract is with Sara Rosenbaum and so on, so you know what work we are going to be paying to have done by outsiders.
I think that we need to talk through very quickly what kind of questions we want to be addressed by the speakers that we hear when we go to Phoenix, which is going to be in less than a month. We also have to remind people about the post-acute care meeting that our subcommittee and Barbara Starfield's subcommittee will be jointly sponsoring before the March full committee meeting, just get people up to date on that. I know a number of us are very on top of it, but I just want to make sure that others know how important it is that we make a very good presence in Baltimore at HCFA where this meeting will strategically be held.
Then we need to make a final decision about the date for the territories meeting. Hortensia isn't here, but I've been getting plaintive e-mails from her. This is something that we cannot let linger. We have had a July date in our calendars for a while, with some question about whether in fact that will be the date. We really need to make a decision about that and firm it up.
So, a reminder to people that tomorrow starts at 9 o'clock. We are talking about the OMB race and ethnicity standards and age adjustment. We have a provider panel that continues into the afternoon at 1:30.
And Carolyn, you said that there was some question about whether there might be a third person on that panel?
DR. RIMES: It's possible. I just don't know yet. There is an individual from New Mexico, representing the American Indian population, whom I lost track of in Bismarck, as did his secretary. So it is possible, since he is going to be in the area, that he may come.
DR. IEZZONI: Okay.
DR. RIMES: So we'll find out when he shows up, or not.
DR. IEZZONI: All right. Well, what we can maybe do is try to have that afternoon panel be an hour, rather than an hour and a half, if this third person does not show up, and then try to have the last half hour just be tying up some of these loose housekeeping ends. Does that sound okay with people?
Okay, thank you for a great day, Carolyn and Jason.
(The meeting adjourned at 4:40 p.m., to reconvene Tuesday, January 13, 1998, at 9:00 a.m.)