Conference room E-275C, Lowrise
John F. Kennedy
Federal Building
100 Cambridge Street
Boston, Massachusetts
Lisa Iezzoni, Chair
Mary Moien
Paul Newacheck
George Van Amburg
Elizabeth Ward
Hortensia Amaro
Carolyn Rimes
Kathryn Coltin
Vincent Mor
Dan Friedman
Marjorie Greenberg
Dale Hitchcock
Dr. Richard Frank, Harvard Medical School
James Michel, Mass. Mental Health Substance Abuse Carve Out
Dr. Barbara Dickey, Harvard Medical School, McLean Hospital
Anthony Asciutto
Marjorie Porell
Mary Beth Fiske
TESTIMONY OF DR. CONSTANCE CHAN, DR. ANDREW BERGMAN AND DR. CAROL UPSHUR WAS NOT REPORTABLE AND THEREFORE NOT TRANSCRIBED.
MS. IEZZONI: I'd like to get started with the afternoon schedule because we have a very busy schedule, and Elizabeth has to leave to catch a plane to Seattle at about 2:30. So I'd like to have her here as much as possible. I'd like to welcome the afternoon panel. You have a list of us at the tables, so we're not going to introduce ourselves. Maybe you could each introduce yourselves as you begin your comments.
Since we have such a full afternoon, if you could keep your comments brief so we could have a chance for some interaction before the next panel sits. So Dr. Frank, if you could get started for us.
This is being miked for the stenographer; and if you could talk directly into the microphone, it would be helpful to her.
SPEAKER FRANK: My name is Richard Frank, and I'm a professor of health economics at Harvard Medical School. I've been there for about four years. I was at Johns Hopkins before that for about ten. I'm an economist, and I do quite a bit of research on issues around managed mental health and substance abuse care, but I also do a variety of other things related to the managed care industry including work on physician compensation arrangements.
Most of what I'm going to talk about today, I'm going to sort of wear two hats because one of the things that my research has gotten me involved with is working with states in helping them design payment systems for their Medicaid plans.
I've worked on several procurements, following a little bit in the footsteps of Tom. So I'll talk a little bit about some of the data issues related to that, and then I'll end by making a couple of remarks about what I think is the big policy question in Medicaid managed care, at least from an economist's perspective.
Let me start by sort of putting on that hat of the state as buyer and monitor, and sort of as an advisor to that function within a state. It seems to me there are sort of three main goals, particularly in the area of mental health and substance abuse services, but in general, which are: one, getting a good price; two, paying fairly; and three, being able to monitor and hopefully assure quality and access.
Those seem to be three of the things that are foremost on people's minds in a procurement process. Basic data building blocks play a central role in making sure that the organizations that are chosen to manage, particularly the mental health and substance abuse services -- in determining which organization gets chosen, there are some basic data building blocks.
To give you an idea, I don't know how many of you are familiar with the process, usually there's a data book that goes out to prospective bidders; and then bidders tend to learn a lot about the states along with a variety of other specifications, and they put in the bid.
And those building blocks involve information on utilization, access to services, major classes of services, the prices paid, past spending, the diagnostic groups using various services; and all this is done monthly and usually applied to a relevant denominator, and that denominator may be cut in a variety of ways.
Most simply perhaps AFDC and SSI in Medicaid, but other states, you know, a variety of states do things in other ways, classifying some groups as severely mentally ill, et cetera.
Those basic building blocks, one would start off believing, would sort of be readily available, and it turns out that that's not entirely true. And let me give you an example.
I was asked by the Auditor General in the state of Arizona about a year and a half ago, and I know Arizona is one of your case study states, to sort of review the bidding process and the procurement process for Maricopa County's behavioral health and human services.
The first thing they did is they sent me all the information that was provided to bidders. I thought, ah-ha, a reasonable place to start is can I create an idea of what capitation rates might be so I have an idea of what I'm bidding on, okay.
And so what I did is I took all the data from the data books and put it into a spreadsheet just like all the bidders did and tried to sort of model how I would put together a bid, and I couldn't do it without making some really big assumptions. And they were not trivial assumptions.
They were assumptions that would swing things around 10, 20 percent, okay. And when you're in a very competitive market trying to bid, that can mean the difference, first of all, between winning and losing the bid; but more importantly, if you won it, between keeping and losing your shirt. It turns out that there are several consequences to that happening, and Arizona is actually a very interesting example.
One is kind of having mismatches in the data, which is really the big problem -- timing doesn't line up, or service codes are incomplete, things like that. So you get sort of mismatches. It's hard to construct an overall capitation rate and then to decompose it again.
What that does is creates uncertainty for the bidder which results in higher bids because in general if you're uncertain, you're always going to go high.
The other thing it does is it causes bidders to withdraw. And in Arizona, we interviewed a variety of people who were interested, who had written in to get information and then wound up not bidding. And we repeatedly heard that one of the main reasons was there was so much uncertainty in the data that they were scared away.
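For readers less familiar with how a data book becomes a bid, the arithmetic Dr. Frank is describing looks roughly like the sketch below: a per-member-per-month (PMPM) cost built up from utilization and prices by service class, with the bidder forced to supply assumptions wherever the data book is silent. Every figure, field name and assumption here is hypothetical; this is an illustration of the sensitivity he mentions, not a reconstruction of any state's actual data book.

```python
# Illustrative sketch of building a capitation (PMPM) estimate from a data book.
# All figures and assumptions are hypothetical, not actual Arizona or Massachusetts data.

# Data-book inputs: annual utilization per 1,000 members and average price per unit,
# by eligibility group and service class.
data_book = {
    # (eligibility_group, service_class): (units_per_1000_per_year, avg_price_per_unit)
    ("AFDC", "inpatient_days"):    (120.0,  650.0),
    ("AFDC", "outpatient_visits"): (900.0,   80.0),
    ("SSI",  "inpatient_days"):    (480.0,  650.0),
    ("SSI",  "outpatient_visits"): (2100.0,  80.0),
}

member_months = {"AFDC": 600_000, "SSI": 150_000}  # hypothetical enrollment

# Assumptions the bidder must supply where the data book is silent.
# Modest changes here can swing the estimate 10-20 percent, which is the point above.
assumptions = {
    "claims_completion_factor": 1.08,  # claims lag: observed spending understates true spending
    "trend_factor": 1.05,              # expected cost growth from base year to contract year
    "management_savings": 0.90,        # expected effect of utilization management
}

def pmpm_estimate(data_book, member_months, assumptions):
    """Return the blended per-member-per-month cost implied by the inputs."""
    total_cost = 0.0
    total_mm = sum(member_months.values())
    for (group, _service), (units_per_1000, price) in data_book.items():
        avg_members = member_months[group] / 12.0      # average enrollment for the group
        annual_units = units_per_1000 / 1000.0 * avg_members
        total_cost += annual_units * price
    adj = (assumptions["claims_completion_factor"]
           * assumptions["trend_factor"]
           * assumptions["management_savings"])
    return total_cost * adj / total_mm

base = pmpm_estimate(data_book, member_months, assumptions)
# Sensitivity: nudging a single assumption moves the whole bid.
assumptions["claims_completion_factor"] = 1.15
high = pmpm_estimate(data_book, member_months, assumptions)
print(f"PMPM estimate: ${base:.2f}; with a higher completion factor: ${high:.2f}")
```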
The State of Massachusetts is another example where I served as an advisor on sort of designing the payment system, again, trying to fill in for Jom here; and there, the data was actually used on both sides.
The bidders all got a data book which was actually a lot better than the one in Arizona, but then the state staff put together a model to pretend they were a bidder so that they could judge the incoming bids against theirs.
Internally, we had an exercise where we essentially put together our own capitation bid talking to the folks, talking about what was reasonable programmatically to assess the incoming bids. It turns out there were enough holes in the data so that we had to make a lot of guesses and assumptions.
Well, the problem then is when you go through the evaluation process, what you wind up doing very often is not comparing what people say they're going to do in terms of cutting away from baseline on services.
What you do is you've got two things. You've got what assumptions they made about the missing baseline data and then what assumptions they made about how they were going to manage the care. That makes it tougher to find out who the good guys are and who the good guys aren't.
And so we were there knowing that in order to make our model work, we had to make assumptions figuring that they had to do the same things. So the whole process becomes more subjective, more uncertain and more difficult.
The second set of areas which probably Barbara is better at talking about is just monitoring things, and things are perhaps unnecessarily simple. We had very simple measures, and actually simple for the most part is good if it's meaningful.
The question is, Are we making the best use of the data? I think we underused pharmacy data a lot. Since we're tight on time, let me just jump ahead to the issue that I think is sort of the fundamental policy issue at least in the mental health and substance abuse area for Medicaid, or one of the fundamental issues, which is do you carve in or do you carve out? And that's sort of a raging debate.
Well, I'll give you my view on where this turns. I think this turns on whether you are more afraid of adverse selection and are not as confident as you might be in risk adjusters versus how nervous you are about coordination of care issues and potential sort of opportunistic behavior when new boundaries are set up. So you're trying to balance that.
And I think in order to kind of learn about this, you have to study pretty carefully who enrolls and why and in what plans?
What are the characteristics of plans that enroll certain types of people?
I was at a meeting recently with a medical director of a large not-for-profit HMO that operates in New England who sort of wistfully said that, you know, they went into the business to change the world and unfortunately had learned over time that the market will not reward you for being the best mental health plan in town. And if you wind up being the best mental health plan in town, the organization that you believe in may disappear.
And this was somebody who, you know, was speaking from reality, not from ideology, obviously. And I think that that really is a real danger, and it's happened in a lot of places; but it doesn't necessarily have to happen everywhere.
And so finding out what the facts are on what drives selection I think is very important, and finding out what the consequences of selection are are very important. And then on the other side, looking at whether integrated plans really do for the most part coordinate care better is the other thing.
Then with those data in hand, I think you can then sort of advise states or at least lay out the options for states clearly, rather than the current situation, where it really is largely hearsay. So there you have it.
One last note, there are some beautiful national experiments happening within Medicaid to get at this. For example, in the state of Maryland, a portion of people who enroll -- I think it's within 30 days -- have a choice. If you wait for more than 30 days, you're assigned.
So you have this fabulous opportunity to find out what kind of distortion really results from selection, you know, something close to a real experiment. If we can make sure that we have enough good data on the past utilization and diagnoses of these people, I think we can make progress on this question, which I think is really the basic one.
MS. IEZZONI: Can I just ask, though, Richard, do we have enough information to be able to take advantage of this national experiment? You left that as a hanging question.
SPEAKER FRANK: Well, I think we can make a lot of progress. I think that if we can, for example, look at a continuously enrolled population, which is not always easy to put together in state Medicaid data, but in principle you can. And you do a good job sort of looking at, you know, characterizing what the illness is that these people might have.
If you can use pharmacy and utilization data, diagnostic data, albeit creatively, I think you have the elements of taking a big step forward on this question. You know, it's a lot of work. I think with a modest investment on the part of states, you can go a long way in answering that type of question.
MS. IEZZONI: Is anybody doing this that you described?
SPEAKER FRANK: There are people trying to do various things. They're certainly not doing it the ideal way that I was sort of talking about. I think that there are -- SAMHSA has just funded an evaluation of the implementation of substance abuse managed care reform in several states.
MS. IEZZONI: We should find out about that.
SPEAKER FRANK: CSAP is the agency. Joan DeLeonardo is the person, the project officer.
MS. IEZZONI: Okay. Thank you. Okay. James Michel, please.
SPEAKER MICHEL: Hi, my name is actually James Michel. My friends and colleagues call me Jom, J-O-M. Those are my initials. My parents burdened me with that at birth.
It's a pleasure to be here flanked by two heavy-weight researchers with a Harvard affiliation. I'm not a researcher. I'm just a public policy maker turned consultant. I'm a consultant now, and my interest is in working with public purchasers -- smart, prudent purchasers -- and with provider organizations, actually, in the mental health and substance abuse field.
And I think I will speak more from the public purchaser perspective to you today. I was encouraged by the set of questions that you posed to us. In my opinion -- well, brothers and sisters, I am a believer in managed care. I don't know if I'm preaching to the converted or not.
I think the possibility through a single accountable entity having long-term responsibility for the overall health of a population makes sense. I think managed care done well means bringing to bear a prevention orientation, a quality management orientation.
I think it's realistic in that it also brings to bear the fact that we cannot afford the deluxe, you know, that we have to ration our resources, which I think is one of the fundamental arguments for capitation financing as opposed to fee-for-service financing, which tends to overserve the most healthy part of the population.
I'm very excited by the promise of managed care. Having served as a public purchaser, it was fun to be in the role of not saying to people who were providing services, no, we don't have enough for that, but to be in the role of whipping the managed care organization and saying, show me that you're doing a quality job.
You know, to be in that role I think certainly is a relief to public policy managers. I think the trick or the task for public purchasers is to be very sophisticated and knowledgeable about performance management and to be very focused.
As I speak to people who are starting these programs in different parts of the country, one of the common mistakes that we made in Massachusetts in writing the first contract with our managed behavioral health care organization, and one we just repeated, is we tried to address the first 150 priorities. And what we discovered is you really can only focus on about the top 5 to 10.
So when we talk about performance measurement, I'm in favor of organizing information into a meaningful and manageable report card format. And you know, what has in fact happened is there is no single report card. There are a few models that are emerging.
I don't think there's consensus on those, and I'm referring now to the MHSIP, to PERMS which the managed care organizations, themselves, kind of came up with. I think HEDIS in its own way has paid some attention to mental health, although it's certainly not relevant -- it certainly isn't relevant to the Medicaid population.
But what's happened is each state sort of over time has developed, through contract relationships with managed care specialty organizations -- managed behavioral health care companies -- a report card or potential report card in the form of that contract. And again, my main message is let's try to keep it basic and simple at least to start out.
And the key questions are, Who's being served out of the eligible population? What did they get? And are there any meaningful ways in which we can know the correlation between what they get and how they do?
Outcome. I would say that in terms of the access issue, who's being served, I believe penetration is the one true measure. That means, you know, of the eligible population, what percentage gets services? I think you want to stratify that by whatever meaningful population characteristics you're able to collect.
The things we focused on in Massachusetts which are meaningful are disabled versus AFDC. To the extent that you can get other demographic information in a reliable fashion, that's worth looking at. I think you want to look at penetration rates also stratified by type of service. That's obviously very meaningful.
The whole point or top priority for us in Massachusetts was to move care in the Medicaid benefit from inpatient to ambulatory and diversionary services. When our program started, we calculated, this is back in '91, sort of at the base year prior to implementation that we were spending 62 percent of our Medicaid mental health substance abuse benefit on inpatient services.
The modeling that was done by the bidding organization that won the first contract basically flipped that ratio. And that made sense to us, and in fact, we were successful in reversing that.
So penetration by type of service is something you want to measure, and I think there, to the extent that there are other natural breaks like region, you know, I mean, to the extent that local systems emerge, you want to look at variation by region as well. But again, penetration rate I think is one major thing in management.
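As a concrete illustration of the penetration measure Jom describes -- the share of the eligible population receiving any service, stratified by eligibility category and by type of service -- here is a minimal sketch. The member IDs, categories and counts are made up for the example and are not Massachusetts figures.

```python
# Illustrative penetration-rate calculation: share of eligibles with any claim,
# stratified by eligibility category and by service type. All data are hypothetical.
from collections import defaultdict

eligibles = {  # member_id -> eligibility category
    "m1": "AFDC", "m2": "AFDC", "m3": "AFDC",
    "m4": "SSI/disabled", "m5": "SSI/disabled",
}

claims = [  # (member_id, service_type)
    ("m1", "outpatient"), ("m1", "outpatient"),
    ("m4", "inpatient"), ("m4", "outpatient"), ("m5", "outpatient"),
]

def penetration_rates(eligibles, claims):
    served_overall = defaultdict(set)   # category -> members with any service
    served_by_type = defaultdict(set)   # (category, service_type) -> members
    for member, service in claims:
        cat = eligibles.get(member)
        if cat is None:
            continue                    # claim for someone not in the eligibility file
        served_overall[cat].add(member)
        served_by_type[(cat, service)].add(member)

    denom = defaultdict(int)            # category -> number of eligibles
    for cat in eligibles.values():
        denom[cat] += 1

    overall = {cat: len(members) / denom[cat] for cat, members in served_overall.items()}
    by_type = {key: len(members) / denom[key[0]] for key, members in served_by_type.items()}
    return overall, by_type

overall, by_type = penetration_rates(eligibles, claims)
print(overall)   # AFDC: 1 of 3 eligibles served; SSI/disabled: 2 of 2 served
print(by_type)   # same idea, broken out by inpatient versus outpatient
```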
In terms of quality of care, I think our field is far from having consensus on what constitutes quality. I mean, we can look at sort of appropriateness. Are people being served in accordance with their treatment plan is one way of looking, or you know, in accordance with regulations or in accordance with what we say is best practice.
That's a much more -- on that meaningful-manageable scale, that's much tougher on the manageable end to collect systematically over a large system. But you know, that's one possibility.
I think more importantly on the quality end though is where are we going with outcomes? The stuff that is manageable, that we have managed to measure with some consistency in a number of different initiatives, are really interim outcome measures, and that would be readmission rates.
We seem to think that if there's rapid readmission, it's usually an indicator that something is wrong with the system. People should be able to be discharged from a hospital and return to their baseline functioning.
I think other sort of similar measures that are more process measures and are not true outcomes, but which we can grab onto as having some significance for the functioning of a plan, are the lag time between hospital discharge and first outpatient session.
In our administration of the Massachusetts program, when we first started to measure that, we were amazed to see that the average time between hospital discharge and first outpatient claim was in the 15-, 16-day area, which was ridiculously long, particularly given that we were shortening lengths of stay and discharging people probably a little bit more quickly.
That's another interim outcome measure. I think the stuff for me that has more promise for the future probably has to do with community tenure. You know, that's a manageable and meaningful measure particularly for the seriously and persistently mentally ill population.
I think functioning in terms of employment is something that I would like, you know, I believe there's a consensus on. I believe we can make it operational. I believe it's manageable. I think we just need to come to more agreement, perhaps have better leadership, in terms of agreeing that that's part of what you measure for all of our populations.
And for kids, it's school functioning. Are kids going to school, and are they passing? You know, if we could sort of agree that those simple measures were kind of the outcomes that should be collected on a national basis, I think the field would be in a lot better shape.
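The two claims-based interim measures mentioned earlier -- rapid readmission after an inpatient discharge and the lag from discharge to the first outpatient claim -- can both be computed from an ordinary claims extract. The sketch below assumes a flat list of claims with member ID, setting and dates; the 30-day readmission window and the field layout are illustrative assumptions, not a published specification.

```python
# Illustrative computation of two claims-based interim measures:
#   (1) readmission within a window after an inpatient discharge, and
#   (2) days from discharge to the first subsequent outpatient claim.
# Field names and the 30-day window are assumptions for the sketch.
from datetime import date

claims = [
    # (member_id, setting, admit_or_service_date, discharge_date)
    ("m1", "inpatient",  date(1997, 1, 3),  date(1997, 1, 10)),
    ("m1", "outpatient", date(1997, 1, 26), None),
    ("m1", "inpatient",  date(1997, 2, 1),  date(1997, 2, 6)),
    ("m2", "inpatient",  date(1997, 3, 5),  date(1997, 3, 12)),
    ("m2", "outpatient", date(1997, 3, 15), None),
]

def interim_measures(claims, readmit_window_days=30):
    inpatient = [c for c in claims if c[1] == "inpatient"]
    outpatient = [c for c in claims if c[1] == "outpatient"]

    readmissions, discharges, lags = 0, 0, []
    for member, _, admit, discharge in inpatient:
        discharges += 1
        # (1) Any later admission for the same member within the window?
        if any(m == member and a > discharge and (a - discharge).days <= readmit_window_days
               for m, _, a, _ in inpatient):
            readmissions += 1
        # (2) Days until the first outpatient claim on or after discharge, if any.
        follow_ups = [(svc - discharge).days
                      for m, _, svc, _ in outpatient
                      if m == member and svc >= discharge]
        if follow_ups:
            lags.append(min(follow_ups))

    readmit_rate = readmissions / discharges if discharges else 0.0
    mean_lag = sum(lags) / len(lags) if lags else None
    return readmit_rate, mean_lag

rate, lag = interim_measures(claims)
print(f"30-day readmission rate: {rate:.0%}; mean days to first outpatient claim: {lag:.1f}")
```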
Going back to the meaningful manageable. In my experience, we collect data in a couple of different ways. The most reliable is claims based. We get that information because people want to get paid for their services.
We can kind of grumble about the different MMIS systems in the Medicaid world; but in fact, when providers make claims at this point in time, you know, the HCFA 1500, whether it's in Massachusetts or California, it's the same form. It's collecting basically the same information.
I'm a great proponent of using claims-based information primarily because we're already doing it. When we start talking about other kinds of important, meaningful information and we're going to start relying on surveys, you know, generating survey-based information and collecting that, doing chart reviews, you're now talking about additional cost, additional administrative resources, and I think that has to be carefully evaluated.
I think resources are tight. If you're going to require or try to collect non-claims-based information, pick stuff that you know you can do well, that you know you can do over time, and where there's some consensus on the part of all the moving parts of a large system -- i.e., the provider, the client, the advocate, the managed care organization and the policy maker or purchaser -- that this is information worth collecting.
I guess one sort of other issue that I want to make a couple of comments on is what are some of the barriers or impediments? You raised that question to doing this work. And I think one is that just on the purchaser side of things, we really have multiple purchasers or multiple funders of the mental health and substance abuse system.
Medicaid which has -- because it really is an insurance organization -- led the way into managed care is really only a minority funder in most states of the mental health system and the substance abuse system and the child welfare system and the juvenile justice system and the MRDD systems.
Whereas they had as little as five to ten years ago been an indemnity insurance organization and merely a payor, they're now at the policy table with the Mental Health Authority or with the, you know, the addictions director or with the child welfare director making policy. But you know, those other agencies are funding non-Medicaid services. And if we want to really understand way back to those basic questions who gets served, what services do they get and what are the outcomes, we have to look at the combination of what services do they get from Medicaid? What services do they get from the Department of Mental Health?
I'm segueing beautifully for Barbara here to truly understand kind of how those inputs, what the inputs are, you know, who's being served and what they're getting much less how they're doing, whether there's a correlation between what they get and what their outcomes are.
So you know, in this era of Medicaid managed care, I think a big issue is what's the status of interagency relationships between Medicaid and the sister agencies who are co-funding the systems for these vulnerable populations?
MS. IEZZONI: Thank you. Barbara Dickey, please.
SPEAKER DICKEY: Thank you. Yes, that was perfect, I guess. I'm Barbara Dickey. I'm an associate professor of psychology in the Department of Psychiatry at Harvard Medical School, and I work at McLean Hospital.
And I'm going to talk about the research, the data aspect of the research that I've been doing on Medicaid managed care in Massachusetts, but I'm also going to talk a little bit about sort of from the provider view; and that is, How does McLean Hospital respond to requests for data? So I'll sort of try to weave that in as I go along.
I certainly think, and maybe you've heard this many times already today, that the big issue is quality of care. And certainly from the work that I've done so far using paid claims data, we can see, particularly with the seriously mentally ill, and this is the population that I've spent most of my time studying, that under Medicaid managed care, the total number of admissions dropped.
The total number of people admitted dropped. The length of stay dropped. Almost no matter how you look at it, hospitalizations clearly went down under managed care even though there was a pretty sizeable increase in the number of enrollees starting in 1993.
And when I say total, I mean not only admissions to general hospitals that are reimbursed by Medicaid, but also admissions to the state hospital paid for by the Department of Mental Health.
And as Jom said, we felt early on that with this particularly disabled population, it was important to understand what the effect was by examining not just what happened in Medicaid, but what happened with the Department of Mental Health because there was some expectation that if you pushed down Medicaid, you'd pop up the Department of Mental Health. But we found no evidence that that occurred.
This shift from inpatient care to more community-based care is certainly consistent with the mental health policy, sort of national level policy; but I think many advocates have been very concerned.
And I have to agree that there's nothing in my data that tells you one way or the other whether or not somebody was denied admission, and then we don't know what kind of care or what appropriate care they might have gotten in place of having an inpatient admission.
So I appreciate the concerns about quality, but I think they are very difficult to answer using just paid claims data. We can describe a lot, and I agree with Richard, we've really not done enough with the pharmacy data. But we can't go all the way. And I think if we want to study quality of care, we really need to have access to and use more clinically-driven data.
There's another I think big issue looming out there. I'll talk a little bit more about quality in a minute, but I won't take much time. But I think there are a lot of issues around the provider community, and maybe some of these are sort of, I don't know if labor market is the word, but related to that.
Also, we know almost nothing about what happens when we use performance indicators. In fact, when we monitor care using performance indicators, what's the evidence that care actually improves, and how do we know?
We now have an increasing number of practice guidelines. Practice guidelines are intended to improve the quality of care; and again, I don't think we really know how practice changes when practice guidelines are implemented and if, in fact, when those practices are changed, do we have better outcomes? So I think there's a whole series of probably studies that could be done in that area.
And again, I think we need clinical data to do some of that. I come back to that because the whole issue of confidentiality around data whether it's paid claims data or clinical data is a big issue in this state. And I think particularly in the last year or so, I've seen a shift in the climate of access to data. And I'm not sure whether that will just pass or whether this is going to be a continuing issue. But it certainly gives rise to my thinking about what are the ways in which data can be made both simultaneously more secure to kind of buttress these concerns about confidentiality; and at the same time as a researcher, how can I continue to do the kind of work that I want to do and be able to do the things I want to do if there's greater constraints around data?
And certainly in the area of computerizing medical records or computerizing clinical data, it seems to me there is -- I know that's incredibly expensive, but it's also an avenue which would lead to providing access to researchers like myself. If you had various levels of sort of passwords or something, that would allow me to look at a medical record without knowing whose medical record it is but seeing what actually happened to someone. Right now, that's almost impossible.
We've used Medicaid paid claims as I said, and we merged those data with data from the Department of Mental Health. We do this under an inter-agency agreement which has allowed us to do this work, but it does mean we have to have some way at the personal level to merge the data.
Again, the issues of confidentiality arise; and I think different agencies in this state, and I imagine in other states, don't use the same identifying numbers, don't use a unique numbering system. So there's a lot of trying to put all this together sometimes clumsily, and I should say just for the record that all the work that I do I do only after it's been cleared by the DMH/IRB, by counsel at Medicaid, by Harvard Medical School and in the current study, quite a few other people as well. And we take all the current measures to ensure confidentiality of the data we're working with.
We're now funded by NIMH to study quality of care for this population; that is, for people who are seriously mentally ill and in the managed, the MassHealth managed care plan. And we're collecting primary data; and again, collecting primary data from beneficiaries can only be done after certain safeguards around confidentiality are met.
And I think this has really been, I don't want to say an impediment because that sort of denies the importance of confidentiality, but has really made our job difficult. And what happens when you invite people to participate in our protocol, they have to have had a recent contact with a screening, an emergency screening team because we want to enroll people who've had an acute episode.
Managed care addresses acute care, not long-term care. So we want to focus on acute care. But as researchers, we can't go directly to people ourselves, the enrollees. We have to go through providers, and that means that in the end we won't have as many people enrolled in our study as I would like, and I think it's going to probably be a biased sample.
Although, that's going to be very hard for me to figure out. But we will be collecting outcome data and clinical data from these enrollees, and we will be reviewing, with their consent of course, we will be reviewing their medical records.
We will be assessing those records by using practice guidelines that have already been published for schizophrenia and looking to see to what extent does the record match the practice guidelines.
Well, I talked a little bit about how accessible these data are. Let me just say a few words about performance standards and performance indicators and how they look from the provider end.
At McLean Hospital, we've developed a set of performance indicators that we use internally as part of our continuous quality improvement program. And one of the questions you had is, well, what does this cost a provider?
I can tell you this, as a hospital with I think about 3,500 admissions a year, and it also has an outpatient department and a partial hospitalization program, collecting outcome data and process data costs in staff about $90,000 a year plus computers. And it requires that nursing staff and attendings actually collect the data.
The 90,000 is at the kind of data, analytic and reporting end, well, you know, overall supervision of the program. And this is a system that's been accredited by, that's been approved by JCAHO as part of their whole initiative to get hospitals to collect outcome data.
So I think as this stuff becomes more standardized and as more people are going to do it and if somehow those data are made available, then we will have a better way of looking at outcomes as well as process data across hospitals, let's say within the MassHealth network and perhaps outside of it.
So let me finish by saying I think in terms of what we need and a couple of things about data for the seriously mentally ill. The first is as we're trying to do some comparisons in what I would call an observational study, we've not randomly assigned people as we would in a clinical trial.
We need to do some kind of risk adjustment, and this is an area that's been woefully underdeveloped in mental health, and we have a long way to go. And we need to develop more sophisticated statistical techniques in order to carry this out. It makes a big difference because there's some pretty clear selection bias.
The second is collecting primary data from the seriously mentally ill. I think people have had some concern that the reliability and the validity of such self-report data might be questionable.
And a few years ago, we undertook a study in order to satisfy ourselves that, in fact, asking people to self-report their health status and their psychiatric problems even though they were very distressed and perhaps a little psychotic, it worked. And to our satisfaction, it worked very well. And I personally feel fine about going ahead and doing that.
So I guess as far as recommendations, I think I've tried to say those as I went along. I certainly think we need to find a way to encrypt I.D. numbers so we can share data across agencies within the state without concerns of confidentiality.
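One way to do what Dr. Dickey suggests -- merge person-level data across agencies without circulating raw identifiers -- is for the agencies to agree on a secret key and replace each ID with a keyed hash before release, so that records can still be linked on the resulting pseudonym. A minimal sketch, with hypothetical identifiers and a key that in practice would have to be generated and safeguarded under the interagency agreement:

```python
# Minimal sketch of pseudonymous linkage: each agency replaces its identifiers
# with a keyed hash (HMAC) computed under a shared secret key, so data sets can
# be merged on the pseudonym without exchanging raw Medicaid or DMH IDs.
# Identifiers and the key are hypothetical.
import hmac
import hashlib

SHARED_KEY = b"key-agreed-under-the-interagency-agreement"  # illustrative only

def pseudonym(member_id: str) -> str:
    """Deterministic, non-reversible stand-in for a raw identifier."""
    return hmac.new(SHARED_KEY, member_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# Agency A (e.g., Medicaid claims) and Agency B (e.g., DMH services), keyed on raw IDs.
medicaid = {"123-45-6789": {"inpatient_days": 12}}
dmh      = {"123-45-6789": {"case_management_visits": 7}}

# Each agency releases only pseudonymized records.
medicaid_release = {pseudonym(k): v for k, v in medicaid.items()}
dmh_release      = {pseudonym(k): v for k, v in dmh.items()}

# The researcher merges on the pseudonym; the raw ID never leaves either agency.
merged = {
    pid: {**medicaid_release[pid], **dmh_release.get(pid, {})}
    for pid in medicaid_release
}
print(merged)
```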
We need much better computerized medical data including computerized medical records. I think if I have one last thing it would be to think carefully about carving out the mental health care of people who are seriously mentally ill because of the medical problems that are far in excess of the medical problems of other Medicaid beneficiaries, not just the mental health.
That's a question in my mind about whether they're getting adequate medical care given the level of their medical disorders.
MS. IEZZONI: Thank you, Dr. Dickey. I turn to Hortensia. You're going to lead off with the comments, so why don't you take off.
MS. AMARO: Thank you for your comments. The area, the issue of behavioral health care and how we integrate it in our recommendations is really important to us.
One of the issues that I've brought up before in the Committee, which a couple of you touched on, is how really the issue of substance abuse services, to some extent, but maybe a little less so mental health services, is really distinctly different from other health care in that there is at least sort of one other parallel system through which I would probably think most people really receive treatment, or perhaps it's -- I'm not sure whether most of the substance abuse treatment is provided through there.
That is, the public health system, through the state block grant dollars that really come down. It's really outside of the system that provides general medical care, at least outside of the Medicaid managed care system.
So you know, I guess that's also the case in some other areas I can think of, like vaccinations, you know, where the public health system might be picking up some things.
And my question is, What's the implication of this then for assessing substance abuse within managed care? You know, what's being delivered? Outcomes for -- so for example, if we're trying to look at what data systems are needed in order to track services and the impact of the services and outcomes, how do we think about perhaps really outcomes that may be attributable to services not funded under managed care but provided through the public health system?
Any thoughts on that? I mean, you referred some to the need to link these things, information throughout these systems.
SPEAKER MICHEL: I think you're correct in that certainly in Massachusetts, and again in most states, there is a separate provider network that focuses on addictions treatment.
Actually, in the olden days, there used to be sort of an alcoholism treatment system and a drug, a heroin and you know, hard drug abuse treatment system. Those have come together into the substance abuse treatment services system. It is separate and distinct.
There's a big national kind of argument in the "behavioral health" field as to how integrated or not those should be. We do know, and Barbara can corroborate this, at this point roughly 60 percent of folks who are diagnosed with serious, persistent mental illness also have substance abuse comorbidity.
Actually, I firmly believe that for most of the folks who are in the substance abuse treatment system, there's a huge incidence of trauma and a lot of mental health needs. So we do have separate systems, and to the extent that Medicaid is a co-funder of those two separate systems and we want to know kind of what the outcomes are, yes, I'm reiterating, I think we need to try to find ways.
And whether it's as concretely as Barbara suggested by doing a better job of identifying who the clients are, assigning unique numbers, finding ways not to violate confidentiality but to do population level statistical analyses, we have to find a solution to that.
Part of the services are being delivered by the public health agency, block grant, state general fund dollars, and part of it is being funded -- you know, some states have gone further in, as New York says, the verb to Medicaid, Medicaiding a service -- but do we have any sense of what proportion of substance abuse services come from each of those systems?
MS. IEZZONI: You mean Medicaid versus the -- or state block grant funded? I know what it is in Massachusetts. The substance abuse budget here is in the $40 to $50 million range, and about half of that is what was spent on substance abuse services through the Medicaid managed care initiative.
So you know -- or I mean, if we looked at substance abuse within Medicaid managed care out of the context of that and these same folks are also accessing that service, some of the outcomes that we track may be misattributed to Medicaid managed care?
SPEAKER MICHEL: I think that's true of all these populations; very few people are pure Medicaid clients who only get Medicaid-funded services. You know, the seriously, persistently mentally ill, the child welfare populations, the chronic substance abusing populations are all folks who are funded by multiple payors.
MS. IEZZONI: Barbara, I see you nodding your head. Do you have anything to add to that?
SPEAKER DICKEY: Well, I know that we're always surprised when we see how few people who are seriously mentally ill receive treatment for substance abuse. The proportion is in the neighborhood of maybe 10 percent of them actually are treated, and maybe another 4 or 5 percent are identified, with a secondary diagnosis, as having substance abuse.
But that's one database I've never tried to merge with ours, so I have no idea, you know, if they're getting treatments someplace else.
MR. NEWACHECK: Do either of you, are either of you aware of any states that have made progress in terms of linking together various data of services in the mental health area?
SPEAKER FRANK: Well, yeah, there are a number. Arizona for all its foibles, each of their carve-out agencies have responsibility for all of it. So they've had to do some of that.
Massachusetts has had to do some of that on the mental health side for everything that's not so-called continuing care because in the new contract -- the contracts are different for technical reasons -- the management and data systems are the same for those folks. And so, and there are some other states that are doing it.
Those are the two I'm most familiar with. They're places to learn from.
SPEAKER MICHEL: Iowa is about to let an IFB.
MS. IEZZONI: Hortensia.
MS. AMARO: I had a follow-up question. I guess from many different perspectives and interests, there's an interest in looking at cost-effectiveness.
I think in substance abuse and mental health for many reasons, I guess for some people because of the sense that actually this could end up cutting costs on the health care end of things, do you know of any efforts, or are there any efforts in our state, to link substance abuse and mental health services costs and outcomes to medical outcomes and savings, say on the hospitalization end, not for substance abuse hospitalization but other types of health outcomes? You know, linking it to that other pocket that's --
SPEAKER MICHEL: You know, I go to a fair amount of conferences. A lot of people talk about medical offset. It makes intuitive sense to me. I haven't seen, I haven't seen the studies.
SPEAKER DICKEY: I think it's been very difficult to show.
SPEAKER FRANK: Well, there's two theories about why that is. One is that it's the Holy Grail, and the other is that it isn't and it really is there. And either way, I think there are a lot of people out there trying to study it, and I think there's some hope that if it's out there, it will be found because I think methods and creativity are improving. But the evidence there is not very persuasive today.
SPEAKER DICKEY: I would say there's pretty persuasive evidence at least from my data, that I'm persuaded from my data that people with substance abuse and a mental illness have far higher psychiatric costs. They have much higher medical costs, expenditures and in almost every way just are clearly different than the people who we do not identify as substance abusers.
SPEAKER FRANK: I think that's been well documented. I think Barbara's done a great job documenting that, but I think that's very different from saying there's an offset in looking at the implications of treatment and good outcomes in mental health on other medical costs.
SPEAKER DICKEY: We understand.
MS. IEZZONI: Kathy, you had a comment?
MS. COLTIN: I think two of you at least mentioned readmission rates as a quality measure, and I don't know if you're aware that both the mental health readmission rate and the substance abuse readmission rate have been proposed to be deleted from the measurement set?
SPEAKER DICKEY: Did I hear you say deleted? Yes?
MS. COLTIN: So the health plans would no longer report those two measures. That actually is going out for public comment today. That will be on the web site, so you may want to comment on that.
The rationale for that actually is that it is such a small percentage of a health plans' members who receive inpatient mental health or substance abuse treatment; and therefore, the denominators for these measures are very small in many of the health plans.
And therefore, their usefulness for comparative purposes is really pretty low. So it's not that it isn't useful internally as a quality improvement measure within a given plan, and certainly for some of the larger plans the rates are more valid and more useful; but that is the proposal. Given that both of you seem to find them useful, you may want to comment on that.
SPEAKER MICHEL: There's a difference between the populations that the plans, you know, that mainstream plans serve and the Medicaid population. The Medicaid population --
MS. COLTIN: It kind of depends on the size of your Medicaid population.
MS. IEZZONI: Barbara, you have a comment?
SPEAKER DICKEY: I was going to say, I feel fine about it being dropped because I think even though internally up at McLean Hospital we might well want to look at that to be sure that our discharge planning is in place and so forth, at a population level, think about this.
If you are doing a good job maintaining people in the community and preventing hospitalizations, the few people that do get hospitalized are likely to be much sicker. And therefore, one might expect that readmission rates would go up the more successful you are at maintaining most people in the community. I think that's one of those measures that it's not entirely clear what it means.
MS. IEZZONI: Thank you, Elizabeth, for being here for the last few days. Anybody on this side of the table have comments?
SPEAKER FRANK: I have one last suggestion if you would. I think one thing we hadn't touched on, I thought Jom was going to raise it. Since he didn't, I will, which is I think that states are in a strong bargaining position with managed care companies at least for awhile longer, so therefore they can ask.
I think one of the things that is an opportunity to sort of get at least half a loaf, on a large scale, of what Barbara was proposing is to ask for UR data and the utilization review screens because you get a lot more clinical data from that. It's clinical data in which all of these organizations have a stake.
MS. IEZZONI: What would be the confidentiality implications of that?
SPEAKER FRANK: In a way, I was thinking about that actually a lot. It's not a whole lot different in some ways than any other claim because it doesn't have the kinds of detailed things that a chart has; but it does, for example, have a presenting complaint.
It has some information on history, so that it in a way mimics some of the things that are available on a claim, only more so. So it's probably halfway in between sort of a chart and a claim in terms of confidentiality.
I'm not trying to say it's the same as a claim; but I think it's quite a bit less intrusive than going into a chart or talking to a patient. And for that reason, I thought that we're sort of -- and the information systems are now at a point where you could do that.
I have a little project where we're sort of shepherding a project like this through the IRB process. It was handled more like a claim than I thought it would be.
MS. IEZZONI: This morning, we heard from somebody, I can't remember who it was exactly, who said that focusing on the chronically, seriously mentally ill is a good thing; but it's really missing a larger body of people who have mental health needs and that we're really missing a large number of people who could benefit significantly from mental health, from mental health interventions.
Do any of you have any comments on that and how we might think about quality of care for those people who may not be in the mental health system but could potentially benefit from it?
SPEAKER FRANK: There's two issues that I think you've raised. One is how do we know about people who aren't in treatment because we don't see them in any of our usual data systems?
And the other one is how do we know about people who are in treatment but not for schizophrenia, bipolar disorder or, well, serious forms of depression according to SAMHSA's definition, whatever that is.
And I think that, that was sort of what I had in mind. That was some of the use of the pharmacy data, at least getting at people in treatment, because the NCQA is in part going that route on depression.
I think there are sort of similar things you can do with markers of care for alcohol treatment, markers of care for anxiety disorders, not just psychiatry, but other markers and other serious forms like that where you can get at least little windows into the process and care. And that's sort of along the line of the standards of care. In terms of the people who aren't treated --
MS. IEZZONI: You use the disability model. You know, people are talking about functional assessments as people enroll in health plans. Well, there are also assessment tools for assessing the need for mental health services.
SPEAKER FRANK: Well, there is for substance abuse that I'd be more willing to bet on. The screening in mental health except for depression is pretty limited.
MS. IEZZONI: I was thinking of depression, but that's probably the biggest body of folks who - -
SPEAKER FRANK: It's about half.
MS. IEZZONI: Yes, who aren't being treated who would potentially benefit from it.
This is kind of a very open question. Do you think that some sort of screening of people the way that plans are thinking of screening for functional impairment should also address mental health issues?
SPEAKER DICKEY: I think there's been a lot of effort to get primary care doctors to screen for depression. When the studies come forth, we'll kind of have to examine that.
Everybody moans and groans, the primary care docs moan and groan because we have seven minutes, and we have to screen for 472 disorders as well as take care of the person's primary complaint.
MS. IEZZONI: This wouldn't be the primary care doctor. It would be the plan enrolling people which is different and raises a whole other set of issues, quite frankly, that some of us might be concerned about. I just wanted to hear what your opinions might be.
SPEAKER MICHEL: Everybody does the SF-36 upon enrollment.
MS. IEZZONI: I wouldn't recommend that.
SPEAKER FRANK: The Maryland Health Choices program is doing that for substance abuse.
MS. IEZZONI: Oh, interesting. Any other programs that you know around the country that are doing something similar?
SPEAKER MICHEL: Just one comment. You know, in Massachusetts right now, there are probably about 65,000 people this month who are going to be seen for mental health services in the Medicaid managed care plan of which less than 5 or about 5 percent are classified as seriously mentally ill.
So you're talking about a lot of poor women and their kids who have mental health needs. I mean, in my experience, people don't lightly ask for mental health services. There's still that stigma attached to that. So those people really need services. What you're suggesting is that there's a huge --
MS. IEZZONI: I'm not suggesting. I want to hear your opinion.
SPEAKER MICHEL: Well, the thought that there is -- there may be, I mean, I don't know. I am biased that this state does have a rich benefit and does serve people; and in other parts of the country there's standard funding, and it's not really an issue. I don't think there are a whole lot of people in the woodwork who need services who aren't getting it.
MS. IEZZONI: Oh, that's interesting. Do others share that same feeling about Massachusetts?
SPEAKER FRANK: It depends on what you mean by need. But I would say that do I believe that the difference between the treatment prevalence and the overall prevalence represents unmet need? No. Do I believe that we have all the needs met? No. So it's somewhere in between.
MS. IEZZONI: Okay. Well, thanks for bearing with my probing questions. Are there any other comments, anybody? Why don't we thank the panel then for giving us your insight into this issue; and why don't we take a ten-minute break and reconvene at ten to 3:00 and just keep on going for the rest of the afternoon.
(Short recess taken.)
MS. IEZZONI: I think we'll get started because we actually have a full remainder of the afternoon even though it may be sparse. There are four more people that are going to be sitting where you're sitting at about four o'clock. So we should get started.
Hopefully you've had a chance to pick up a leaflet at the front of the room so you know who we are, and you got our lengthy letter with our lengthy list of questions. I'm not sure exactly how you're planning to orchestrate your comments among the three of you.
Why don't I just let you do it the way that you'd like, but could each of you introduce yourself and briefly say what your position is as you start your presentations?
SPEAKER PORELL: I will be beginning the presentation for the staff who's here to represent DMA. My name is Marjorie Porell, and I'm a senior research associate at the Systems Department at the Division of Medical Assistance.
MS. IEZZONI: And can I just ask people to speak directly into the microphone because the transcriptionist is having trouble hearing some of this.
SPEAKER ASCIUTTO: My name is Anthony Asciutto. I'm Director of Quality Management for the PCC plan at the Division of Medical Assistance.
SPEAKER FISKE: I'll be going third. My name is Mary Beth Fiske. I'm a director of the HMO program for MassHealth.
MS. IEZZONI: Thank you.
SPEAKER PORELL: Well, after looking at the objectives for the overall content of testimony, I'm here to provide a focus from the DMA's systems department, in particular the information analysis unit.
We are the body at DMA which is responsible for facilitating the data gathering and the development of appropriate mediums to allow for the tracking of effects of DMA's managed care programs.
Issues may arise as part of this process or as a result of the expansion of managed care; and from a data facilitator's point of view, we see problems arising in five areas which I think are just simple bullets on your handout.
They include data sources being used; collection approaches, especially those for data sources that require data collection by DMA or DMA contractors; data elements and assessment methods; resource constraints; and the most important and usually not talked about, intra- and inter-agency cooperation and cooperation between DMA and the MCO's.
On types of data, most of you are quite familiar with this, and I don't mean to be redundant, but we know there are many types of data that alone or pooled could be used to look at the effects of managed care, including public health vital statistics data files, death and birth certificate data, U.S. zip code census data, DMA claims and claims from the contractors whom we subcontract with, DMA eligibility files, institution or provider medical records, hospital discharge abstract data, surveys, compliance or other administrative records which we keep at DMA related to our MCO contractors, and finally the soon-to-be-developed MCO encounter data which DMA is in the process of developing with the assistance of med staff.
I'm presuming they've testified already in some of the more detailed issues.
MS. IEZZONI: Brian spoke yesterday.
SPEAKER PORELL: The biggest issue we find is there tends to be a reliance on few data sources. We have requested that that come up to our shop which typically rely solely on claims. And our concerns is that if we have to move ahead that we would hope HCFA would be open to or push that studies engage on reliance in multiple and supporting data sources and mechanisms to effectively pool data sources to enhance the depth of what is known about enrollees.
Zip code is a common field across many of the sources that I've just listed; with a definition of logical service areas and the use, for example, of zip-code-level census data, eligibility data can be supplemented when looking at the effects of managed care on important racial and ethnic population groups.
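A small sketch of the kind of pooling Ms. Porell describes: using zip code as the common field to attach area-level census characteristics to eligibility records, so that enrollment or utilization can be examined for areas with large racial and ethnic minority populations. The zip codes, percentages and plan labels are hypothetical.

```python
# Illustrative pooling of data sources on zip code: attach area-level census
# characteristics to eligibility records so managed care effects can be examined
# for areas with large racial and ethnic minority populations. Data are hypothetical.

census_by_zip = {  # zip -> share of residents who are members of minority groups
    "02118": 0.62,
    "02138": 0.21,
}

eligibility = [  # (member_id, zip_code, managed_care_plan)
    ("m1", "02118", "PCC"),
    ("m2", "02118", "HMO"),
    ("m3", "02138", "PCC"),
]

def enrollment_by_area_type(eligibility, census_by_zip, threshold=0.5):
    """Count enrollees by plan, split by whether their zip code is a high-minority area."""
    counts = {}
    for member_id, zip_code, plan in eligibility:
        pct_minority = census_by_zip.get(zip_code)
        if pct_minority is None:
            area = "unknown area"            # zip missing from the census file
        elif pct_minority >= threshold:
            area = "high-minority area"
        else:
            area = "other area"
        counts[(plan, area)] = counts.get((plan, area), 0) + 1
    return counts

print(enrollment_by_area_type(eligibility, census_by_zip))
# {('PCC', 'high-minority area'): 1, ('HMO', 'high-minority area'): 1, ('PCC', 'other area'): 1}
```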
The second area where we usually see bigger issues is the collection approaches. The issue we tend to grapple with is we'd like to promote uniformity, not just standardization.
Collecting information on quantity, costs, satisfaction, outcome and the types of care utilized by managed care providers in the treatment of an enrollee's condition needs a uniform process. Depending on the size of the recipient population, the process can be quite extensive.
Even the current data off the traditional claims source, which has a fairly uniform and efficient process for MassHealth PCC plan enrollees, has many obstacles. The magnitude of the effort can be represented by the sheer number of claims, approximately 34 million, that are processed for all of DMA's enrollees in one fiscal year.
As more enrollees move into MCO plans, the development of substitute data sources should focus on uniformity and efficiency and not just standardization.
The third area where we tend to see issues is data elements; the issue in particular is the instruments used for data gathering. They need to utilize standardized fields of information. Oftentimes what we see is arbitrary data elements being picked, data elements that focus on minimizing data gathering, or requirements to select everything and anything on a HCFA 1500 or UB92 or other claims forms.
We think it's important that the selection of data elements be dictated by a comprehensive set of indicators for studying the effects of managed care. That should be the driving force for developing these requests and providing some logic to them.
Fourth is data quality -- protocols and processes to ensure the quality of the data that's being gathered. The issue simply is data quality. For some data sources, the issue of quality controls may not be that pressing; but for others, for example encounter data from MCO's, formal processes to ensure quality data may be needed.
For many MCO's, the primary ambulatory patient care data source is an encounter data system which is usually part of an automated medical record, but this is not always the case depending on the extent of contracted provider services and the method of payment.
For contracted patient care, the primary data source could be claims, another encounter database or possibly a hard copy medical record. Within even one MCO, there could be many such systems, all with varying ways of recording and paying for patient care.
The point I'd like to emphasize here is that for such data gathering efforts, simple but effective protocols, for example, calculation of standard utilization indicators with protocols to question very low or very high levels, are needed to ensure the quality of data that is submitted by managed care organizations. That's what I mean by MCO's. I'm sure you've got this acronym down.
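As an illustration of the simple-but-effective protocol described here -- calculating a standard utilization indicator from each plan's encounter submission and questioning implausibly low or high values -- the following sketch flags plans whose reported rate falls outside an assumed plausible range. The indicator, thresholds and plan figures are hypothetical, not DMA's actual edit criteria.

```python
# Illustrative data-quality screen on MCO encounter submissions: compute a standard
# utilization indicator (outpatient visits per 1,000 member months) for each plan
# and flag values outside a plausible range for follow-up. Thresholds and figures
# are hypothetical.

submissions = [
    # (plan, member_months, outpatient_encounters_submitted)
    ("Plan A", 120_000, 31_000),
    ("Plan B",  80_000,  1_900),   # suspiciously low: possible under-reporting
    ("Plan C",  45_000, 60_000),   # suspiciously high: possible duplicate records
]

LOW, HIGH = 100.0, 600.0   # plausible visits per 1,000 member months (illustrative)

def screen(submissions, low=LOW, high=HIGH):
    flagged = []
    for plan, member_months, encounters in submissions:
        rate = encounters / member_months * 1000.0
        if rate < low:
            flagged.append((plan, rate, "below plausible range -- check for missing encounters"))
        elif rate > high:
            flagged.append((plan, rate, "above plausible range -- check for duplicates"))
    return flagged

for plan, rate, note in screen(submissions):
    print(f"{plan}: {rate:.0f} visits per 1,000 member months -- {note}")
```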
Resources, the issue is they are very limited, frankly. At the systems department, DMA, we take requests from academics, from public sources under the Freedom of Information Act, the Boston Globe. We have very many competing interests.
In order to monitor managed care and ensure data integrity and efficiency in data gathering, storage and accessibility, resources must be available and commensurate with the degree of data collection and gathering effort.
We just want some appreciation of that. If a request is very broad or extensive without some logic provided, it can be quite overwhelming, and not just for Medicaid. We tend to be a little more advanced than some other states, but other states may not be as rigorous in the systems area as we are.
Finally, cooperation, which is oftentimes ignored. As much as people should rely on multiple sources, even within DMA we tend to rely on only one source. We have made extensive liaisons with the Department of Public Health for birth certificate data.
There are obstacles, agency obstacles and legal obstacles, for which I'm not sure you have any solutions, but it would be highly appreciated if you could assist. And cooperation between DMA and the MCO's has improved over the years, particularly in our data gathering efforts.
And Mr. Asciutto should be given some credit for that; it's been quite helpful to have an open dialogue with the MCO's. Those are the points I wanted to raise. I hope you consider some of them.
SPEAKER ASCIUTTO: We thought it would be a good idea to give an overview of the ways in which we rely on data at the agency to help us manage our programs, and I think a lot of what we'll talk about in terms of specific activities will highlight some of the issues that Marjorie raised.
At the end, Mary Beth will kind of wrap up with summarizing some of the points in terms of activities and approaches and things that we hope potentially could be paid attention to. It would be good to start off with a lay of the land.
In Massachusetts Medicaid, we have essentially three different managed care programs for our population. We have the primary care clinician plan, which is the primary care case management program; it covers about 75 percent of our population.
We have about 160,000 in our MCO program or HMO program. Those are our capitated plans of which there are, I always forget the number.
SPEAKER FISKE: It varies depending on when you count the mergers. So somewhere between eight and eleven.
SPEAKER ASCIUTTO: And then we have the behavioral health program which is our capitated program for behavioral health services.
Most of those members are PCC plan members. There's a smaller portion of members not yet enrolled in a medical managed care program that receive services.
We have a number of data activities, some of which are common to members across those three programs and some of which are unique to help manage their particular program, itself. I'm going to start off with some of the activities that are the data pieces that are common to the programs.
We have internally developed a benefit plan report which provides program managers with enrollment data and budget information that goes out on a quarterly basis. We have member satisfaction surveys -- I'll go into a little bit of detail in just a moment -- predominantly our internal general member satisfaction survey and a pilot of the CAHPS instrument.
MS. IEZZONI: Bob, can I just interrupt there? We asked Brian yesterday if we could get a copy of the specs that Medstat has created for your encounter data system, and he said we should ask you. So if you could just consider that and get back to us later about whether it would be possible to get a copy of that as a template for what Massachusetts is using.
SPEAKER ASCIUTTO: I think we can speak about that. I'm going to say some things about the encounter data.
MS. IEZZONI: Okay. If we could get a copy of that.
SPEAKER ASCIUTTO: In terms of the member satisfaction surveys, we have an annual member satisfaction survey that we support. The long and the short of it is that in the past the survey methodology was managed individually by the MCO's and the plan, and that led to some issues regarding the usefulness of the results given that different survey protocols would have been followed.
One of the things that we've done with the recent effort was to correct that. We have one vendor, we have one instrument, and we have one data collection methodology. We're working with the Center for Survey Research at UMass to manage that piece.
That's a useful project for us in that it's our main vehicle for getting comparative data from health plans on member satisfaction. One of the other activities that we're highlighting right now is the CAHPS survey on our PCC plan population.
Because we've had our general survey in place for at least two years, we felt that we wanted to test the CAHPS instrument and see how it behaves so to speak with our population so we can make an informed decision as to which activity best suits our needs in moving forward.
We did limit ourselves to the plan membership. We're going to be doing a couple of interesting things with that activity. We're surveying both adult and pediatric samples.
A sample of those individuals will also be getting the special needs supplement that accompanies the general CAHPS survey. That's going to allow us to do a couple of things, such as comparing what we believe to be categories of assistance as found in our eligibility files with what persons self-report as special needs or chronic conditions.
We're also going to be able to delve into a little more detail about experiences that persons with disabilities have within the plan.
We are doing some testing with respect to survey methodology. This supplement, itself, almost doubles the length of the survey.
We want to see what effect that might have on response rates. We're also doing a couple of tests with respect to distribution protocols in order to see what effect those might have on response rates.
Can we increase response rates with extra steps for follow-up, with attempts to get back to persons a third time perhaps? It's also going to give us a unique opportunity, as I stated earlier, to compare results from the CAHPS survey to the general member satisfaction survey and, again, help us make informed decisions as an agency about how to support member satisfaction activities moving forward in terms of survey distribution and content.
On HEDIS, the Division is currently in its fourth year of collecting HEDIS information consistently across both the PCC plan and the HMO's. We've tended to focus our HEDIS activity on DMA's agency goals, selecting measures that support our notion of where we need to be going as an agency.
One thing that we have found particularly useful, especially since the full set of HEDIS measures is intensive, is adopting a rotation of measures. This allows us to focus on a small set of measures in a given year, and it also allows us to develop improvement activities, using HEDIS as a vehicle to measure success over time.
Briefly, about deviations from HEDIS specifications. I say briefly because we could go on in this section; I just want to highlight a couple of examples. Sometimes it makes sense for the agency to deviate from a spec as found in HEDIS, and sometimes it makes less sense. There are certain trade-offs, especially if our interest is in benchmarking.
Continuous enrollment is one area where in the past we've deviated. We've used shorter continuous enrollment periods than those found in the technical specs of HEDIS.
We've done a brief study in the past and found that when we pull rates on a twelve-month versus a six-month continuous enrollment, we get very similar rates. So we found it's not that useful for us to have to deviate from that kind of spec.
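A minimal sketch of the comparison described above, assuming a simplified member record; it computes the same screening rate under a twelve-month and a six-month continuous enrollment window so the two can be compared.

```python
# Hypothetical member record: a set of (year, month) pairs of enrolled months
# plus a flag for whether the screening service appears in claims.

def continuously_enrolled(enrolled_months, year, window):
    """True if the member was enrolled in each of the last `window` months of `year`."""
    required = [(year, m) for m in range(13 - window, 13)]
    return all(month in enrolled_months for month in required)

def screening_rate(members, year, window):
    """Rate among members meeting the continuous enrollment requirement."""
    eligible = [m for m in members if continuously_enrolled(m["months"], year, window)]
    if not eligible:
        return None
    return sum(1 for m in eligible if m["screened"]) / len(eligible)

# rate_12 = screening_rate(members, 1997, 12)
# rate_6 = screening_rate(members, 1997, 6)   # the brief study found these very close
```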
An example of where it is important for us to deviate from the specs is the childhood immunization measure. We deviated from that this year because if we had not, we would have been in conflict with the DPH immunization schedule.
When we do use HEDIS, again, as with the satisfaction surveys, our focus is really on using that information to compare plans' performance to one another and, most importantly, to identify potential opportunities for improvement and potential best practices.
Benchmarks are actually an interesting thing to bring up. In HEDIS, we have found it very difficult to find benchmarks appropriate to our population, for many reasons. So that's one area where, as we pursue continued improvement, we're often relying on national rates for commercial populations.
On encounter data, with the data that we're collecting we're striving to be consistent in terms of specifications across the plans themselves.
That includes the HMO program, and the information will also include encounters from the behavioral health program.
This is our first round of data collection and database development, and our focus has really been on the reliability and the validity of the data that we collect, building procedures as Brian talked about yesterday in terms of testing and the quality checks and informing plans about things that they can do to improve.
Our first round will be completed by the end of June, which is when I think we will have a final set of specs that we feel most comfortable with and when we will have our final set of submissions on which to do final quality checks. The focus after June will be on the production of our minimum data set and a report of quality indicators.
One thing I can mention here is that another thing we've done recently is reduce the set of measures we focus on in HEDIS, knowing that we were building an encounter system on this side, so that measures we thought we could collect from the administrative data in the encounter system could move off into that project. It's a way of easing the burden on the plans as they report to us on a regular basis.
Also, the encounter data will be a potential source of case mix adjustment as we feel more comfortable that the system is built and meets our expectations on reliability and validity.
Briefly, on some PCC plan specific activities, the ones I'm going to speak to today deal directly with the information available to us through our claims system and how we make that information useful for us. Most of the activities are driven by our initiatives tied to quality improvement.
We have provider profiling activities we call PCC profiling. The thrust of that activity is quality improvement, and we have developed a series of reports to support quality improvement activities, both to measure and monitor improvement over time and to support specific activities, reported on a six-month cycle.
It includes both rates and member-specific detail: rates in terms of mammography and PAP smear screening, emergency room utilization, asthma admissions and hospitalizations. And in terms of member-specific detail, it provides our practices with information about persons who either have not received the intended service or members who have accessed services.
That means being seen in the emergency room or admitted to a hospital for an asthma-related episode, again, to support their activities in terms of developing reminder systems, improving follow-up care, and coordinating care with hospitals and other groups that help serve their members.
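A minimal sketch of the two kinds of output described, a rate plus member-specific detail, under hypothetical field names; the asthma detail uses the ICD-9 493 code family as a stand-in for whatever definition the actual reports use.

```python
# Hypothetical profiling report pieces: a practice-level rate and member-level lists.

def mammography_profile(members):
    """Return the screening rate and the members who have not received the service."""
    eligible = [m for m in members if m["eligible_for_mammography"]]
    not_screened = [m["member_id"] for m in eligible if not m["mammography_claim"]]
    rate = (len(eligible) - len(not_screened)) / len(eligible) if eligible else None
    return rate, not_screened

def asthma_er_detail(claims):
    """Member-specific detail: members seen in the ER for an asthma-related episode."""
    return sorted({c["member_id"] for c in claims
                   if c["place_of_service"] == "ER" and c["dx"].startswith("493")})
```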
In terms of things that we do at the plan level, not provider specific or PCC practice specific, we have an asthma quarterly report that shows us, actually, way back from '92, '93 moving forward the emergency room and hospital admission rates for asthma over time by quarter.
It also gives us estimates of prevalence and gives us some idea about an issue related to observation bed stays and how rates are changing in terms of utilization of observation beds for asthma.
We're also developing a similar report for emergency services utilization. Again, that data is used to trend or track emergency services utilization by plan members over time, to help us form quality improvement activities that we can focus on. That's the PCC plan.
Mary Beth is going to pick up with some activities specific to the HMO program and then wrap up with some concluding remarks.
SPEAKER FISKE: I just wanted to briefly mention before I get into the specifics here that Tony's reference to the HEDIS and the member satisfaction collection does cut across all the HMO's and the PCC plan, and he and the members of his staff have been key in coordinating those efforts with the HMO's.
So we have a lot of projects that cut across here and I think have provided us with valuable new information that we didn't have before.
I'm going to talk about activities that are specific to the HMO program and the way that we manage contracts with vendors as opposed to activities that we may be doing in-house.
The first and main focus here is that our quality improvement initiatives are linked to our management of the HMO program. We try to look specifically at our contract requirements. We have a very intensive focus on purchasing specifications, which you may have heard of before, in the way that we approach our contracts.
These quality improvement activities are designed to measure whether the plans are meeting our contract requirements or not. Our contract requirements are detailed in a number of areas and are established as stretch goals in a number of cases where we don't expect or don't believe that all plans are going to be there on day one; but we try to develop quality improvement initiatives with them to get us, over the multiple years of the contract, where we want to be in terms of management of our HMO's.
All the HMO's must perform annual quality improvement activities. These are amended to the contract so they actually become part of the signed contract that we have with the HMO's.
We most recently have ended up with about eight quality improvement goals each year, four that are standard; by that I mean, all the HMO's have a similar goal, topic and activities.
Some goals are more standard than others -- I can explain that in a minute -- and then three or four that are plan-specific goals.
The standard goals for the past year have been focused on well child visits, behavioral health access, behavioral health tools. There's been more variety within the behavioral health components of the activities depending on where we thought and the HMO's thought the work needed to be done for their particular plan.
We focus on serving members with disabilities and on basic operations like enrollment data because we send enrollment information to the HMO's on a calendar day basis. We wanted to make sure they were able to upload that information accurately, provide access to care if there were questions.
The sources for how we come up with these quality improvement goals and how we measure them are a number of areas, some of which have been mentioned earlier. One is the RFR responses, the original submissions that vendors made to the Division when they proposed a contract with us for our HMO program.
The current RFR responses for the current contract were submitted in 1994 for our three-year cycle, '95, '96 and '97. We're currently in the process of reviewing the next round of RFR submissions for our contracts that will start in July of '98.
So for the first year, we relied pretty heavily on the RFR responses as well as previous information that we may have had on those particular plans if they had contracted with the Division in the past.
We also look at previous years' goals and identify what activities we and the plans thought were helpful but were not complete. Oftentimes one year is not sufficient to develop, implement and remeasure an activity to see if there has been a positive impact on access or quality of care for the members.
And we also look specifically at contract management challenges where the HMO's are maybe having difficulty in terms of clarifying benefits that are in or outside of the capitation where HMO's may have different approaches for interpretation; and we want to make sure that people are getting consistent access to benefits.
Information that I'll go into a little bit later relates to what members' health needs are when they come into the HMO. We have limited information about the health status and health needs of members who come to us through SSI applications for MassHealth, and the information that we send to the HMO is therefore also limited.
So one of the challenges that we've been focusing on is how to get better information sooner on the health needs and the health status of enrollees and how to work with the HMO's to do that.
HEDIS data, which Tony already mentioned, we look at across plans. We look to identify how plans are doing relative to our other contractors, relative to our PCC plan and relative to whatever national benchmarks we have available.
Results of member satisfaction surveys we use in a similar way, looking to see where there are issues with access to care, access to behavioral health services, quality of care, time spent with providers, and whether information or providers are available that speak the member's language, those types of questions.
And other available data that we look at includes disenrollment information, people who voluntarily disenrolled from the plans. At what rate do they disenroll from the plan, and what were their reasons for their disenrollment? We actually don't have lock-in in Massachusetts.
By that, I mean individuals can switch out of a health plan at any given time. They're not required to stay in a plan for a minimum period of a month or six months or a year. We do have relatively low disenrollment rates though, 3 or 4 percent.
So we're able to manage that, and the HMO's are able to manage that without major changes in their population while still giving members access to move. And we look at the reasons for which they move which may be difficulty accessing transportation, difficulty accessing behavioral health, didn't like the doctor, difficulty with specialty care.
All those are standard codes that we have that our enrollment broker is actually the one who captures this information for us and provides it to us through our systems work as well.
So once we've established all these goals, how do we go about measuring HMO performance to them? The goals can be very different from one HMO to another in terms of the activities, but the timelines are the same. We go out and measure the HMO's every six months.
It is a very labor-intensive process to negotiate these goals and to negotiate the measures and the timelines. If we don't spend enough time working on that up front, then when we go out to do the evaluation, it creates more questions and confusion for everybody.
So that's one of the things I'm going to get into in a minute with lessons learned. The process for the semi-annual contract status meetings starts about a week or two beforehand, when we start getting these huge three-ring binders from the health plans with all the activities and the data they have supporting their activities on the goals for the past six months.
We review that material internally, divide it up among a number of individuals. We've started more recently to include clinicians that have experience from the PCC plan or from our fee-for-service program to review the HMO's asthma management goals, to comment on that and how it compares to what we're doing in the PCC plan or other types of activities.
We then go out, a large group of us with a large group from the HMO, and have about a four-hour meeting discussing these documents, asking questions, getting clarification and reviewing the activities, the timelines and the results, looking to see what they did and how it compared to what we expected to see, what they expected to see and to other HMO's.
Particularly when we have standard improvement goals across the HMO's, it's easier for us to say one HMO went a lot further in their efforts, or they're able to get better results than another.
It's difficult to do this at the six-month time period; really, we need a year to look at an activity across the board. We score each of the HMO's and rank them overall based on this. We score the HMO's on whether we think they exceeded the goal as it was negotiated, met the goal, partially met the goal or didn't meet the goal; and then we give them an overall ranking compared to each other.
We use that information in terms of how they rank to provide incentives for the HMO's for performance. If an HMO fails to meet a number of their goals across the board, we have frozen enrollment in that HMO for a short time or for a longer period until they've come back in with additional information on that.
We have also provided incentives for plans that score well. We've considered this in our rate negotiations with the plans, and we've also considered it in the way in which we assign members who have not chosen the plan, giving a preference to those plans that scored higher on our quality improvement activities than those that did not.
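A minimal sketch of scoring and ranking of this general shape; the four ratings come from the testimony, but the point values and the ranking mechanics are assumptions for illustration only.

```python
# Hypothetical scoring scheme for the four ratings described in the testimony.
SCORE = {"exceeded": 3, "met": 2, "partially met": 1, "not met": 0}

def rank_plans(goal_ratings):
    """goal_ratings: {plan_name: [per-goal ratings]} -> plan names ordered best to worst."""
    totals = {plan: sum(SCORE[r] for r in ratings)
              for plan, ratings in goal_ratings.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Example: rank_plans({"Plan A": ["met", "exceeded"], "Plan B": ["partially met", "met"]})
# returns ["Plan A", "Plan B"]; a plan missing most goals might then face a temporary
# enrollment freeze, while higher-ranked plans get preference in member auto-assignment.
```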
But I won't pretend that all of this information is so objective and easy to score and that it's easy for us to determine whether one plan has done significantly better than another plan on some of the goals.
A number of the things that we try to look at are: how different were the goals? How much level of effort was involved? We don't want to penalize a health plan that decided to take on a very challenging activity, where we all think there is a lot to be learned, and that has quote-unquote failed or didn't get the results we had all hoped for, and only reward plans that try easier things.
So there's a lot of discussion in this negotiation process and in the meetings about how are we going to do the scoring? How do we evaluate the plans?
As for some of the lessons that we've learned with this contract management approach, we use work groups with Division representatives and HMO representatives to help on a number of the standard improvement goals as well as some of the activities that cut across plans.
But some of the important things that we focus on here is to decide on the goals and measures early; meaning, we're trying to do this every year. We're trying to do it on a six-month cycle. If we're evaluating plans in June and December and we're trying to start new improvement in January or July, depending on when our time cycle is, we don't have a lot of time to assess performance and then start planning for the next year.
And that's always a challenge; in the transition, before you know it, you lose two or three months of the next year.
Again, as I mentioned, we try to link the quality improvement to the contract requirements. We have a number of contract requirements on access to care, timeliness of appointment visits, care management programs for asthma, for special populations, those types of things that we can't measure as easily through HEDIS or that we can't measure through member satisfaction, a supplemental approach to the data that we have available in-house.
We try and be very specific on expectations. There have been a number of cases where we get to a meeting, and we expect to see a certain thing from the HMO's; and they were interpreting the same exact words on the paper very differently. We don't like to be in that situation.
We'd like to have more regular meetings with the HMO's, quarterly meetings on specific problem goals, so no one's surprised when we get there, because it's a very difficult dynamic. On the one hand, you're trying to have a brainstorming session and a collaborative session on quality improvement: what can we do to better provide care for MassHealth members?
On the other hand, we are there to score their performance. There's a difficult dynamic on that. We try to be aware of data limitations and try to identify resources and processes that we need to establish ahead of time and the HMO's need to establish ahead of time for each stage of the process.
As I mentioned throughout, collaboration with the HMO's is a must. We often try to coordinate with them on activities that they may be doing for their commercial population, where it would be relevant for MassHealth for them to collect similar information or slightly modify their programs and materials for MassHealth, so that we're able to capitalize on resources that the plans are dedicating to specific improvement goals corporate wide and to develop appropriate incentives.
As I had mentioned, we have both financial and volume incentives for the contract management. The last two sort of pages of the handouts relate to data challenges and data strategies that cut across all of what we've talked about here and not just the HMO program.
One of the things that we're having challenges with now is collecting birth weight data. Having birth weight data available to us and being able to link that data to Medicaid enrollment filings is difficult.
And Tony and Marjorie could talk about that in more detail than I can about our efforts in Massachusetts to try and link that information and come up with baseline data on birth weight for MassHealth across different managed care plans.
It's also a challenge for us to collect reliable race and ethnicity data. I don't think that's a surprise. That's one of the things we look at; we have different sources of data, as Marjorie mentioned. We do ask this question on our applications, and we've been in the process of changing our application for MassHealth to try to better collect some of this information on race and ethnicity.
It is still an optional field, so members decide whether they're going to submit that information or not. It's difficult for us; we can't deny somebody eligibility for MassHealth services because they didn't fill out information on race and ethnicity.
We're trying to come up with an appropriate way to capture that information, one that allows people to identify with a group and feel comfortable with how they fill out the information, while at the same time providing us with data that we can capture in a system and look at patterns across. Do you have five different fields from which people can choose, and then another?
Do you have 30 different selections? How do you deal with that? We think at this point that we may be getting better, more detailed information, on language at least and on some of these other race and ethnicity issues, in our member satisfaction surveys.
So one of the things that we've been looking at is trying to compare survey information, where we ask people to self-report, to our eligibility or enrollment information. In a recent review, when we did the member satisfaction survey, we allowed people to submit it in either English or Spanish; we had two different versions.
And of the people who responded using the Spanish version, 85 percent were individuals that we had identified on our claims system as being Spanish speaking. So we thought that was a pretty good percentage to have captured.
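A minimal sketch of the comparison described, assuming hypothetical identifiers: among members who returned the Spanish version, what share were already flagged as Spanish speaking in the claims or eligibility data?

```python
# Hypothetical match-rate check between survey language and administrative language flag.

def spanish_match_rate(responses, claims_language):
    """responses: [(member_id, version)]; claims_language: {member_id: language_code}."""
    spanish_respondents = [mid for mid, version in responses if version == "spanish"]
    if not spanish_respondents:
        return None
    matched = sum(1 for mid in spanish_respondents
                  if claims_language.get(mid) == "spanish")
    return matched / len(spanish_respondents)   # the review cited roughly 85 percent
```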
The other things that we're looking at for data challenges is trying to identify persons with special needs. As Tony had mentioned, we had information on categories of eligibility, meaning whether a person is eligible due to a disability or due to some other reason. So it's very black and white. You know, who's on SSI, and who isn't?
Beyond that, we don't always have very much information, and we don't get a lot of disabling condition information from SSI; although we've been working with them to try to collaborate, and we have gotten some information about the disabling condition that may have been the reason for their SSI eligibility.
That disabling condition may not be the most relevant for what those members' health care needs are or their utilization is. There may be other disabling conditions that aren't their primary condition or that have more health impact.
The other information that we look at, we have claims data, and we can look at utilization and try and identify people with disabilities based on claims; but then some of that information is not collected there either.
Particularly when you're trying to look at functional status, or issues around whether somebody is paraplegic or has HIV or AIDS, oftentimes that information you're never going to find on the claims. And you all probably know more detail about that than I need to go into here.
And then again on the surveys, to the extent that we can find this information on the CAHPS survey or other vehicles to try and find out who's disabled, what does this mean and how do we use that information to go forward with providing access to care?
Availability of standardized measures and tools: as we were mentioning throughout, benchmarks are helpful, but there aren't a lot out there for our population. Standardized measures such as HEDIS are helpful, but there are only certain types of data that you can get from HEDIS, and what you can use them for is limited.
The same goes for CAHPS: trying to understand what we'll be able to get from that survey compared to what we get now. We did have difficulties in the past with response rates for the member satisfaction surveys that the HMO's did, or that we did with the HMO's.
Our in-house survey instrument and methodology seem to yield a much higher response rate, so that's something we're concerned about. There are also conflicting needs and requirements for information. We're trying to focus on contract management for our specific programs here, but there may be other requirements or needs for information at the federal level; how do we address getting the minimum that we need for the federal requirements and the minimum that we need for our contract management, given the system and management constraints that we have?
All of these things that we do are very labor intensive. And as we try to prioritize our tasks, we often run up against what's the minimum that we have to do here, and how do we prioritize? Unfortunately, that's the reality.
Then the data strategy in MassHealth, the summarizing page here, talks about how we go about prioritizing our tasks and information. We try to prioritize our data collection to maximize impact. We look at coordinating data collection across all types of managed care plans where it's appropriate, and that's been very valuable for us.
We try and understand the sources and the uses of the data and make sure we're being consistent with how we're doing that, and we try to develop systems and approaches that support improvement activities and contract management.
So that's a brief overview, or maybe not so brief an overview, of how we approach it, and any of us can respond if you have a specific question.
MS. IEZZONI: Thank you. Let me just say that one of the reasons that we came to Massachusetts is that we were told that you all are among the leaders in the country in your data approaches for Medicaid managed care. So we've heard a little bit about why that is the perception.
That said, I suspect that we have some questions for you. Yes, Paul and then George.
MR. NEWACHECK: Yes, this is very impressive. I'm quite impressed with the data collection and the analysis efforts you guys did. I would be curious as to your sense as to where you stand vis-a-vis other states in terms of degree of sophistication in your approaches to collecting data and analyzing data on the Medicaid managed care population.
But I do have another question. Does anybody want to respond to that one first, though?
SPEAKER FISKE: Well, I can start. Before I came to Massachusetts, I worked in Washington D.C. on Medicaid managed care issues and also worked for a large managed care corporation. So I spent some time flying around the country and talking to other states about managed care in Medicaid, but that was a number of years ago now.
I think Massachusetts is more advanced than a number of states in our ability to collect the data and to analyze the data for managed care. We've been doing this maybe longer than some of the other states have. We also have an advantage to the extent that we have a number of active groups in New England. The New England HEDIS Coalition has been very strong on the commercial side.
The Massachusetts Health Purchasers Group have done a number of things on quality improvement. So to the extent that we can collaborate with activities that are already going on here in this state I think we have an advantage over other states, particularly some of the states in the south that may not have had HMO's around for a long time either or HMO's involved in the Medicaid programs.
The HMO's that we contract with have all been accredited for a long time and have been working with the Massachusetts Medicaid program since the early eighties in a voluntary environment.
That's where I think we have some advantage, besides the overall investment the state has always attempted to make in our data systems, our processing and our staffing. I think we may have more staff, even though it feels like we don't, than other state Medicaid agencies.
MS. IEZZONI: Paul, you had another question?
MR. NEWACHECK: Yes, my other question concerns the goals and ranking system that you've developed. I want to know how receptive the HMO's have been to that approach because I think it is somewhat unique at least at the state level and how responsive they are to the penalties and rewards that you offer depending on how well they meet their goals.
SPEAKER FISKE: I think it depends on the HMO to be honest in terms of how well they accept this and how well they do going forward.
There are a number of HMO's; the ones that we contract with now have been working with us and with this approach for a number of years. And what I've heard when I've gone around to the HMO's is that for the most part they feel it is a very useful process for them.
It helps them identify resources internally and to coordinate activities specifically for this contract. A number of things that we've done with them they have also extrapolated into their commercial membership. So it's not necessarily something that's only applicable to MassHealth.
We do have debates about scoring. Just about every plan at one time or another, at least since I've been here, which is two years, has come in to lobby or to get better information or understanding as to why they were scored a particular way on a particular goal.
And as I mentioned, some of it can be subjective as to whether they achieved what we all had thought they intended to achieve when we started out.
And there have been suggestions for improvements. We're constantly trying to improve the process and to work collaboratively with the HMO's to do that.
So I think overall they're very responsive. The ones that aren't responsive we've frozen enrollment, or they've left the program. So we may have a selection bias going here as well.
MR. NEWACHECK: Thank you.
MS. IEZZONI: George.
MR. VAN AMBURG: I have a couple of questions too. Marjorie, twice and maybe three times in your presentation, you emphasized that you're promoting uniformity in the data collection, not standardization. Could you elaborate on what you meant by that?
SPEAKER PORELL: Well, even in the process with Medstat, we came up with a standardized set of data elements, and there's agreement on those; but what we find is that we'd still like some potential for flexibility. For example, DMA tends to use its own provider I.D.
We've relied heavily on a provider I.D. which is not the tax I.D. of the provider; it's a unique Medicaid number for the provider. We also have another set of provider numbers related to our managed care providers.
When we get into negotiations about pulling data from our HMO's and from our own data, it becomes difficult, and then we work with Medstat to come up with some way to pool the information, because for our purposes the PCC plan is submitting data for the minimum data set at the same time as the HMO's are, and that can be quite extensive.
We also noticed that for the tables that are considered to be standard, for example for place of service or provider type off the HCFA 1500, Medicaid has its own provider type listing. It's easy to do crosswalks on this; we just needed some flexibility rather than a mandate that you must put a HCFA 1500 provider type in a particular element.
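A minimal sketch of what such a crosswalk amounts to in practice; every code shown is made up, and the point is only that the mapping is a small lookup table, so uniformity can be achieved without mandating that submitters store the standard code natively.

```python
# Hypothetical crosswalk from an agency-specific provider type to a standard claim-form code.
MEDICAID_TO_STANDARD = {
    "PT01": "11",   # made-up example: office-based physician -> office
    "PT07": "23",   # made-up example: hospital ER provider -> emergency room
}

def to_standard(provider_type):
    """Return the standard code, or a catch-all for anything unmapped."""
    return MEDICAID_TO_STANDARD.get(provider_type, "99")
```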
MR. VAN AMBURG: Where are you on responding to the forthcoming HIPAA standards for transactions?
SPEAKER PORELL: You mean for the 64 HCFA, the sampling data sets that we send?
MR. VAN AMBURG: No, the electronic medical transactions.
SPEAKER PORELL: I'm --
MS. IEZZONI: This is actually an important issue that I talked to Brian Burwell about yesterday. The Health Insurance Portability and Accountability Act that was signed by President Clinton in 1996 requires that, systemwide throughout the United States, every health care transaction of specified types, including encounter data, follow specific transmission standards.
And Brian had indicated that they haven't thought about those in designing the program for you guys --
SPEAKER PORELL: For the encounter project?
MS. IEZZONI: Yeah. But in fact, those standards are supposed to be implemented throughout every health care transaction. The standards include a standard provider I.D., Marjorie, right?
MS. GREENBERG: They hadn't been implemented as of yesterday.
MS. IEZZONI: But it's something that everybody is heading toward. So George, that was the focus.
SPEAKER PORELL: In that regard, to be honest with you, I think we do need to work internally to set up protocols. I think it's going to be very helpful as we move forward with the encounter project to negotiate our way through it.
Our provider I.D.'s are probably the weakest part; I'm not sure if that's the case with most systems. Internally, we do well, and we report well. But if we had to merge with Medicare data, for example, and look at similar providers, we would have a hurdle to work on.
MS. IEZZONI: George, I know you have another question, but let me underscore that it's probably going to be hard for some of your plans if the rest of the system is asking them to report using the HIPAA standards and Medicaid is not.
There could be a real disconnect there that could be hard for some of the plans that you contract with. George, you have another question?
MR. VAN AMBURG: Actually, I've got two. This one is for Anthony. In the encounter data section, you had indicated that you are collecting data and quality indicators, and in fact that you are developing your own quality indicators from the encounter data set to relieve the plans from having to submit the quality indicators themselves. What quality indicators?
SPEAKER ASCIUTTO: Those are the questions I fear the most. My memory is terrible. We're starting with the set of administrative measures for HEDIS that we can collect. The mental health and chemical dependency measures, access to primary care providers, those are the ones that -- well child potentially is part of it.
I know some plans rely on medical record review for some of those measures; the ones we're starting with are the measures that can consistently be collected from administrative claims.
I didn't bring the full set, and that's one of the things that I don't store.
MS. IEZZONI: Is it possible for us to get a copy of that?
SPEAKER ASCIUTTO: A draft of the indicators that we think we will develop?
MS. IEZZONI: Yes.
SPEAKER ASCIUTTO: I think we can do that with the specs.
MR. VAN AMBURG: The last question is for Marjorie. You obviously are running a data services unit. I'm kind of curious about your staff composition. I asked the same question in Arizona: with respect to statisticians, people with the knowledge to analyze and probe data, how do they relate to systems analysts versus programmers? What kind of staffing do you have that way?
SPEAKER PORELL: Well, the systems department is split in different ways. We have a unit that deals with just the whole issues related to our claims processing and the changes that happen because of whatever contractual issues we deal with.
We are the unit that is responsible for putting together any files which are used for reporting, monitoring HEDIS data collection, survey collection of recipients for continuous enrollment.
So for our staff, we have one statistician, but most of our staff are at the MS level, from a senior analytical point of view. There are not a lot of us, and then we have hard-core Natural programmers. Our claims data and recipient data are stored in Natural, so we rely on Natural programmers.
They're very traditional programmers. We also have SAS programmers, and it's very difficult to get them. We take snapshots off the claims system, which we use for the quarterly benefit plan production, and we run a lot of analysis on them.
So it's essentially a snapshot of the claims up to that point and a snapshot of the recipients. As you know, our recipient eligibility is day specific, which is a whole nightmare in its own right.
I mean, it's hard, but it's wonderful because we can tell at any point on a claim who your PCC provider is. We can tell you if you're with an HMO. And for wraparound services, for example drugs, we'll know that you are in an HMO. So we're able to be precise on that.
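A minimal sketch of a day-specific lookup of the kind described, under a simplified span structure that is an assumption, not the actual recipient file layout.

```python
# Hypothetical day-specific eligibility lookup: which plan was in effect on the date of service?

def plan_on_date(spans, service_date):
    """spans: list of {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD", "plan": ...}.
    ISO date strings compare correctly as text, so no date parsing is needed here."""
    for span in spans:
        if span["start"] <= service_date <= span["end"]:
            return span["plan"]
    return None

# plan_on_date(member_spans, "1997-11-03") might return "PCC" or "HMO Plan A", which is
# what lets a wraparound claim such as a pharmacy claim be tied back to the right plan.
```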
But on the SAS side, most are MS-level senior analytical types who work with staff including Mary Beth and Tony. And we contract, frankly, with outsiders for the more, quote, rigorous academic efforts where an epidemiologist is needed.
We have a lot of contractual relationships with different groups, either large ones like Medstat or Mercer, our actuary, or smaller ones, for gerontology for example, and we work with their staff in designing the applications that are needed.
MS. IEZZONI: Other questions? Hortensia.
MS. AMARO: You mentioned that you had some problems in collecting race and ethnicity data. Can you tell us a little bit more about that? Is it missing data or categories or what?
SPEAKER FISKE: It goes back to what I was saying earlier in terms of trying to understand the best way to ask these questions, to make it comfortable for people to respond to them and to categorize the answers when you receive them.
What we've been hearing from a number of outreach workers and different groups in the community is that people are much less willing to fill out that type of information on applications, that they either don't want to or are afraid to or think that we may report it to INS or whatever else in terms of if it is linked to their immigration status or linked to their determination of benefits.
MS. AMARO: Race and ethnicity or immigration status?
SPEAKER FISKE: Immigration status is separate from that, but we think that there's also concern about people filling out some of that information as a result of overall anxiety about the immigration status issue, not necessarily that those are linked, but just in terms of peoples' concern for filling that out.
One of the other things that we've been focusing on, and I haven't been as involved in the work groups on this particular piece, is to figure out what's the best way to ask this information on our applications that people will be most likely to fill it out and that we would be most able to use that information in terms of reliability going forward.
We also don't get an application from anybody who is eligible due to SSI. So the information that we have on language or ethnicity for them is very limited, as far as I understand it.
And the more that we move away from some of the traditional Medicaid eligibility criteria into larger groups where we have now our 1115 waiver, and we have long-term unemployed and other individuals who are eligible for MassHealth in Massachusetts, we are trying to monitor how we may need to modify our application process and the questions that we ask to collect that information.
SPEAKER PORELL: I think it's important to recall that it's an optional data element on our enrollment form. That's where, when we speak of reliability, it's brought into question. It's still optional.
And legally, from what I was told, people don't have to answer that question. I mean, legally you cannot force someone to submit it; it is considered an optional item.
MS. AMARO: So all other items you can force people to answer, or you can deny them benefits based on that?
SPEAKER PORELL: There are other items on the application; but if I recall on the application, that's one of the clear ones marked as optional.
SPEAKER FISKE: We cannot deny benefits based on the fact that they didn't answer that question. There are other minimum fields that are required; and if they're not filled out, we could deny benefits or deny redetermination of benefits I believe linked mainly to income and SSN, but I'm not even sure of SSN.
MS. AMARO: I'm trying to understand this a little bit more. You have a way of asking race and ethnicity on the form, and it's marked optional. What's your missing data? Is that your problem? Nobody's answering it?
SPEAKER PORELL: It's not a large number that fall into the unknown category, but there could be an issue with how it gets into the system. In a face-to-face encounter, if a person refuses to answer, there will be an indication put on the record in terms of the person's enrollment --
MS. AMARO: So they don't put down refuse to answer? They try to fill it out?
SPEAKER PORELL: Right.
MS. AMARO: That's an issue though, sort of an instructions issue.
SPEAKER PORELL: And I think our bigger concern is this: if you aggregated enrollment information across the HMO's and the PCC plan in some logical way, I think we would have fewer problems with it. But we have some issues where people are getting down to a recipient-specific level.
We're concerned that if people use it at a recipient-specific level, we can't say 100 percent of the time that what's on that particular application or in our data file is reliable enough that we would be comfortable presenting it out to a large population.
SPEAKER FISKE: We also have two different groups: the large number of people who are currently enrolled in MassHealth and have been enrolled for a long time, and then the people who are coming in and filling out the new forms, where the questions are asked differently; so we may be getting different information now, hopefully better information, as we go forward.
But then you deal with the rest of the population that you have enrolled as well as I mentioned before, we don't have this information from people who are eligible due to SSI because we don't get any applications from them. We just get that data directly from SSI.
MS. AMARO: I'm trying to get a sense of what the problem was. Does that mean that you're not able to sort of track the effects of the managed care program across population, in this case, by race or ethnicity?
SPEAKER ASCIUTTO: If we have to rely on the information that's found in our eligibility files, we feel less comfortable with that kind of analysis. If we rely on information that is self-reported, say in the member satisfaction survey, then we have a little more comfort in looking at results across different groups.
So it is a matter of, you know, what is the activity, and where are we pulling the race and ethnicity information?
SPEAKER FISKE: And how many of those individuals do we think that we have in the managed care program overall and in each of the health plans? When you get to a smaller number of individuals or groups, how reliable is that information?
MS. IEZZONI: Can I ask one follow-up question on that topic, and then we're going to have to stop I think to start with the next panel.
We heard yesterday an interesting presentation about how many Haitians there are in Boston and on the managed care program who speak Creole. We also heard about Portuguese as the language that people speak.
We understand that your satisfaction survey is in English and Spanish; and the populations that I've just referred to, the Haitians who speak Creole and Portuguese, they have their own special access issues.
Do you have any initiatives to try to address problems from those two populations or other populations that aren't Spanish or English speaking?
SPEAKER ASCIUTTO: When we do a broad-based survey, we do include a Babel card, which I think presently lists thirteen languages, indicating that for this information you may need a translator.
So at least presently, for those populations where we don't have an instrument targeted to them, we do have that card to highlight it.
MS. IEZZONI: And what are your response rates for that?
SPEAKER FISKE: It would come back in the English version, so we wouldn't know -- we don't know --
MS. IEZZONI: You won't know.
SPEAKER ASCIUTTO: We often find a bit of a mismatch for what we believe to be the case in our eligibility files compared to what we find in self-report. I'll give you a big example.
We send out surveys, and we know these people belong in this plan, Plan A or B or C. We get surveys back, and virtually 25 percent say they belong to another plan. So there's, there is inconsistency at that level.
When we get down to a specific program or activity, asthma for example, we had a pilot program recently where we developed improvement activities at four sites. One of those sites had a large patient population.
That program worked to develop materials specifically for that population; and when that activity happens, we try our best to obtain those materials and then make them available more broadly to other practice sites who can take advantage of that.
SPEAKER FISKE: That's similar to what I was going to say for the HMO quality improvement activities. We also require the HMO's to include those types of Babel cards with the information that goes out, in other languages, directing members to get assistance in having it translated or to call the HMO for assistance or more information in a language that they speak.
All the HMO's that we contract with do have access to the AT&T language line, but that's limited more for people that are calling up. It's easier to use obviously than face to face, and the quality improvement activities that they have, often times they'll develop specific brochures or information for certain populations that speak other languages.
One of the health plans that we have has actually a large Khmer population. So they've done a number of things with Khmer providers to provide translation and work with that group. So it depends also on the service area and the members enrolled in that health plan.
MS. IEZZONI: This has been extremely helpful. I'm sorry that we had to rush you so, but we really thank you for taking the time. And if you could just look into whether we could get some of the information that we talked about, the Medstat specs; and if you want to learn anything more on HIPAA, you can log onto our Committee's web site.
SPEAKER PORELL: Thank you very much.
MS. IEZZONI: Thank you. I'd like to rush right on to the last panel because we do have four speakers, and we have an hour. Here's our first speaker.
(Discussion off the record.)
(The testimony of Dr. Bergman, Dr. Upshur are not able to be transcribed due to the speed of their testimony resulting from their time constraints and Dr. Chan's speed coupled with his heavy accent.)
(Whereupon, the proceedings adjourned.)
C E R T I F I C A T E
COMMONWEALTH OF MASSACHUSETTS ESSEX, SS.
I, Christine L. Larkin, a Notary Public duly commissioned and qualified in and for the Commonwealth of Massachusetts, do hereby certify that the preceding transcript is a true transcription of my stenographic notes taken in the foregoing matter taken to the best of my skill and ability.
IN WITNESS WHEREOF, I have hereunto set my hand and Notarial Seal this ________ day of _______________, 1998.
CHRISTINE L. LARKIN
Notary Public
My Commission Expires: May 8, 2003
THE FOREGOING CERTIFICATION OF THIS TRANSCRIPT DOES NOT APPLY TO ANY REPRODUCTION OF THE SAME BY ANY MEANS UNLESS UNDER THE DIRECT CONTROL AND/OR DIRECTION OF THE CERTIFYING REPORTER
COPLEY COURT REPORTING