[This Transcript is Unedited]

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

AD HOC WORK GROUP FOR SECONDARY USES OF HEALTH DATA

August 23, 2007

Hubert H. Humphrey Building
200 Independence Avenue, S.W.,
Room 305A
Washington, D.C.

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway, Suite 180
Fairfax, Virginia 22030
(703)352-0091


P R O C E E D I N G S [9:05 a.m.]

Agenda Item: Introductions and Overview

DR. COHN: Okay, good morning. I want to call this meeting to order. This is
a meeting of the Ad Hoc Workgroup on Secondary Uses of Health Information of
the National Committee on Vital and Health Statistics. The National Committee
is a statutory public advisory committee to the U.S. Department of Health and
Human Services on national information policy.

I am Simon Cohn. I’m Associate Executive Director for Kaiser Permanente and
Chair of the Committee and this Workgroup. I also want to welcome Committee
members, HHS staff and others here in person, and, of course, welcome those
listening in on the Internet, and remind everyone, as always, to speak clearly and
into the microphone, especially given the nature of the logistics of this room.
I think if we’re not careful, we’re not going to be able to hear each other
much less those on the Internet.

With that, let’s have introductions around the table and then around the
room. For those on the National Committee, I would ask if you have any
conflicts of interest related to any of the issues coming before us today,
would you so publicly indicate during your introduction.

I want to begin by observing that I have no conflicts of interest. Harry?

MR. REYNOLDS: Harry Reynolds, Blue Cross Blue Shield of North Carolina, a
member of the Committee and Subcommittee, no conflicts.

MR. ROTHSTEIN: Mark Rothstein, University of Louisville School of Medicine,
member of the Committee and the Working Group, no conflicts.

MS. JONES: Monica Jones. I’ve come over from the U.K. for this meeting
today. Thank you very much for the invitation, and I don’t think I’ve got any
conflicts.

DR. OVERHAGE: Marc Overhage, Regenstrief Institute, Indiana University
School of Medicine, a member of the Committee and the Workgroup, and no
conflicts.

DR. STEINDEL: Steve Steindel, Centers for Disease Control and Prevention,
staff to the Workgroup and Liaison to the full Committee.

DR. W. SCANLON: Bill Scanlon, Health Policy R&D, member of the Committee
and of the Workgroup, no conflicts.

DR. FITZMAURICE: Michael Fitzmaurice, Agency for Health Care Research and
Quality, Liaison to the full Committee, staff to the Workgroup.

DR. ANDERSON: Kristine Martin Anderson from Booz-Allen & Hamilton and
contract support for the Workgroup.

MS. GRANT: Erin Grant from Booz-Allen & Hamilton, contract support for the
Workgroup.

DR. VIGILANTE: Kevin Vigilante, Booz-Allen & Hamilton, member of the
Committee, no conflicts.

DR. DEERING: Mary Jo Deering, National Cancer Institute, staff to the
Committee and to the Workgroup.

MS. JACKSON: Debbie Jackson, National Center for Health Statistics, committee
staff.

MS. AMATAYAKUL: Margret Amatayakul, Contractor to the Workgroup.

DR. CARR: Justine Carr, Beth Israel Deaconess Medical Center, member of the
Committee and the Workgroup, and no conflicts.

MS. VIOLA: Allison Viola, American Health Information Management
Association.

MS. WOOSTER: Laura Wooster, Blue Cross Blue Shield Association.

MS. INGARGIOLA: Susan Ingargiola, Manatt Phelps & Phillips.

DR. COHN: I want to welcome everyone. As I said, this will be an up close and
personal session today. Let me just make a comment about logistics. Obviously, this
room, I think, is set up well for the moment for taking testimony; as we begin
to get into conversation later on this afternoon, we’re going to be moving
people into more of a round-circle arrangement. But certainly, for the
testimony, we’ll obviously work with the room as it is; just be aware that we
will be moving things around as the day goes on.

Let me make a couple of opening comments. Obviously, today marks the
beginning of the third set of hearings on secondary uses of health information.
Just to remind you, specifically we’ve been asked by the U.S. Department of
Health and Human Services and the Office of the National Coordinator to develop
an overall conceptual and policy framework that addresses secondary uses of
health information including a taxonomy and definition of terms as well as
develop recommendations to HHS on needs for additional policy, guidance,
regulation and/or public education related to expanded uses of health data in
the context of the developing nationwide health information network with, as I
think we’ve discussed, an emphasis on the uses of the data for quality
improvement, quality measurement and reporting.

I want to thank Harry Reynolds and Justine Carr for being willing to be
co-vice chairs, and I will be turning the session over to Harry in just a minute
to facilitate the day’s hearing.

I obviously want to thank the rest of you, including some who are not here today,
such as Paul Tang, as well as Bill Scanlon, Marc Overhage, Mark Rothstein and Kevin
Vigilante, for being willing to serve on this Workgroup. We obviously also want
to thank all of our support, including Erin Grant, Kristine Martin Anderson and,
of course, Margret A, who has been my lead support in helping us move to this
point.

Of course, also our liaisons, John Loonsk isn’t here today, but Steve
Steindel and Mary Jo Deering, and of course our Debbie Jackson, who is not a
liaison but is also key support. And of course the other support from NCHS that
has really made this possible today.

As I have said previously, we’ve obviously spent a lot of the summer on this
project. I want to thank you for giving up your vacations and saying goodbye to
your families on such a frequent basis all summer.

Having said that, I was struck recently, as I talked about this on a
Sunday with Margret A, by how much it’s going to take to get from where we
are today to recommendations in draft form for discussion with the full
Committee in late September, and from there to final recommendations in the
October timeframe.

So I think you’ve all been queried about possible dates for additional
conference calls. Obviously, we have a lot to do to get from where we are today
moving forward.

Now, about the agenda today, I just want to remind everybody that we’re trying to
do two things at once. We’re talking about a broad framework that
relates to secondary uses, but we’ve also been asked specifically to drill in
to the issues that relate to quality measurement, improvement and reporting.
So we just need to be aware that we’ll be talking broadly, then narrowly, then
broadly again, and everybody just needs to be able to tolerate that movement up
and down the levels of specificity.

I know that many of you have been thinking about how we put all this
together. And having been in many other processes like this, I think it’s a
valiant effort. However, I do want to remind everyone that we’re not
double-tilling yet, and I fully believe that we are going to be having very
interesting information coming from today and tomorrow that will help
illuminate our thinking in all of this. So I would just caution you to keep
your minds open and be aware that we are probably going to be reconsidering
the overall frameworks and conceptualizations of how all of this comes together
five or ten times between now and the end of September. And so I would just ask
everybody to keep an open mind, listen, and not come to judgments or opinions
too quickly in this whole area, because I think there is much here that is
nuanced and that we need to tease out as we move forward.

Now the agenda today is, I think, very interesting. I’m very pleased to have
Monica Jones start off with a perspective from the U.K. For those of us who
attended the American Medical Informatics Association meeting, I guess it was
in late springtime, we heard from a colleague of yours about what was going on
in the U.K., and I think it once again provides a different perspective about
how we deal with all of this, maybe not so much the subject matter, but how you
deal with mitigating risk, other approaches. And remember, part of our role here
is also to look at tools, techniques and approaches to help mitigate risk in all
of these areas as we identify that there are risks. So I think a different view
from a different country will be very helpful at this point.

Then we move into a discussion that I think begins to push the
envelope a little bit, because we’ve talked about the whole issue of
quality versus research and commercialization, and how even to talk about
those issues. And I think we’ll find in the second set of hearings that there
will be some discussion that begins to cause us maybe to look at this, maybe
rethink what quality is, where the barriers are, or whether there are any
barriers if all of this really begins to merge together. But I think it will
hopefully be an interesting set of discussions.

From there, we move right before lunch to a discussion about
de-identification, and whether in this world de-identification is really
possible, which I think should also help inform our thinking.

This afternoon, we talk about additional technical solutions for various
issues that I think we’re going to need to be thinking about related to both
consent and other issues around risk mitigation.

And then the very important issue of communication. Now I would just remind
you that communication is a tool. It goes along with transparency. It goes with
trust, and I think we need to be a little more grounded in all of that.

Now, at around four o’clock we have two final items: we are going to
have an open microphone, and then we’re going to go to committee discussion
for the remaining time until about a five-thirty adjournment.

Just to remind you, we do start tomorrow morning bright and early at eight
thirty, but we’ll have everybody out by twelve thirty, and as I said, later
we’ll talk some about future meetings, hearings, sessions that we will be
holding as we move from these conversations into actual report generation.

Now with that, Harry, I will turn it over to you and ask if you have any
opening comments before we proceed.

MR. REYNOLDS: No, we’re right on time, so we’ll stay there. So Monica, we’re
really excited to have you here today. So if you would please begin and then
we’ll hold all questions until you’re finished, and then we’ll start from
there. Thank you. We’re also going to hold all questions.

Agenda Item: HIE Experiences

MS. JONES: Good morning. I’d just like to say thank you very much for
inviting me over to Washington. I’ve basically been asked to give the
U.K. experience in terms of what we’ve actually managed to do with our
secondary uses service, which is part of an overall major program of work within
the U.K., our national program for IT. That is a ten-year program that is
essentially upgrading and putting a massive investment into the whole of the
NHS information infrastructure, which is very exciting, but it’s also very
difficult. And what I’d like to try and give you today is a bit of the
perspective in terms of what we’ve encountered, the type of difficulties,
and, picking up on the points that were made earlier, the tools and
techniques for risk mitigation.

I’m quite happy for people to either chip in or we can wait for question
until the end. But if I sort of come up with sort of U.K. type terms that
people aren’t sort of aware of, then please just let me know.

What I’d like to run through is essentially these six items. I probably need
to set the scene in terms of what my organization is which is the Information
Centre for Health and Social Care, and then really moving on to the purpose and
scope for secondary uses service, but also sort of exploring perhaps what we’ve
discovered there are also sort of primary uses for some of the secondary uses
data, and this has really sort of become apparent as we’ve actually gone
operational with our systems.

I’m going to give you sort of a very high level technical framework for the
secondary uses service, not getting bogged down in the IT, but really giving
you an idea of how the component parts fit together.

I’d then like to just touch on the regulatory, the legal and the ethical
policies that frameworks us, and how we’re sort of tackling those particular
ones. And as part of that, then I’ll touch a little on patient consent and then
really sort of wrapping up with the value to date, what our lessons learned and
what our future plans are.

So without further ado, what I’d like to show you here is that the Information
Centre for Health and Social Care is actually a special health authority. We’re
part of the English NHS; there is a Scottish one, a Welsh one and an Irish one
within the U.K., although there are huge amounts of synergy within that and very
close cross-border working. But I just want to particularly make that
point.

A special health authority is essentially an organization that is referred
to as an arm’s-length body to our Department of Health. It operates very closely
under the direction and policy of the Department of Health, but it has its own
governance and its own chief executive and staff within that organization to
make it accountable.

And we were set up in April 2005, and we took on some of the
responsibilities of previous organizations. One was the NHS Information
Authority, and the other was the Department of Health Unit for Statistics. So
as you look down these 12 services that I’ve tried to highlight for you,
you’ll see that it is not only about the collection and the collation of data;
it’s about the definition, it’s about defining what the standards are and the
interoperability, but it’s also about producing statistical returns and
publications, of which we do hundreds in a year. So that just gives you a
sense of the balance.

And speaking to Mary Jo over the last couple of weeks, she suggested I
make the comparison to NCHS, and I had a look at the website, and I think
we’re probably doing very similar things within the Information Centre in terms
of some of the elements that we particularly cover.

We are, interestingly enough, the Information Centre for Health and Social
Care, which is the first of its kind, certainly within the U.K. We’re
realizing that we want to try and move a lot of our health services
from being very hospital based out into the community and, wherever possible,
we want to expedite that through the use of data and information. So therefore
we’ve got to be able to cope with the boundaries as patients move across those
areas. And certainly the government at the moment is really concentrating very
much on health and well-being and, therefore, on the public health elements of
that.

I probably don’t necessarily need to say much about each of those 12 except
that an actual data set service which is essentially a department I sort of
head up which defines the data items mapping to classifications which I hope
you’re all familiar with, an ICD-10 and OPCS-4, and I’m very much moving
towards using clinical terminology such as SNOMED-CT coding.

And also the one in the box really which is the secondary uses of NHS care
record data, and the data sets work as the input specification for those.

DR. DEERING: Monica, you have information governance as a separate
“service.” Could you just say a word about that.

MS. JONES: Absolutely. That is predominantly a function for the
Information Centre. So it’s not really about providing the information
governance for the broader NHS. It’s more about making sure that we’ve
got our own house in order. But it’s really that, as systems come online, we’re
starting to actually tackle the whole of patient confidentiality and
security, to make sure that we’ve got the correct legal advice and that we’re
liaising with our already existing ethics committees, such as they are,
which are at various levels. We have a local ethics committee and then
a regional one, and then there’s actually a national one that is called COREC.
But we also have this concept that there is an appointed person within each NHS
trust who is the senior clinician responsible for the legal and ethical use of
data, and they’re referred to as a Caldicott Guardian, and they actually have
the ultimate role of making sure that the patient’s data is being treated in
the correct way.

So we liaise with that, but this information governance section
within the Information Centre is predominantly about making sure that
everything that we’re doing is within those legal and ethical boundaries.

DR. OVERHAGE: I’m sorry, I missed the word, Caldecott Guardian?

MS. JONES: Caldicott Guardian, yes.

DR. OVERHAGE: Okay, I’ll have to have you write that down for me later.

MS. JONES: Okay. It’s named after someone called Caldecott. Okay, what I
wanted to touch on now is really from an input or output perspective. I think
it’s really important, and I hope you’ll agree that we do have the sort of
correct tools and techniques for making sure that we’re able to standardize but
also use data and make sure that things don’t sort of fall through the cracks.
So we have this sort of concept that there are five different sort of types of
standards flowing from the left to the right.

So there is an input standard, which is very much about looking at
the existing systems and our local service providers within our national
program and actually capturing standards, making sure that they’re coded
correctly.

Certainly, since the implementation of our systems in the U.K., the
importance of actually coding correctly has become much, much more high profile
than it was previously. The poor little coders used to be sat in the dungeons
of hospitals being sort of ignored by everybody, and suddenly it’s become very,
very important and they sort of march into the CEO’s office and such as that,
and people have become much more aware of these things.

And then it’s about making sure that the data can flow. We have moved to
transferring data via XML schema. Previously, they went to sort of flat files
into a central repository. But this is about the validation and the
verification of the data and making sure that we keep the responsibility for
the quality of the data as well very much sort of with the providers and
keeping that sort of local ownership, while obviously there is a responsibility
at the central side of things.
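
A minimal sketch of the kind of schema validation this implies, assuming a hypothetical commissioning data set schema file and submission file; it is illustrative only, not the national program’s actual validation service:

```python
# Minimal sketch (not the NHS implementation): validating a provider's XML
# submission against a hypothetical commissioning data set schema before it
# is accepted centrally, so data-quality responsibility stays with the provider.
from lxml import etree

def validate_submission(xml_path: str, xsd_path: str) -> list[str]:
    """Return a list of validation errors; an empty list means the file passes."""
    schema = etree.XMLSchema(etree.parse(xsd_path))
    doc = etree.parse(xml_path)
    if schema.validate(doc):
        return []
    return [f"line {e.line}: {e.message}" for e in schema.error_log]

# Illustrative usage (file names are assumptions):
# errors = validate_submission("cds_inpatient_2007_08.xml", "cds_inpatient.xsd")
# A non-empty list would be returned to the submitting trust for correction.
```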

Then there is some data processing and communication, so there are standards
and rules associated with that. I apologize because there are quite a lot of
acronyms on this slide, but PbR is a system reform within the U.K. which is
payment by results. So it’s very much about the remuneration of services
appropriate to the service that is actually being delivered, which is quite a
step change within the U.K. particularly, in an all-public environment, and
I’ll touch on that later.

And then it’s about, so those rules associated with it, then the actual sort
of warehouses themselves that we’ve got housed centrally, the major one of
those is the secondary uses service. And so there are appropriate standards
associated with that.

And then there is the output, an extract data set. Quite often, we started
at the right-hand side in terms of defining what our real business
requirements are. There has been a program over the last five years of
setting up national service frameworks within the U.K., really to look at the
provision of service in particular specialties. Ten of these were set up, with
national clinical directors appointed to those particular specialties and
referred to as czars, really to look at where there was disparity of service
and try and even that out across the whole of England.

So there’s one in particular for cancer, for diabetes, for renal, for
children and maternity services, for older people, for – I can’t remember
the rest of them – and really it’s about focusing in on that from a
specialty perspective. And sometimes the requirement for an output data set
would come from that particular czar or clinical director. And so, therefore,
in terms of developing the standards and getting the data to flow through,
the output specification would be done first, and then we would tick the boxes
going through the five stages to make sure that everybody knew, because
there’s not one organization that does the end-to-end bit, unfortunately, and
it’s about making sure that somebody takes overall responsibility, even if they
don’t have raw ownership, to make sure that things don’t fall over at
particular stages.

And, in terms of this particular slide, it’s about making sure that
not only are we defining the standards, but we’re also in a position to
turn that data into information, and that we’re therefore able to actually use
it in the provision of care, both for a primary purpose and also getting the
benefits from a secondary purpose.

So moving on to the secondary uses service, it’s essentially a repository of
care data for the use in care planning, policy development, performance
management, clinical audit and medical research.

And those elements are coming online bit by bit. So it’s not a
big bang where you’ve then got absolutely everything, because our
infrastructure just wasn’t there in order to support that. And in terms of risk
mitigation, you wouldn’t really go with that kind of approach anyway.

So we had parallel systems that we were running previously and some systems
that just didn’t exist. But we knew and started to develop the concepts of what
we were going to bring into this.

So the aim is that the secondary uses service will provide a consistent
environment for data management, allowing better provision across the sector,
very much focused on the protection of confidentiality through rigorous access
control and the removal of patient identifiers from data transferred to the
warehouse, although there are associated issues with that, and we’re still not
wholly at the point where we’re pseudonymizing and anonymizing data yet. We’re
still running through some pilot studies, and we concentrate very much on the
existing laws and processes. There’s no point in reinventing the wheel and
putting in some kind of infrastructure or some kind of legal framework that is
already covered by existing legislation.

And the concentration is very much around the NHS Care Records Service. And
what we’ve started to do is, as each system comes online, there is a duty of
responsibility for the doctor, the general practitioner with whom the patient
is registered, who is normally the first point of contact, to make the
patients aware that data is actually being captured electronically and that
potentially it’s going to be used for other things.

It concentrates very much on the existing Data Protection Act, human rights
legislation and common law, and this brings in the responsibility of the
Caldicott Guardians, as I mentioned earlier, the existing ethics committees and
PIAG, which is a national body, the Patient Information Advisory Group. In
order to be able to hold and use patient-identifiable data, you have to have
PIAG approval, and it’s a pretty rigorous procedure to have to go through. Some
organizations will have standing PIAG approval, such as, say, cancer registries
or particular organizations that actually do need to have the identifiable
data. But they come up for renewal on an annual basis, and then they have to go
through a full rigorous review on a five-year basis.

Now we’ve recently published a document called the NHS Care Record Guaranty,
and you can actually download a copy of that from
www.nshcarerecords.nsh.uk, and
there’s a patient version and a practitioner version within that. It was first
published in May 2005, and this is also reviewed annually. And it’s published
in nine different languages, and it’s got an audio version as well.

And we ultimately aim to give patients access to their summary health care
records as well so that they can actually sort of check the details that are
being held on them are actually correct, but we’re not there yet.

So this is very much the mechanism that we’re using within the
existing laws and governance. But, you know, beyond publication, there is also
a policy that is an opt-out rather than an opt-in. So your data is always being
held on your record, really, in a paper format, and there is an
informed-consent type of process that goes through the GP as each of these
systems comes online. We’ve recently launched a picture archiving system,
essentially for digital x-rays. So as somebody is being sent for an x-ray, it’s
explained to them that this is now being kept as a digital record rather than
an analog record, the benefits of using that, and the fact that potentially an
anonymized version of that could be used for medical research. And at that
point, the patients are given the option to opt out, but there is an assumption
that your data will be used unless you actually opt out.

MR. ROTHSTEIN: Can you explain what they’re opting out of.

MS. JONES: They would opt out of having their data held electronically.

MR. ROTHSTEIN: All electronics?

MS. JONES: Well, no, the particular data sets and the secondary uses of it.

DR. VIGILANTE: If they’re asked at each opportunity, or – I’m sorry, if
the question’s actually posed at the time of use, in a sense it’s not a
complete opt out; there’s an opting –

MS. JONES: Absolutely, yes.

DR. VIGILANTE: So as you’re using it, and then if they become aware of it,
then they have to do something. It’s sort of you’re presenting it to them and
giving them the chance to opt out which is sort of a de facto opting –
it’s not a classic opting, but there’s an opting – the exercise. Isn’t it
right, I mean –

MS. JONES: There is, but it’s happening as – it’s an informed opting
out.

DR. VIGILANTE: Yes.

MS. JONES: The details, the absolute details are in this guarantee, and it’s
actually very sort of usable and meaningful. It’s quite useful if people want
to know more about that to get a copy.

DR. DEERING: Could you repeat that URL once more?

MS. JONES: Yes, it’s www.nhscarerecords.nsh.uk, and
it’s called the NHS Care Record Guaranty. It’s about a 12-page record, the
patient version. Okay, I thought I would actually sort of cover that within the
SUS rather than just taking it out of context. So the third bullet point we’ve
got there is access to timely data for analysis and reporting, and by
increasing the, using the technology to be able to get the data flowing on a
much more sort of timely way than obviously the effect is that you can do the
analysis and reporting in a more timely fashion.

And then finally, better data accuracy and a reduction in the burden for the
NHS. This concept of the burden of data collection and information provision is
a big thing within the NHS in England, and we’ve made a point of
concentrating the action on actually providing care and not making the doctors
and nurses spend all their time collecting data. And certainly one of the key
aims for the Information Centre is actually to reduce that burden.

So where people want to do a new collection or a survey, they have to
go through a pretty rigorous process, which is called a Review of Central
Returns, ROCR. That is a committee that actually sits and assesses whether
this is a reasonable request and whether there is an increase in burden
associated with the survey, audit or collection. In order for a collection to
become an NHS standard, you have to have ROCR approval as well as our
Information Standards Board approval. And the aim of that is to reduce the
burden, with the ultimate aim of moving things to taking them directly from the
national care record. So it’s collect once and share many times, and that’s
very much the principle behind that.

But there was a point in the late ’90s where we were just getting
terribly excited about collecting all of these things, and the burden
was just going up and up, and there was a backlash from the clinicians within
the NHS saying, we’re just not doing this, we’re not getting the chance to
treat patients and put our record into this. So you guys go away and work out
the best way to do it and to use the technology, and then we’ll comply with it.

So it seems to be working very well. The ROCR committee now sits within the
jurisdiction of the Information Centre, and we calculated that we reduced the
burden by 11 percent last year, and it’s getting better all the time.

So I’ve touched on some of these, but it’s probably useful to just put it
into context. The secondary uses service is but a small part of the national
program for IT. This was set up in 2002. It’s a ten-year program, and the
national application service provider who are the prime contractor is British
Telecom, and the contract was awarded to them in December 2003. There are a
number of subcontractors. There are a number of local service providers for the
provision of the systems across England that are a collection of consortia with
such main players such as CSC, Fujitsu, BT as well. I’m not sure that IBM are,
and then until recently Accenture and all that.

The data are transmitted by the Spine, which is a national grid
infrastructure. The first five years of the national program have been very
much the unsexy stuff. It’s been putting the infrastructure in place. It’s
about increasing the number of PC terminals within hospitals, to I think
up to 18,000 now, and getting a standardization of the services in our
patient administration systems, which were called PAS. It’s about putting in
broadband secure connections. It’s totally publicly funded across the whole of
England, getting connections between every single hospital and also between
every GP surgery and getting the data able to flow between those. So it’s the
nitty gritty of setting out your store before you can then start to put the
applications on top of it. And some of the suppliers, I think, have been quite
surprised at how little profit there is in that, if any. I don’t think BT has
made any profit yet, but they’re expecting it to go up over the next five
years, and I think that has had a knock-on effect in terms of some of the
suppliers changing their allegiance.

The main warehouse for the secondary uses service is being delivered by BT
as part of the national program, and it’s managed by us, well, by our sister
organization, NHS Connecting for Health. Now the split between the Information
Centre and NHS Connecting for Health is that they tend to deal with the IT and
the infrastructure and the applications and the developmental software, and we
have very much got the responsibility for the data and the information and the
flows and the standards and the legislation and the governance associated with
that.

In terms of the secondary uses service, it is very much a partnership
between the two of us, and as it says there, we’re particularly responsible for
the data definition, the analysis and the reporting.

We have a lot of statisticians working within the Information Centre, as well
as analysts: business analysts, systems analysts and information analysts. So
it’s those kinds of skills that are covered within the Information Centre,
which is over 400 people strong and rising.

SUS is the single NHS-wide system for processing commissioning data sets.
Now, Mary Jo has asked me to explain what a commissioning data set is.
Commissioning is essentially the purchase of services. So when a
patient goes to the GP, the money that is paid in order to be able to
provide that service comes from a primary care trust, and they are referred to
as the commissioner. So they, for their local area, commission the services.
They look at their public health, they look at their epidemiology, they look at
their populations, and they essentially request money from the government in
order to be able to provide those services, which is cascaded down through a
strategic health authority, of which there are ten within England. So the role
of the commissioner is really to make sure that they’re able to provide the
service, that they’re getting value for money, that they’re efficient, that
they’re effective.

But if you’re therefore referred from your GP to secondary care, say, from
primary care to secondary care, then the commissioning responsibility moves to
that secondary care organization. So it’s therefore the responsibility of the
acute trust or mental health trust or the ambulance trust that is providing
that secondary care.

And in order to do that, we have a set of data standards called
commissioning data sets, and they’ve always been very event driven. They
were set up in the late ’80s, when they were referred to as Körner returns,
and in 1995 they became mandatory, so that everybody has to return the
commissioning data sets for every event: for an inpatient attendance, for an
outpatient attendance, for an A&E incident, for really the whole area of
predominantly secondary care. The standards are not quite as well defined as in
primary care, and we’re starting to work on that.

And as I mentioned earlier, those commissioning data sets are essentially
the ones that are used to support our payment by results. And the secondary
uses service is also the basis of the Hospital Episode Statistics. Now the
Hospital Episode Statistics, referred to as HES, have been running really
since 1995, so for 12 years, and they have always been like a sort of
mini SUS really. So we’ve been able to take those hospital episodes, we’ve been
able to link the patient records, and we’ve been able to do analysis and
reporting for a whole host of organizations, not least parliamentary questions,
which come up very, very regularly. So we’re a primary source for answering
health parliamentary questions.

And we’re sort of building on that with the upgrading of the commissioning
data sets to have a far greater sort of coverage. And we also have a mental
health minimum data set which is those patients that are covered within the
mental health trusts and systems.

MR. REYNOLDS: Monica, as we look at the time, if you could try to get
through your slides in maybe the next ten minutes. You came so far and you’re
creating so many questions, we want to make sure we have time.

MS. JONES: Sorry.

MR. REYNOLDS: No, no, don’t apologize.

MS. JONES: I think I’ve probably come with quite a lot of things to say. So
what is SUS designed to do? I think I’ve covered most of these things. It’s
pseudonymize patient-based data. It has a range of software tools and
functionality to enable users to analyze, report and present these data. It
enables linkage of data across care settings. We’re in the fortunate position
that we do have an NHS number which is assigned to everyone which is
essentially the primary case. Obviously there will be instances where the data
doesn’t have to, the NHS number where it isn’t necessarily captured because
we’re not at the point where we’ve got a national care record yet. But that’s
due to be in place by April 2010.
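
As a rough illustration of the linkage this enables, the sketch below groups episode records on the NHS number where it is present and otherwise falls back to the gender, postcode and derived-age combination described later in this session; the field names are assumptions, not the SUS record layout:

```python
# Minimal sketch (assumptions, not the NHS algorithm): linking episode records
# across care settings, preferring the NHS number and falling back to the
# "fuzzy" combination of gender, postcode and derived age when no number exists.
from collections import defaultdict

def link_key(record: dict) -> tuple:
    """Return a linkage key for one episode record (illustrative field names)."""
    if record.get("nhs_number"):
        return ("nhs", record["nhs_number"])
    # Fallback: only identical twins at the same address should collide.
    return ("fuzzy", record["gender"], record["postcode"], record["derived_age"])

def link_episodes(episodes: list[dict]) -> dict:
    """Group episodes believed to belong to the same patient."""
    groups = defaultdict(list)
    for episode in episodes:
        groups[link_key(episode)].append(episode)
    return dict(groups)
```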

And we’re actually going live with our Sonic Care Record this autumn which
is literally just the basic patient demographics which is being launched
through the Patient Demographics Service, and that data will flow up and down
the Spine and is accessible by anybody who has the authority to be able to
provide care for that particular patient. So that’s across primary and
secondary care.

Ultimately attached to that will be a much broader set of data that can then
be called upon and will flow down the Spine; obviously all of it is
encrypted, but you would have to go through a series of protocols associated
with that in order to get the data.

So it is to ensure the consistent derivation of data items and construction
of indicators for analysis, and to improve the timeliness of data. I think I’ve
already covered the governance model; the access control is very important, and
the use of pseudonyms to replace identifiers is something that we are starting
to be able to do. And as I mentioned before, we’ve carried out some recent
pilots, and there’s a report that is now in the public domain, published on the
first of January, 2007, Version 0.1, which can be downloaded. So the long term
is for authorized users to have access to data from the NHS Care Records
Service, but in the short term they can generate this from existing trust-based
commissioning data sets, which are still in clear, and we are running this
series of pilots. So, in terms of the downside of actually doing
pseudonymization and ultimately anonymization, particularly the effect on
business processes within your local service provider, the purpose of this
pilot was to ask: do we get the same results with the pseudonymized data; what
is the impact of pseudonymized data on business processes; to explore the
minimum data coding standards for the use of data, particularly for
commissioning and epidemiology; and to identify where the use of
pseudonymization is not sufficient to actually support the existing business
processes.

And I can give you material pointing to where that report is, which should
give you a good idea of what we’re doing in this area.

This is just a schematic view of what SUS is all about. First, you’ve got the
existing data flows. The truth as of the 23rd of August, 2007 is that the flows
we have at the moment are commissioning data sets for inpatients, outpatients
and waiting lists, and NHMVS. As of October of this year, the Personal
Demographics Service and the data from Choose and Book, which is an application
for increasing patient choice, will be linked into these data that are coming
in. They get loaded, because they don’t all come through at the same time, so
they need to be sorted and put together. Then there’s some processing and
validation. They get staged, and then they go into the data warehouse.
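
A minimal sketch of that load, sort, validate, stage and warehouse flow, with illustrative field names and checks standing in for the real processing and validation rules:

```python
# Minimal sketch (function and field names are assumptions) of the SUS-style
# flow described above: submissions arrive at different times, so they are
# loaded and sorted together, validated, staged, and written to the warehouse.
def load_and_sort(submissions: list[dict]) -> list[dict]:
    # Submissions do not all arrive at once; order them by period and provider.
    return sorted(submissions, key=lambda s: (s["period"], s["provider"]))

def validate(record: dict) -> bool:
    # Illustrative minimum checks before a record is allowed into staging.
    return bool(record.get("provider")) and bool(record.get("event_date"))

def run_pipeline(submissions: list[dict], warehouse: list[dict]) -> list[dict]:
    staged, rejected = [], []
    for rec in load_and_sort(submissions):
        (staged if validate(rec) else rejected).append(rec)
    warehouse.extend(staged)   # staged records become the warehouse load
    return rejected            # returned to the provider, keeping quality local
```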

And then there are a series of views of that data warehouse; the
pseudonymization actually happens at this particular point, so that the data
marts are not in clear. And there are a number that are supporting hospital
episode statistics and payment by results. And then the one at the top is a
reform that is coming in at the moment, which is about 18-week waits, which is
due to be in place by December of next year.

Now we’re realizing everything can actually, all the marks can actually be
housed within the main warehouse, well, not within the main warehouse, but
within the sort of BT provided sort of core. So we are actually moving to a
much more federated approach. But the big gray box around the outside stresses
the point that the security and confidentiality is consistent through access,
control and design. And so we’re able to bring other third party suppliers in
to spread the load, spread the risk and to bring their sort of specialisms to
work with the public sector to provide other analyses and reporting, in
particular clinical audit, practice space commissioning, and there will be a
number more that will come online. And these are essentially the lessons
learned so far that we realize that people were sort of being told that
everything would be in the big core warehouse and the marks would be there. But
the volume of the data and just the amount of processing that we’ve already had
to do over the last six months, it’s standing at 14 terabytes, and that’s just
transactional data that is coming through here. So we’ve got to spread the load
and spread the risk.

So, the current services: the first priority is the implementation of payment
by results. This is about providing a fair and consistent basis for hospital
funding, and it’s also a pretty good way of tidying up the data, because if
you’re concentrating on the remuneration and the payment, then people tend to
focus their minds from a provider perspective. And so we’re seeing a big
increase in the quality of the data, which is excellent.

Another one is practice-based comparison. There’s an application that was
launched in June of this year, an NHS-wide, web-based comparison tool which is
available online for the provision of general practice comparators and
quality data, the quality and outcomes framework, which is essentially the
framework that was put into the new GP contract in the U.K. It allows those
practices to look at where they fit in relation to each other. This is just a
screen shot that shows, at the strategic health authority level, outpatient
attendances per 1,000 population and where you fit in with everybody else, and
then there’s the opportunity to drill down to PCT and to practice, obviously on
a role-based access control basis. But it’s giving people real-time data. It’s
allowing them to actually get the feedback and to be able to go through an
interactive process of development.
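
The comparator shown on that screen shot, outpatient attendances per 1,000 population, is simple arithmetic; the practice names and figures below are invented purely to illustrate how such a league-style view might be computed:

```python
# Minimal sketch (made-up figures) of a per-1,000-population comparator like
# the one described above, ranked so a practice can see where it sits.
def attendances_per_1000(attendances: int, registered_population: int) -> float:
    return 1000 * attendances / registered_population

practices = {                      # illustrative figures only
    "Practice A": (2_450, 9_800),
    "Practice B": (1_310, 7_200),
}
for name, (att, pop) in sorted(practices.items(),
                               key=lambda kv: attendances_per_1000(*kv[1]),
                               reverse=True):
    print(f"{name}: {attendances_per_1000(att, pop):.0f} per 1,000")
```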

So the key issue, and it’s something that you read in a lot of your
documentation and that has been spoken about this morning, is the data
quality. Whether we like it or not, the secondary uses service is
populated with NHS data, and the providers and commissioners are held
responsible for making sure that all staff who are collecting it are fully
aware that it must be accurate, consistent and collected with purpose.

The data quality of NHS data has been poor, terrible in some places.
But if we’re going to be able to use this genuinely, then we’ve got to start
tackling these things, because there are quality challenges: we can’t
link the data if we don’t improve the quality, and poor quality leads to
incorrect financial payments, misleading comparatives, and potentially
unnecessary and inappropriate use of identifiable data.

So it is about getting, once again, our own house in order to be able to
provide this sort of service. And this graph just shows you an example of
that: a summary by strategic health authority which shows the percentage of
records with a missing primary diagnosis, where it hasn’t been coded and it
hasn’t come through. Essentially, the tariff is calculated on the primary
diagnosis, so if you can’t do that, you don’t get your money. This is really
focusing the mind, and these are the sorts of things that we’re concentrating
on in terms of getting the data flowing, increasing quality and moving that
forward.
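
A minimal sketch of the summary behind that graph, the percentage of episodes per strategic health authority with no coded primary diagnosis; the field names are assumptions, not the HES or SUS schema:

```python
# Minimal sketch (field names assumed): percentage of episodes per strategic
# health authority with no coded primary diagnosis, which matters because the
# tariff is calculated from the primary diagnosis.
from collections import Counter

def missing_primary_diagnosis_pct(episodes: list[dict]) -> dict[str, float]:
    total, missing = Counter(), Counter()
    for ep in episodes:
        sha = ep["strategic_health_authority"]
        total[sha] += 1
        if not ep.get("primary_diagnosis"):     # not coded / not flowed through
            missing[sha] += 1
    return {sha: 100 * missing[sha] / total[sha] for sha in total}
```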

Key developments that are coming up include commissioning data sets
Version 6, which is about 18-week reporting. Now, I said before that our
commissioning data sets were just event driven; they would just happen every
time something happened. We’ve now introduced the concept of patient pathway
identifiers, as well as patient identifiers, so that you can see that all of
these events against this particular patient are actually in the same pathway.
And we have the concept of a future care event, so that the clock starts
ticking when you send a commissioning data set. They should be at this point in
the pathway by this particular date, so that the trusts can then review those
and say, oh, we’re about to breach on all of these, and we can actually do some
follow-up and be proactive in terms of contacting patients or contacting other
people and organizations within that particular pathway to make it more
effective and efficient. So it’s a really exciting development that is
happening at the moment.
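
A minimal sketch, under stated assumptions, of how pathway identifiers let a trust spot pathways approaching the 18-week target; the field names and the 16-week warning threshold are illustrative, not the CDS Version 6 specification:

```python
# Minimal sketch (not the CDS v6 specification): using patient pathway
# identifiers to flag pathways that are about to breach the 18-week target
# so the trust can proactively chase them.
from datetime import date, timedelta

TARGET = timedelta(weeks=18)
WARNING = timedelta(weeks=16)   # illustrative "about to breach" threshold

def pathways_at_risk(events: list[dict], today: date) -> set[str]:
    """Events carry a pathway_id and a clock_start date (illustrative fields)."""
    starts: dict[str, date] = {}
    for ev in events:
        pid = ev["pathway_id"]
        starts[pid] = min(starts.get(pid, ev["clock_start"]), ev["clock_start"])
    return {pid for pid, start in starts.items()
            if WARNING <= (today - start) < TARGET}
```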

That’s due to be implemented by the first of April of next year. It’s
optional from the end of this year, then we’ll have to see how it goes. But
it’s due target is due to be here by December of next year.

And I’ve already mentioned the payment by results. And so it’s about this
bringing the standards in together but also operationalizing them and making
sure that they are fit for purpose. And this is the sort of thing, just a
screen shot of it that will come with the 18 weeks application that is very,
very simplistic here. But there will be much more complex analyses and also the
raw data will be available and downloadable for those providers who are
entitled to it that you can see straight away what the average length of wait
is, what the total elective inpatients are and where the spread is of elective,
non-elective care.

So we welcome the 12-month cycle, because we’re having to react; we’ve got
a ten-year plan, but we’re operational now. It’s happening. So we’ve got to be
able to react quite quickly. These are the sorts of things that we’re actually
doing: we’ve literally got upgrades and testing and new functional releases
almost every month within a 12-month cycle.

But we are able to react to things quite quickly now. We are able to put
things back through the system in a very rigorous change control and release
mechanism. There are certain annual uplifts that are associated with certain
data types and data sets. So we can plan the longer term ones, but we’re able
to be dynamic and reactive to others.

And then the future plans, which are essentially the addition of the clinical
data and extracts covering priority areas such as cancer, diabetes, heart
disease and renal, and clinical audit. Those audits are coming online next year
as part of, as you saw on the diagram, the federated approach; that’s a system
that is being procured by the Information Centre and managed by us.

Then data relating to patient prescriptions, and we already have an electronic
patient prescription system, and data related to primary and social care of
patients. And then the potential uses of the database, covering all areas and
trusts of England, are absolutely huge, and at some point it will expand beyond
NHS-commissioned care to include other patient-specific data, finance, support,
workforce, estates and National Audit Office information.

So we’re not quite there yet. And really by 2012, we really do expect to be
in certainly the first four, I’m not sure about the fifth one.

And that’s it. Thank you very much.

MR. REYNOLDS: And Monica, do I understand you’re going to be able to spend
some time with us today. Is that correct?

MS. JONES: Yes, I’m available all day today.

MR. REYNOLDS: Okay, so as we have these other discussions, we may be able to
play off of this. So what I would ask the Committee to do is one question each,
because I’m sure there’s going to be a list, and no run-on questions either.

SPEAKER: Why are you looking at me?

MR. REYNOLDS: I’m looking down the table. If you need to take it personally,
that will be fine also. All right, first I have Kevin, then I have Mark
Rothstein, then Simon.

DR. VIGILANTE: Actually, I haven’t chosen which one yet. So why doesn’t
somebody else go first.

MR. REYNOLDS: Mark Rothstein.

MR. ROTHSTEIN: I have chosen which one of them I want to ask. Thank you,
Monica. That was very interesting, and I know we all have lots of questions.

I’m wondering if you can help us with two things at the same time that we’re
working on. It’s my understanding that the NHS in England is developing or
working to develop technology that will allow patients to mask certain elements
in their health records, and maybe you can bring us up to date on where that
is.

And then the tying together question is, assuming that’s brought online at
some point, how will that affect what you do in terms of the secondary uses of
the data when possibly some of the data’s not going to be there.

MS. JONES: Okay. This is essentially a sealed envelope concept, and this is
all being taken forward through the care records service and its board. There
is this idea that it’s really a sort of opt-out process, as we were discussing
earlier.

Now at the moment, we don’t have an electronic patient record. Let’s realize
that that’s the situation we’re in at the moment. There isn’t the option to
mask your summary data. So that’s it, that’s even if the patient went in the
trust, then that’s the case. But it’s through this sort of potential opt out
process that patients, it’s expected that patients will be able to say, well,
I’m happy for that data to be in the system, but I don’t want it to be
available and accessible.

I don’t know the absolute fine details of where we’re at with that
particular subject program, except for the fact that we are moving towards a
care records service, and we’re bringing that along.

MR. ROTHSTEIN: Are you saying that the opt out will go to whether the
patient wants his or her entire records available, or a subset, for example,
substance abuse, mental health, other sensitive information?

MS. JONES: Because we’re doing it on a piecemeal sort of basis, then the opt
out is against particular subsets.

MR. ROTHSTEIN: Okay, thank you.

MS. JONES: And that’s part of the reason for doing it in this way so that we
can actually target it.

MR. REYNOLDS: Simon.

DR. COHN: It sounds like Mark has a little letter on his mind. Maybe I have
a question a little more central to what we’re talking about here.

I wonder if you’d talk a little bit more about pseudonymization; it
sounds to me like you’re more contemplating it than doing it. We are aware
in this Committee that everybody uses anonymization, pseudonymization and so on
somewhat differently. And I guess I’m sensing that your pseudonymization is
basically just – actually, I’m not sure what it is, so I’ve asked. Where do
you see its purpose and usefulness in terms of the overall construct of what
you’re doing?

MS. JONES: Our view of pseudonymization is essentially taking out the
absolute patient-identifiable item, which is the NHS number, and also the
ability to identify somebody through what’s referred to as fuzzy logic, which
is taking the gender, the postcode and the derived age of an individual, which
is the way that we identify people if we haven’t got an NHS number for them. So
it’s really only identical twins living in the same house who will get through
that fuzziness.

And it’s taking that out, holding the keys in a separate data
repository and having a mappable key that is non-identifiable against the
record. So that’s our pseudonymization. Anonymization is basically just taking
out any of that, but it can only really apply to aggregated data. And the
process for doing this, like I said earlier, we are still exploring; these
pilots that we’ve been running over the last six months to a year, and we’ve
got a third pilot to come, are looking at this because we have to be able
to provide the identifiable data back to the providers so that they can match
it to their local systems. And those are some of the difficulties that
we’re trying to reconcile at the moment: if it’s coming in through two
different routes or it has to go back out by two different routes, how do we
link those back together again?

And I think it’s safe to say that we’re having, you know, there are a lot
more implications that people haven’t necessarily sort of thought of when the
whole concept of pseudonymizing because a lot of patient identifiable reporting
and analysis is done at local secondary uses within sort of trusts and also
operationally being able to use this to manage services.
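
A minimal sketch of the pseudonymization just described: the NHS number and the gender, postcode and derived-age items are removed from the analytic record, a mappable token is attached, and the token-to-identity mapping is held in a separate key repository so identifiable data can still be returned to providers. The token scheme and field names here are assumptions, not the SUS design:

```python
# Minimal sketch (assumptions, not the SUS design) of pseudonymization with the
# keys held in a separate repository from the analytic data.
import uuid

key_repository: dict[str, dict] = {}   # held separately from the analytic data

def pseudonymize(record: dict) -> dict:
    token = uuid.uuid4().hex
    key_repository[token] = {           # the only place identity is retained
        "nhs_number": record["nhs_number"],
        "gender": record["gender"],
        "postcode": record["postcode"],
        "derived_age": record["derived_age"],
    }
    clean = {k: v for k, v in record.items()
             if k not in ("nhs_number", "gender", "postcode", "derived_age")}
    clean["pseudonym"] = token
    return clean

def reidentify(token: str) -> dict:
    """Used only when identifiable data must go back to the originating provider."""
    return key_repository[token]
```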

DR. COHN: I guess you have a report, so we can look further at what
is happening with this pilot. Now that I understand your pseudonymization is
like what we talk about as pseudonymization, I was just trying to figure out
what purposes you were thinking of using that data for. And it sounds like
you’re trying to play around with operational reports, and maybe that’s not so
helpful, maybe it has problems. All these other reports for quality and
research utilization, is that what you were going to be using this
pseudonymized data for?

MS. JONES: That’s definitely what it’s aimed to be used for because the
secondary uses service is very much about the output coming out here for the
end users, and it’s ultimately being able to do the sort of web based
applications for practices and PCTs and strategic health authorities. But it
also down here is extracts for non-NHS organizations.

There will be context there where anonymized data in its aggregated form is
just not granular enough for that kind of analysis to take place, and therefore
the ability to have a pseudonymized version to be able to push out
predominantly in research is something that we’re really trying to do.

MR. REYNOLDS: Is there some part of your document you could direct us to,
you just used two terms, local secondary uses and operations.

MS. JONES: There isn’t a sort of formal –

MR. REYNOLDS: Can you work one up before you leave?

MS. JONES: Yes, local secondary uses is not really a formal definition, but
it happens all the time. I have actually got another presentation that
tackles some of those direct uses, the local secondary uses and the
NHS-wide secondary uses, on a graph that I could include, but I just thought
it would get bogged down in it.

MR. REYNOLDS: No, that’s fine.

DR. VIGILANTE: I was just wondering, in the same vein that Mark was
going down. So you gave an example before: somebody comes in, gets an imaging
procedure that is going to be put in your PACS system, and at some point
there’s some interaction with the patient where somebody says to them, you
know, this could be anonymized or pseudonymized and used for other purposes, is
that okay with you. Who does it? Is it explicit as to who ought to do that, the
doctor versus the technician versus somebody else? Is it done verbally, or is
there a piece of paper that you give somebody to read and say, you know, read
this, figure it out, sign it, like we do with HIPAA. Nobody really
understands what we’re doing, but we feel good about it.

MS. JONES: No, it’s done verbally. It’s done verbally by the clinician at
the first point that this actually sort of happens.

DR. VIGILANTE: Right. Every time an image is taken or –

MS. JONES: No, just the first time.

DR. VIGILANTE: So if you’re okay the first time, you’re good for life?

MS. JONES: You’re good for life, but you’re given the details about the
guarantee.

DR. VIGILANTE: Right.

MS. JONES: You’re given the patient leaflets, your appropriate language in
audio, and you’re told that at any stage you can come back and you can opt out
and there’s a way to address them.

DR. VIGILANTE: You sign something?

MS. JONES: It’s recorded on the record.

DR. VIGILANTE: In the doc reports.

MS. JONES: Yes.

DR. VIGILANTE: And how do clinicians feel about this? Do they get a lot of
push back on that? Is this like too much they’ve got to be asking people, or is
it not –

MS. JONES: No, I don’t think so because we’re doing it as each application
comes on line. So we’re not just going, and another thing, can we just go down
the list. So it’s very much in the context of what is happening.

DR. VIGILANTE: And is there standardized language that they sort of say you
must say something like this, or does everybody kind of wing it on their own?

MS. JONES: No, there’s a standardized, it’s almost a brief that is given to
them, and it’s based on the care record guarantee. So it’s very, very clear
direction, and it is absolutely standardized.

DR. VIGILANTE: Right. Okay.
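
A minimal sketch of the opt-out model as described in this exchange, with the choice recorded on the record at the first discussion and checked before a record is included in any secondary-use extract; the field names are illustrative, not the NHS care record model:

```python
# Minimal sketch (illustrative fields, not the NHS care record model) of the
# verbal, once-only informed opt-out: recorded on the record at first contact,
# checked before any secondary-use extract, and reversible at any time.
from datetime import date

def record_consent_discussion(record: dict, opted_out: bool, clinician: str) -> None:
    record["secondary_use_opt_out"] = opted_out
    record["consent_discussion"] = {"date": date.today().isoformat(),
                                    "by": clinician,
                                    "basis": "care record guarantee"}

def include_in_secondary_use(record: dict) -> bool:
    # Opt-out model: data flows unless the patient has actively opted out.
    return not record.get("secondary_use_opt_out", False)
```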

MR. REYNOLDS: Marc Overhage.

DR. OVERHAGE: I guess I do have a lot of questions, and thank you very much.
I told you before the meeting I’ve had a chance to look at your website, but I
found the presentation very helpful in pulling in some of the details.

One of the things I’m still not quite clear on is the commissioning data
sets. If I understand those, and I probably have it wrong, essentially these
are data sets that have been developed, selected and agreed to, with some
oversight by this ROCR committee and perhaps others, that essentially you
require to be reported from each of these different settings, and that helps
build the data sets to become useful for the kinds of secondary uses that you
have.

MS. JONES: Yes, absolutely. With the commissioning data sets, of which there
are 19, there are about three or four in each sort of setting, and there are
not very many data items within each, but each time quite a lot of demographic
information is collected. So at the moment they're not as efficient and
effective as they could be.

But as we move to linking the records, and we won’t necessarily have to
transmit those as often, they’re approved through a very rigorous development
process and they only become an NHS standard once the information standards
board has actually approved them. But it’s sort of my department within the
Information Centre that develops those and maintains them and supports them.

And yes, it's very much the sort of event that is happening to that
particular patient at that particular time. But because they're monthly
returns, they only flow monthly at the moment. So there's essentially an
aggregate return for all of the inpatient attendances for a particular
hospital for a particular month, and then they're sent through.

It used to be an entire nationwide clearing system, which we switched off as
of the 31st of December last year, and it now goes into the secondary uses
service. But as you see on this particular diagram, there are extracts and
reports to all the PCTs, trusts and SHAs, and they can download their view of
the data at any time. So they've got online access to download those as a
commissioner.

But at the moment, particularly because we haven't really gone to fully
pseudonymized or anonymized data, there's very limited access, and it's really
only the providers who are able to get the data out until we're absolutely
certain that we've got it right on the pseudonymization.
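
A minimal sketch, in Python, of the monthly aggregate return just described: one return per hospital per month, built from event-level records. The field names and the aggregation shown are illustrative assumptions, not the NHS commissioning data set schema.

    # Illustrative sketch only: a monthly aggregate return per hospital,
    # loosely modeled on the commissioning data set flow described above.
    # Field names are assumptions, not the NHS schema.
    from collections import Counter

    def monthly_return(events, hospital, month):
        """Aggregate one hospital's inpatient attendances for one month."""
        relevant = [e for e in events
                    if e["hospital"] == hospital and e["month"] == month]
        return {
            "hospital": hospital,
            "month": month,
            "attendances": len(relevant),
            "by_treatment": dict(Counter(e["treatment"] for e in relevant)),
        }

    # Hypothetical event-level records captured by a patient administration system
    events = [
        {"hospital": "A", "month": "2007-07", "treatment": "elective"},
        {"hospital": "A", "month": "2007-07", "treatment": "emergency"},
    ]
    print(monthly_return(events, "A", "2007-07"))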

MR. REYNOLDS: Bill Scanlon?

DR. W. SCANLON: In terms of the warehouse, you talked about sort of non-NHS
users. Have you identified a group or types of users that you wouldn't give
data to? Because in our context we've at various times talked about commercial
uses without defining that, and thinking that it's different than research,
it's different than public health.

MS. JONES: There are sort of three classifications of users. There are those
who are referred to as the NHS family, so it's actually the NHS, and the
expectation is that they will obviously have access to all of the data.

There is then a sort of next level that is sort of non-NHS, but public
sector. So it’s very much our public health observatories, our registries, the
broader public sector sort of support mechanism for the health and social care.

There is a slightly more rigorous approach for them to go through: you have
to set yourself up, you have to become a registration authority. You have to
be able to access the system through our network. You have to be cleared for
your chip and PIN card to be able to get these data, and that process of
becoming a registration authority is the mechanism for doing that.

And then there is the rest. That does include commercial providers. There is
no restriction at the moment as to who will or who won't be able to have access
to this. But it is at the moment being considered on a case-by-case basis for
that third group in terms of who would have access, what their purpose is, and
the associated –

DR. W. SCANLON: Is there a history here yet of having turned down a number
of applications, or –

MS. JONES: I don't know the exact detail. I think a lot of people are
probably asking. There are probably a lot of people that have been told that
we're not quite there yet even to be able to consider their application. But
there are quite a few public-private partnerships in particular, and we even
have one within the Information Centre with a company called Dr Foster
Intelligence. They get regular extracts, and they provide a certain portal and
an NHS Choices website which has been launched in the last few months. Dr
Foster Intelligence, who are a private company, although they are associated
with Imperial College London, and ourselves at the Information Centre are
working very much within that sort of public-private partnership. But they've
got the expertise, and they can do the marketing, they can do all that stuff.
We're not really very good at that. So we work very closely with the private
sector as well.
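
The tiered access model Ms. Jones describes above might be sketched roughly as follows; the tier names, checks and example are assumptions for illustration only, not the Information Centre's actual rules.

    # Rough sketch of a three-tier access model: NHS family, non-NHS public
    # sector, and everyone else considered case by case. Names are illustrative.
    from enum import Enum

    class UserTier(Enum):
        NHS_FAMILY = 1      # expectation of access to all of the data
        PUBLIC_SECTOR = 2   # must complete registration authority checks
        OTHER = 3           # commercial and other users, case-by-case review

    def access_allowed(tier, registered=False, case_approved=False):
        """Return True if a requester in the given tier may access the warehouse."""
        if tier is UserTier.NHS_FAMILY:
            return True
        if tier is UserTier.PUBLIC_SECTOR:
            # e.g. registration authority status, network access, smartcard clearance
            return registered
        # Everyone else is considered on a case-by-case basis.
        return case_approved

    # Example: a public health observatory that has completed registration
    print(access_allowed(UserTier.PUBLIC_SECTOR, registered=True))  # True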

MR. REYNOLDS: Mike.

DR. FITZMAURICE: Thank you very much, Monica, for coming over here and
sharing your experiences. It’s good to have another country that has gone
through some of the same things that we’re going through.

I want to follow up on Bill’s question with a separate question of my own
and get more specific. If I were a pharmaceutical company wanting to use your
data to determine which practitioners are or are not using my drugs for their
patients, would that be an appropriate use of the secondary –-

[Teleconference automatic interruption.]

DR. FITZMAURICE: That is, do you share this data with pharmaceutical
companies today? They may ask for the number of prescriptions of a drug by
specific practitioners, or they might ask for the number of diabetes patients
by specific practitioners so they could follow up with a diabetes drug. Is that
an appropriate use of the secondary data?

MS. JONES: Yes, it is. And I mean, there is a precedent set through the GPRD,
which is the General Practice Research Database, which has been up and running
for about ten years or so, and that's actually managed through the MHRA and
also the Medical Research Council.

That would only be a sort of subset, once again a sort of mini-SUS, and some
of the main users of GPRD have been pharmaceutical companies. So there is an
expectation that the secondary uses service will provide that service as well,
but will provide the whole coverage. I think GPRD only has coverage of between
5 and 10 percent, whereas obviously SUS would cover the lot.

MR. REYNOLDS: Okay, Justine, then Tom and then break.

DR. CARR: Monica, thank you so much. This is interesting. I have a question
about the office encounter, and I want to make sure I’m understanding
correctly.

When a patient is seen by a provider, there are 19 data sets that might
apply, within which they might ask questions and record the data. I guess what
I'm asking is, you said we don't have an electronic health record, so are the
data elements we're talking about questions that are asked of each patient who
might have one of the target conditions?

MS. JONES: Well, the CDS are not necessarily captured through a direct verbal
contact with a patient. The elements within them can be just the fact that you
know that you've got this person booked in, and they're coming for an elective
treatment, and these are essentially the things that happen to them. So it's
not necessarily an interaction with the patient.

DR. CARR: So maybe you, I guess I'm thinking about the fact that you reduced
the work of the physicians by 11 percent in the first year.

MS. JONES: Yes.

DR. CARR: So what is it that they do, and how long does it take, and what
does an 11 percent mean?

MS. JONES: The 11 percent was really about reviewing the existing
collections, reports, surveys and audits and saying, right, where are the
duplications, and having a view of the whole lot, because prior to the
Information Centre being set up, nobody had actually taken an overarching view
or taken responsibility for all of these collections, and things had been
springing up all over the place, and nobody had been regulating them.

So the reduction in burden at the moment is about us saying, do you know that
actually ten of your 11 data items are covered by this particular return, or
this is actually captured by eight items in the commissioning data sets, which
flow anyway as a mandatory flow captured from the patient administration
system. Do you really need those three additional data items, what is the
purpose of those, what are you going to use them for? And they go, well, we
definitely need those. Okay, well, then what is the best way of actually
capturing that data? Or, no, we don't really need those. Okay, well, we'll use
that particular data set, and we'll just switch the other ones off. And it's
about that rationalization of having the overall picture, of knowing everything
that is flowing through the systems and what people are being asked to do, and
saying, right, okay, we have taken responsibility to reduce that burden and to
stop this unnecessary waste.

DR. CARR: Thank you.

MR. REYNOLDS: Simon?

DR. COHN: Monica, again thank you. One of the great privileges of the Chair
is you get to ask the last question before the break.

You had mentioned, and I'm just trying to sort of put the various pieces
together, so maybe you can clarify for this meeting. You had talked about three
uses or three groups in terms of uses of the data warehouse, and I presume
there's a part that's local probably as well as national in terms of how
people are handled.

And you talked about a case-by-case basis decision. And then earlier you had
also talked about Caldicott Guardians, and I'm a little vague even about the
Caldicott Guardians, but it's probably a national and local sort of model. Is
the use of the Caldicott Guardians and all of this to help inform this
case-by-case basis decision?

MS. JONES: Oh, absolutely. The case by case would have to have, if they want
patient-identifiable data, they would have to have a Section 60 exemption from
PIAG. They would have to have Caldicott Guardian approval. It would have to
have Ethics Committee approval. It's the same as what happens with clinical
trials. So it's looking at that particular context, and it would literally
have to be signed off by the Caldicott Guardian for that particular data, or,
if it was a national one, it would essentially have to have a group decision
by the Caldicott Guardians that this is a suitable use of the data and that
this is a trustworthy and reliable organization that could use it.

So there is a very strict given structure, but it is done on a case by case
basis. And so, therefore, it can't be done in a wholly objective way. There's
got to be a certain amount of rationalization, I think, associated with it,
because we haven't done it enough to know that we've absolutely got it right.
And particularly with the sort of non-NHS users, we're just at a preliminary
stage, but we don't want to close any doors, because the whole purpose of
having a secondary uses service is that we're going to really make a
difference and give people access.

DR. COHN: I'm presuming, and let me see if I can describe this one.
Obviously, these people are probably not generally getting access to straight
personally identified health information. I mean, does this change in a world
of pseudonymization, or do you think all of the same principles apply?

MS. JONES: I’m not entirely sure what –

DR. COHN: Well, I was just trying to think of the, I mean, on the one hand,
I can see very rigorous standards if you’re looking at personally identifiable
information.

MS. JONES: Yes.

DR. COHN: But I’m actually wondering does this change at all in this world
that you’re contemplating either de-identification, pseudonymization, or is it
absolutely the same thing?

MS. JONES: It's the same thing, but there are certain steps that won't
necessarily have to be met; the checklist will still include them all. It's
just that if you're asking for anonymized data, you don't need PIAG approval,
so that's a cross or a tick against the PIAG bit. But the overall checklist
and rigor are the same.
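
A rough sketch of the approval checklist as just described: the step names follow the testimony, but the structure and the function itself are illustrative assumptions.

    # Illustrative checklist: anonymized requests skip the PIAG step, but the
    # rest of the rigor is the same. Not an actual NHS workflow definition.
    def required_approvals(data_type):
        """List the sign-offs needed for a given request type."""
        checklist = ["Caldicott Guardian approval", "Ethics Committee approval"]
        if data_type == "identifiable":
            # Section 60 / PIAG only applies when patient-identifiable data is sought
            checklist.insert(0, "Section 60 exemption from PIAG")
        return checklist

    print(required_approvals("identifiable"))
    print(required_approvals("anonymized"))  # PIAG step ticked off as not needed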

MR. REYNOLDS: Okay, excellent, thank you. We’re glad you’re going to be
sticking around. I’m sure you’ll have friends at every break and lunch. But
with that, we’re only going to take a ten-minute break since we’ve got a tight
schedule this morning. So back at 10:40 per the clock to the right. Thank you.

(Break)

MR. REYNOLDS: If the presenters wouldn’t mind joining us on the other side
of the table here. Sean Flynn, Steve Labkoff, Micky Tripathi and who’s going to
be presenting from Manatt Phelps and Phillips?

MS. MURCHINSON: Julie Murchinson is on from Manatt.

MR. REYNOLDS: By phone, okay.

MR. REYNOLDS: Micky, are you on the phone?

MR. TRIPATHI: Yes, Hi, Micky Tripathi from Mass eHealth Collaborative.

MR. REYNOLDS: Why can’t we hear the phone better? Could you say something
again, please.

MR. TRIPATHI: Yes, Micky Tripathi.

MR. REYNOLDS: Yes, that’s great, yes, thank you.

MS. MURCHINSON: And this is Julie Murchinson from Manatt.

MR. REYNOLDS: Oh, okay, good. So we have a large panel. So if each of you
would keep your remarks crisp, that would be great so that we would have enough
time in the end to ask questions. So I’m going to, unless somebody tells me
different, I’m going to go right down the list in order. So with that, the
first presenter, and we need a microphone for him, please, and that would be
Sean Flynn from the Program on Information Justice and Intellectual Property
from American University. So, Sean?

Agenda Item: Health Data Protection Solutions Needed in
HIE

MR. FLYNN: Okay, thank you for having me here today. My name is Sean Flynn.
I’m a professor at American University, Washington College of Law, and I run a
program called the Program on Information Justice Intellectual Property, and I
also serve, and I guess my role today is I serve as counsel to a group of
public interest amici in a case involving a challenge to a recent law that was
passed in 2006, the New Hampshire Data Privacy Act, which limits the ability of
pharmacies and PBMs and other entities to transfer to pharmaceutical companies
and health information organizations patient and prescriber identified
prescription data for marketing purposes. The law, and we can get into this in
a little more detail, allows the use and transfer of such information for
non-marketing purposes, including educational purposes and to do studies,
research, et cetera, but it doesn't permit that data to be used or transferred
for commercial marketing purposes, specifically the marketing of pharmaceutical
products.

The group that I represent includes the New Hampshire Medical Society, the
group of doctors that petitioned for this law to be passed, AARP and other
patient rights groups and collections of state legislators that are considering
similar legislation in other states.

So I think what I would like to do is describe briefly what some of the
concerns that prompted the legislation are. I’m not going to get in detail
about what the legal opinion is and what our arguments are in response, but I’m
happy to take questions on that if anybody feels like being a lawyer today, and
I know there’s one in the room with us.

And then briefly discuss, there’s been a couple different laws that have
been passed in other states, specifically Maine and Vermont that have similar
goals to the New Hampshire legislation but have taken different vehicles. So
I’ll briefly describe what those are and hopefully not take up too much time in
that endeavor.

So as I mentioned, New Hampshire's is the first law in this country to
attempt in some manner to regulate the prescriber-identified portion of the
prescription data trade. Of course, HIPAA already regulates the patient aspect
of that.

The New Hampshire law did add an additional state cause of action for the
trade of patient identified information as well out of the belief that HIPAA
doesn’t have sufficient remedies, and that adding state remedies to that
already federally prohibited activity would be helpful in the state.

A brief note on the history of this practice, which you may know all about
given the topic of what you've been studying. But basically the practice that
we're talking about today starts in 1994. That's the year that IMS released
its latest iteration of what started as a sales force tracking mechanism.
Pharmaceutical companies have been tracking their sales forces and trying to
measure what doctors are prescribing in various kinds of aggregated ways since
really about the 1940s and 1950s.

But that was done usually through surveys and samples that didn’t
individually track every prescriber and their individual prescribing habits and
attach doctors’ names to those prescribing habits.

The first time we really had a full data set to do that was 1994, and between
1994 and today there have been a large number of other companies who have
entered this area, some of them in subspecialties, some of them competing
directly with IMS in the kind of broad range of pharmaceutical prescribing
habits.

The reason for that, of course, is the digitization of prescriber records
pushed by the entry of pharmacy benefit managers into the chain of distribution
and compensation for drugs. Today, PBMs digitally manage about 95 percent of
all prescriptions, so roughly 95 percent of all prescriptions are transmitted
through some data set that can be easily sold and transferred to other parties.

Previous to that, of course, you know you had actually handwritten
prescriptions. It was very hard for pharmaceutical companies to track anything
like 95 percent of the prescriptions in the country. But that’s now possible.

So in 1994 and since, there's basically been a relatively unregulated
exchange of information between pharmacies and other parties within the
prescription chain and pharmaceutical companies, with health information
organizations acting as intermediaries.

So the state of play today is that pharmaceutical companies can receive from
various vendors detailed computer-generated statistics on pretty much every
prescriber in the country, exactly what they're prescribing on a day-to-day
basis. There's a quote in the record of the New Hampshire case that essentially
says that a detailer can walk into a doctor's office at nine o'clock in the
morning and at twelve o'clock in the afternoon of the same day figure out
whether that doctor prescribed the drug that was being pushed by the detailer
in that transaction. That, in itself, as you can imagine, has presented a real
problem of undue influence from pharmaceutical marketers towards doctors.

A pharmaceutical marketer can walk into a doctor's office and know exactly
what that doctor's prescribing. If they're prescribing, for instance, a generic
medicine for a specific ailment, and they want to push a newer branded product,
they know exactly what that doctor's prescribing, how much, to what patients.
They know what mix, they know what percentage is branded and non-branded. There
is information in the House Oversight Committee record from the Merck
investigation that showed that they actually came up with detailed ratings of
doctors from an A+ to a D based on the percentage of Merck versus non-Merck
products that doctor was prescribing for every single ailment that Merck
treats.
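
To make the arithmetic concrete, a hypothetical sketch of the kind of prescriber rating being described; the thresholds and data here are invented, not figures from the Merck investigation.

    # Hypothetical illustration of grading a prescriber by one manufacturer's
    # share of their prescriptions. Thresholds and data are invented.
    def grade_prescriber(fills, maker="Merck"):
        """Return a letter grade based on the maker's share of the doctor's fills."""
        if not fills:
            return "n/a"
        share = sum(1 for f in fills if f["manufacturer"] == maker) / len(fills)
        if share >= 0.75:
            return "A+"
        if share >= 0.50:
            return "B"
        if share >= 0.25:
            return "C"
        return "D"

    fills = [{"manufacturer": "Merck"}, {"manufacturer": "Merck"},
             {"manufacturer": "Generic Co"}]
    print(grade_prescriber(fills))  # "B" -- two of three fills are the maker's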

Now that allows pharmaceutical marketers to walk into that doctor’s office
and really tailor messages specifically towards what that doctor is prescribing
and specific critiques and presentations of whatever data is out there to
attack the generic medicines, for instance, or to attack whatever else they’re
using. And it’s an advantage that branded pharmaceutical companies have that
generics don’t because generic drugs don’t have the same financial incentives
to send individual detailers out to target individual prescribers in this way.
So it creates a certain undue influence within the marketing of prescription
drugs.

Additionally, that information can be used to target gifts and compensation
and speaking engagement invitations, et cetera, to those doctors that meet what
Merck called the A+ standard. The more you prescribe the specific
pharmaceutical company's targeted medicines, the more compensation through
gifts, meals, et cetera those pharmaceutical companies can shower on doctors.

And we know from popular press that some doctors receive tens of thousands,
even hundreds of thousands of dollars a year. They’re the primary targets of
pharmaceutical marketers. Now those gifts, a lot of that gift giving happens,
of course, absent the data. But the data allows an extremely improper degree of
influence to be linked to those gifts because pharmaceutical companies can
specifically observe and reward prescribing behavior. In effect, you have
doctors that can be incorporated into the compensation chain of the
pharmaceutical marketing companies, and that is very troubling to many of the
doctors’ groups that I represent.

So let me just talk a little bit about the rise of the backlash. As I
mentioned, these data systems really came about in 1994. Between 1996 and 1998,
there was a huge backlash, first in Canada, then in Europe. This practice, the
specific prescriber-identified tracking of prescription records, has been
banned in several Canadian provinces and in all of Europe. In those countries,
health information organizations can still measure and track prescriptions,
but not prescriber identified. They can track them regionally. They can track
them in blocks. They can track them in specialties. They can still figure out
how they're doing vis-a-vis competitors, but they can't track the individual
prescribing behaviors of individual physicians. That's the new thing that's
happened in the last decade or so, and it's becoming a peculiarly American
phenomenon.

In the U.S., the story really broke with a front page story in the New York
Times in 2000, and there's been a series of large articles in national papers
describing various levels of physician outrage at this practice since then.
One of the reasons for physician outrage, in addition to occasionally being
told by pharmaceutical marketers that they haven't lived up to their
commitments to the marketers and being informed that their individual
prescribing behaviors have been tracked, an additional concern – well,
excuse me, let me fast forward from that a little bit.

So numerous physician groups around the country have acted to try to limit
this data. It's happened in local medical association resolutions, and it's
happened in the AMA through several resolutions that have been put forward.
The AMA, however, sells its physician data file to pharmaceutical companies
for about $40 million a year, and so it has not followed through on various
resolutions that have been pushed through the AMA to try to propose federal
legislation on this issue, which is one of the main reasons we see the main
action going on in states.

So as I mentioned, New Hampshire was the first state to act, and there’s a
case going on right now. A district court has held that the New Hampshire Act
is unconstitutional on free speech grounds, and that case is currently going up
into the First Circuit.

And there are essentially five interests that the state is representing to
the First Circuit that lie behind this legislation. The first, as I mentioned,
is to curb undue influence within pharmaceutical marketing, the one-sided
nature of marketing in this area because of the lack of incentives for
generics especially to mount counter-marketing efforts and the extremely high
cost for states to mount their own counter-information campaigns.

So you have a situation in which most doctors are getting, for most drugs,
heavy marketing on one particular drug but have no real counter-messages
unless they reach out on their own to evaluate the information that the
marketers are giving to them.

Second, of course, is just the cost and health impacts of this system, of
the undue influence within the system. So there’s been about a fivefold
increase in drug spending amounts over the last 12 years or so, and studies
have shown that about a third of that increase has been marketing induced
shifts of prescribing behavior from cheaper drugs to more expensive drugs.

Now we can't know exactly how much of that one-third is inappropriate
shifting or appropriate shifting. But a sizeable amount of shifting has been
demonstrated in various studies from cheaper, often more effective generic
drugs to newer, more expensive, but often less effective treatments for the
same ailment. So that's the cost linked with the health impacts. The shifting
that is going on moves prescriptions towards medications that are often worse
for patients and also cost more, both of which imperil the health system.

Another interest, of particular concern to physician groups, is standards in
the medical profession. The ability to use data to incorporate doctors within
the compensation schemes of pharmaceutical companies, the ability to observe
and reward prescribing behavior, threatens the ethics of the medical
profession, and the more it becomes public to patients, threatens the bond of
trust between patients and doctors, the trust that a doctor's prescribing
something for the patient because it's the best for the patient, not because
they're getting more gifts and speaking engagements because of that practice.

Next is just the rise in vexatious sales practices that have coincided with
the ability of pharmaceutical companies to track individual prescribing
behavior. Over the same period of the last 12 years or so, data mining has
become widely used to target prescription marketing efforts. The number of
detailers in the country has doubled. We now have over 100,000 individual
detailers in the country.

The average primary care physician receives 28 visits from detailers a week.
Now if you can imagine yourself in your consumer mode, what kind of lobbying
would you be doing at the federal and state level if you received 28 marketing
phone calls at your house a week? You'd try to ban the practice that's leading
to it, right.

The pharmaceutical industry, as you know, spends about $27 billion a year
now on marketing. That number has gone up about two or three times in the last
several years, and 85 percent of that is targeted directly towards doctors.

And finally, I just want to hit on patient privacy. These physician records
are patient de-identified. However, that doesn't mean that marketers don't
know exactly what prescriptions a specific patient is getting. Not by name,
but they actually do often track specific patient records by number, and then
they track the prescriptions for that patient over time.

So they can see whether you, a specific patient walking into a doctor's
office, they don't know your name and address, but they have you identified,
and they know whether you've shifted your prescriptions, for instance, towards
a generic. And if they can see that information, come into the doctor's office
and target that doctor to switch you back to the brand, your medical treatment
is being specifically targeted for marketing without your knowledge,
regardless of whether your name is mentioned or not.

So there’s a patient privacy issue in these laws and in these issues, even
though the patient specific names are not identified in the practice.
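
A hypothetical sketch of the mechanism described here, showing how a persistent patient number, with no name attached, can still reveal a switch to a generic over time; the data and field names are invented.

    # Records are patient de-identified, but a stable number lets a marketer
    # see a brand-to-generic switch over time. All data here are invented.
    fills = [
        {"patient_id": 4471, "date": "2007-01", "drug": "BrandStatin"},
        {"patient_id": 4471, "date": "2007-03", "drug": "BrandStatin"},
        {"patient_id": 4471, "date": "2007-05", "drug": "generic simvastatin"},
    ]

    def switched_to_generic(history):
        """Return True if a numbered patient moved from a branded to a generic fill."""
        ordered = sorted(history, key=lambda f: f["date"])
        drugs = [f["drug"] for f in ordered]
        return any("generic" in later and "generic" not in earlier
                   for earlier, later in zip(drugs, drugs[1:]))

    print(switched_to_generic(fills))  # True -- no name or address needed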

As I mentioned, there's a lawsuit going on. There's an appeal. There are
several states that have acted since the District Court handed down its
decision, and I'll just mention three things that are going on.

First, Vermont has adopted a law only allowing the prescriber identified
records to be traded if the doctor specifically opts in. So on their medical
licensing information, there’s a box and a doctor can choose to check that box
and allow their prescriber identified information to be traded.

Maine has adopted an opt out provision. Also on its licensing materials, it
permits a doctor to check a box and opt out of the trading of its information
to pharmaceutical companies.

Now there's a problem with both of these laws, which is that they don't
actually directly address the state's overriding concern of reducing the undue
influence of marketing, and they allow doctors to basically opt in to the kind
of compensation system that is driven by the data.

And finally, a couple states are considering, none have passed, legislation
to regulate the detailers themselves, to regulate the messages that they can
bring, to regulate deceptive and misleading advertising and to require the
detailers to be licensed professionals instead of basically sales forces.

So those are the other options that are out there, and I’ll stop there and
allow the responses from my panel members and hopefully some time for some
questions.

MR. REYNOLDS: Sean, thank you, and it’s been requested that, Julie, you go
next since we’re talking pretty much about the same type of thing. So is that
okay with you, Julie?

MS. MURCHINSON: Sure, that’s fine.

MR. REYNOLDS: Okay, please continue.

MS. MURCHINSON: Okay, so I will try to stick to the presentation and the
slides that you all have in front of you as opposed to rebutting directly on
what Sean was saying. But I will acknowledge a few things that he did say along
the way.

Manatt Phelps & Phillips is a law and consulting firm that has been
working with and representing private sector companies and membership
associations in this debate and, I think, pretty much on the opposite side of
the testimony you just heard.

This has allowed us to really do a lot of state surveys and activity on this
issue across the country, to analyze specific state laws in many of the
contentious states, and to evaluate the impact of these laws on health
information exchange efforts, since we do a lot of work in that area as well.
And given the work we've been doing on some of the HITSP and the privacy and
security work from ONC, we've also had the opportunity to evaluate this law in
the context of that privacy and security effort to really understand what the
potential impacts could be.

So today I’ll talk about the motivation behind this state data restriction
movement. I’ll try not to duplicate what Sean just said. I’ll talk a little bit
about the goals for the New Hampshire law and what some of the unintended
consequences could be for not only New Hampshire citizens but also for others
across the country, and discuss a little bit about some of the efforts that we
see starting to address this situation, but clearly more is needed to really
address the issue.

So I’ll start with the slide titled what is behind the state activity, I
believe it’s slide three. And is someone turning slides there?

MR. REYNOLDS: Yes.

MS. MURCHINSON: Okay, great. So as Sean mentioned, this is clearly a
physician driven concern, really highlighting issues around the budget for
controlling drug costs. Clearly, public perception around pharma marketing
activities and really raising questions around the data privacy. There’s been a
significant lobbying movement behind this that has been fueling efforts across
the country and really addressing this issue in almost half the states in the
Union.

But for the most part, this is really a very anti-pharma movement that, as
we'll highlight, may or may not be acknowledging a lot of the other unintended
consequences of this action.

So, next slide, slide four. As Sean mentioned, the purpose is to really
protect citizens, to protect the privacy of patients and physicians, which is
interesting, and to lower health care costs. And the proposed law really does
focus on this prescriber identifiable data. However, it is our opinion, from
looking at the legal aspect of this, that the law is written in a vague way
that could potentially impact more than just prescriber identifiable data, and
the impact on prescriber identifiable data itself is not insignificant.

I will just stop here to mention I am not an attorney. I am a consultant who
works with a number of attorneys on this issue. So any questions that might get
into the law may not be my strong suit, but we’ll work through that.

So the next slide, on slide five, from our perspective, this is a game
changing issue for a number of reasons. Because of the significant effort
behind this at the state level and almost half of the states that have analyzed
this, there’s a potential to start to pass laws that impact privacy in a
significant way and would create a patchwork of privacy and data restrictions
across the country that would limit our ability to have interoperable privacy
policies and procedures.

On the patient privacy front, we really feel that the patient de-identified
data that's being discussed here poses no threat to patient privacy. So this is
not a patient privacy issue. However, since HIPAA doesn't preempt state law,
that's an issue. And HIPAA specifically exempted patient de-identified data for
a reason. So we don't want to be advancing laws in specific states that start
to compromise that previous decision.

One of the major concerns here is that the New Hampshire law is starting to
really advocate for physician privacy, providing rights for physicians to not
have others see the data about the jobs that they are doing and not be
evaluated for how they are doing in caring for patients. And this is not
necessarily just about whether or not pharmaceutical companies can see their
prescribing patterns and potentially employ tactics to address that. This is
about whether the entire health care system, and those people who are
appropriate to see physician behavior and physician prescribing patterns, have
the ability to do that. So we believe that a law like this creates, for really
the first time, a physician privacy platform, and that's not necessarily a
productive thing for health information exchange.

And mostly because this is really about marketing concerns, you know, this is
game changing because it's taking a very different approach to marketing
concerns that frankly are starting to be addressed in many other ways. There
are other states in the Union who have passed legislation requiring
manufacturers to report the amount of money they spend on marketing and to
register their sales reps with the state. There have also been laws passed to
ban manufacturers from providing physicians with gifts, and the pharma industry
has also started to basically self-impose or self-regulate its marketing
activities through a code of conduct.

So there are some efforts going on across the pharmaceutical industry and
certainly at the state level to try to curb some of these marketing concerns
that are really the core basis of this argument, rather than taking the kind
of approach being taken in this law in New Hampshire and potentially creating
problems with privacy law interoperability.

So slide six, Manatt works with a number of different plaintiffs including
IMS and Verispan to really pull together an amici brief made up of amici who
are representative of pretty highly notable organizations that are working to
improve health care through the use of health care information technology and
through quality improvement efforts with the goal of looking at value-based
improvements in the health care field.

So these groups include the eHealth Initiative, NAHIT, Surescripts, the
Washington Legal Foundation, Wolters Kluwer, and the Coalition for Healthcare
Communication. There are others who have also expressed interest in the
continuation of this amici work in the New Hampshire appeal. So this is really
starting to draw a very interesting collection of organizations that see the
danger being created by this kind of law.

Next slide, please. The main amici concern is that the goal for the health
care field overall is to be able to monitor physician activity and to actually
be able to reward physicians through performance mechanisms. So although Sean
highlighted the way in which he perceives the pharma industry looking at that
type of monitoring and rewarding mechanism, that's in fact the very mechanism
that the health care industry is looking to use, not just with physicians but
to understand how devices and technologies are performing on individual
patients.

So the amici are really looking at that as a goal in saying that this type
of law that restricts use of prescriber-level identifiable data really could
put at risk the quality monitoring activities, a lot of clinical research
activities, certainly our public health surveillance activities and post-market
drug surveillance if we don’t necessarily have the information at the
prescriber level. Slide eight, please.

So as Sean mentioned to a certain extent, the initial ruling was that the
judge said that the law really improperly restricts commercial speech, and
there are a number of things that I think are important to highlight about what
the judge said in the first round of this.

Essentially, there was no evidence of any kind of coercion or intrusion of
doctors. So the evidence that was brought there was not significant. The
state's case failed to show that the law would promote public health. So there
was a very weak connection between how this law would really help improve the
health of New Hampshire citizens.

The state also failed to show that the law would appropriately control health
care costs through this type of mechanism, and, I think most importantly, that
it would control health care costs without compromising patient care. So that
case was not strong enough.

Furthermore, and interestingly enough, the state’s experts acknowledged that
the pharmaceutical detailing practices actually can provide public health
benefits. So even though there might be a public perception that that’s not
necessarily appropriate, in many cases it does provide public health benefit,
and that’s something that we should all be striving for.

Lastly, the court noted that there really are alternatives to this kind of
law, and I think this is probably one of the more important points to take
action on, that there are alternatives out there, and some alternatives are
starting to be employed, and that this may not really be the best path to
accomplish the goals that are being pursued by this law.

Okay, slide nine, please. Just to highlight a few comments I made earlier,
this has not been an insignificant effort at the state level to get several
states in the Union to propose similar legislation. Between 2001 and 2006, 17
states introduced bills to restrict physician prescribing data, and in 2007
more than 20 states considered data restrictions.

At this point, 19 states have rejected the legislation to date, and
Massachusetts is scheduled to consider legislation in September. As you also
heard, Vermont and Maine have been pretty active in this area. Maine passed a
law extending the current state prohibition on the sale of prescription drug
information in June of 2007, and, as mentioned, the law includes an opt out
provision for prescribers that can be designated when renewing their license.

And also in June, an active month on this topic, Vermont also created a new
prescriber data sharing program requiring a prescriber to opt in or get consent
for his or her identifying information to be used for the purposes other than
pharmacy reimbursement and some of the other regulatory purposes. So both of
these laws take effect January 1, 2008.

So in all, you know, three states have been very active in this. Over 20
states have considered this. The story is certainly not over. The story’s
definitely just beginning, and I think that all eyes are on New England,
frankly, to see the direction in which this law goes. Slide ten, please.

So I don't have to educate you, but as many of you know, the efforts that are
going on to create technical interoperability between and among health care
stakeholders in our system are also striving to create some sort of policy
interoperability so that information can flow for the benefit of the patient
and be consumer centric. And although this is a very consumer centric movement,
the ability to actually take action and improve care for consumers in America,
this type of model, is really predicated on understanding what's happening at
the point of care. That point of care activity doesn't just include prescribing
behavior, but frankly prescribing behavior is one of the more important aspects
of what's happening at that point of care with a patient.

So we really believe that this law puts at risk a lot of the good efforts
that are going on in this movement. Slide 11, please.

We tried to highlight two points on the slide. On the left hand part of the
slide, patient de-identified data is being used today, or could be used in the
future, in very beneficial ways to really look at population health on the
whole and put into place improvements or mechanisms that will help move
population health in a certain direction, looking at system efficiencies and
certainly looking more at institutional level performance.

However, the real goal of what the health care agenda is today is really
trying to get towards a more personalized health care environment. And that
relies on the use of this provider identified yet patient de-identified data to
really be able to address some of the healthcare safety and post-market
surveillance issues and to modify the way in which our reimbursement system is
working today and start to turn that into more of a performance based system.

So, again, a lot of what we're trying to achieve is predicated on
understanding how our clinicians are helping our consumers today, and
prescribing behavior can't necessarily be treated differently than any other
monitoring we might be doing at that point of care.

And lastly, slide 12. From our perspective, there are a number of contracts,
efforts, movements, associations and industry engagements that are really
helping in this area. But it's really just the tip of the iceberg. The HISPC
efforts are clearly focusing on what patient privacy should look like, what it
does look like, and how to achieve more of that policy interoperability, and I
think they are making good progress in doing so.

On the industry side, we are definitely seeing industry leadership in terms
of looking at more responsible data sharing activities, and there are a number
of different projects and programs out there that are really trying to make
this data, even prescriber level data, more identifiable to physicians
themselves as a way to really help them understand what they’re seeing and how
they could be thinking about what’s in the best interest of the patient in a
different way and in a more data driven way.

AMIA as an association is working very hard on the secondary uses issue,
which I think you'll be hearing some about, and they have put out some guidance
principles that we believe should be addressed in a more serious way and really
well understood, not only by the national agenda but also by the many
stakeholders who are aggregating, analyzing and applying data for the
improvement of the health care system.

And lastly, we really believe that the policy and legal framework around
patient and prescriber information really does need to be taken into
consideration and looked at in a more affirmative way. Some more is needed here
to really set the appropriate framework to make this information not only
useful but appropriately used for all of us as Americans.

And that’s all I have today.

MR. REYNOLDS: Julie, thank you very much. We’ll hold our questions until the
end. Next, Steve Labkoff.

MR. LABKOFF: Hi, good morning. My name is Steve Labkoff. I’m Director of
Healthcare Informatics at Pfizer Pharmaceuticals, and I appreciate the
invitation to come and provide additional testimony to this Committee after the
work we did with AMIA back in July. And I’ve been asked today to give a talk
about the initial request around health data protections needed for health
information exchange.

And I took a little bit of a different tack in answering this request, in
that I think most people, when they saw a request like that, might be looking
around the issue of things like ciphering and encryption and how you keep
things safe over the wire.

I took a different tack with it and will talk about access to data and
protecting access to data in health information exchange. And this first
slide, actually, I hope will summarize most of this talk. From a
pharmaceutical research organization's perspective, and I work in the Research
Division of Pfizer, by the way, in Pfizer Global Research and Development in
the Healthcare Informatics Group, when a drug is being created or discovered
and developed, there's a tremendous effort to acquire as much data about that
drug as is possible through the use of clinical trials, randomized controlled
trials and so forth, and that's represented in this graphic by the blue curve
and the integral on that curve.

And as you'll notice, though, when the drug is launched, the slope of that
curve actually doesn't vary a whole lot. And the orange curve that is above
that represents information that's generated about that drug when the public
starts to consume it, in terms of millions and millions of encounters with the
molecule as opposed to Phase III trials, which are usually measured in
thousands.

And the real issue here is that we perceive and believe there is a huge data
gap which we actually don’t have a lot of control over right now. While we try
very hard to understand what’s going on out there for safety, surveillance and
some other issues, that gap represents something that health information
exchange is actually able to help us bridge, and I’ll talk about the kinds of
activities that we can bridge through that in the upcoming slides.

We need to make sure that doctors, patients and regulators are as well
informed as they can be about the use of our products. Pharma is expected now
to find and meet unmet medical needs and to do it faster and safer than we've
ever done it before. The area under the orange curve is represented by
information that is locked in patient charts, mostly in paper, something on
the order of 80-85 percent of it is in paper these days, in laboratory
results, insurance claims and electronic health records, in federal government
claims databases and foreign governments' data sets, and with third party
aggregators. We believe that access to anonymized and aggregated health care
data will be critical to achieving these expectations, especially in the
domains of safety surveillance and evaluation, the development of new
compounds, regulatory requirements, new drug indications, factors affecting
adherence and treatment guidelines, evidence-based medicine and clinical
trials recruitment.

We were asked explicitly to talk about some of the data sources that are in
use in the research arm of our business and have put together a small sample,
there are many other data sets that we procure, and what I’ve eliminated here
on the slide are the names of those data sets and the organizations that use
them and what they tend to use them for. I don’t have time in the course of the
ten minutes I’ve been allocated to go through this in much detail, but you have
this slide in your packets and can go through it, and I can answer questions
off line.

Just to highlight, I suppose, each of these data sets is de-identified. It
is, generally speaking, aggregated information, and it is used for, as the
right hand side says, drug discovery research, outcomes research, market
analytics, drug development, clinical trials design and clinical education
managers, which are folks who go and do outcome studies in the field with
their clinician partners.

There have been a lot of efforts undertaken in the past couple of years to
understand how these data can be used, especially data from health information
exchange, which is probably the newest and probably the largest growing sector
for this type of data that's out there. The first issue and first project I'll
talk about is one called the Slipstream Project. That was a project undertaken
by Pfizer, AstraZeneca, Wyeth and BMS along with Accenture to examine use cases
of how health information exchange information could be leveraged in the
R&D space.

And two major use cases were generated out of that and were presented at
NCVHS last July; they were on pharmaco-vigilance and how to connect patients
to clinical trials. We also generated detailed functional requirements related
to how to use this data for clinical research.

We also then, late last year and early this year, did a large scale project
within Pfizer, and it was just submitted to JAMIA for publication as late as
yesterday; we submitted it yesterday to make sure I could say it had been
submitted. The project was called Electronic Health Records and Clinical
Research. We interviewed 35 senior leaders within the R&D space at Pfizer
and also in safety, outcomes and clinical operations and asked them: if you
had access to clinical patient data from health information exchange, what
could you use it for in your day-to-day operations, and how could it be used
in a way to speed up your business processes or make them faster, more
effective, cheaper, whatever. And they generated 14 use case categories and 42
specific use cases whereby these data, if utilized, could enhance or speed up
portions of their business processes. The major bullets here include clinical
trials, outcomes research, the audit of medication, work flow, disease
modeling, safety, support of the regulatory approval process and clinical
epidemiology. And that paper hopefully will be accepted for publication and
will be widely available within a short period of time.

Just as an ancillary piece of that project, we surveyed 15 of the 38 CCHIT
certified ambulatory care EHR programs that were on the market at the time of
the study, which was December of 2006, and we asked them to look at the 14 use
case categories and to self-rate whether they could actively do the use cases
or provide data for those use cases. And as you'll notice, less than half of
them were able to do most of these use cases, and in many cases we believe
they are highly over ranked or over rated. For example, there was a question
about supporting the clinical regulatory approval process, and five companies
claimed that they could support that, but no company that we're aware of in
our research actually supports 21 C.F.R. Part 11 for the clinical audits
necessary for regulatory submissions, and yet five companies claimed that they
could do that to that degree, although they weren't certified.

The summary of the findings here is that there seem to be very significant
opportunities for EHR population health utilization in the research arm of
pharmaceuticals. Senior management sees the top use cases as clinical trial
improvement, drug safety and surveillance, retrospective analyses for
understanding disease mechanisms, observational studies in epidemiology,
outcomes research, and Phase IV clinical trials.

Senior management is also concerned about data quality and data
completeness, spurious associations that can be made from these data, false
positives in adverse event detection, and independent analysis that lacks
appropriate context. This has led to a project that some in this room are
actually very familiar with, a project that we're doing to actually test this
data in one of those domains, the clinical safety domain, and it's a project
that's being spearheaded by the eHealth Initiative partnered with Johnson
& Johnson, Pfizer and Eli Lilly, the Indiana Health Data Exchange and
Partners Healthcare in Boston. We've generated three use cases to look at how
data in health information exchange could be used to understand clinical
signals of known clinical events, see how that data can be interpreted to find
these events, and better understand how they would be identified in these
health data sets.

One use case is the use of statins and laboratory results, basically
aberrant liver function tests; another is Warfarin-related bleeding
abnormalities; and a third is documenting how designated medical events, which
are about 30 very serious adverse events, can be identified in clinical health
data from electronic health records.
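
A minimal sketch of the kind of signal query the first use case implies, flagging patients on a statin with an aberrant liver function test; the threshold, field names and data are assumptions, not the study's actual criteria.

    # Illustrative only: flag patients on a statin whose ALT exceeds a multiple
    # of an assumed upper limit of normal. Not the project's real criteria.
    ALT_UPPER_LIMIT = 40  # assumed lab upper limit of normal, in U/L

    def aberrant_lft_signals(medications, lab_results, multiple=3):
        """Return ids of patients on a statin with ALT above multiple x upper limit."""
        on_statin = {m["patient_id"] for m in medications
                     if m["drug_class"] == "statin"}
        return sorted({r["patient_id"] for r in lab_results
                       if r["patient_id"] in on_statin
                       and r["test"] == "ALT"
                       and r["value"] > multiple * ALT_UPPER_LIMIT})

    meds = [{"patient_id": 1, "drug_class": "statin"},
            {"patient_id": 2, "drug_class": "statin"}]
    labs = [{"patient_id": 1, "test": "ALT", "value": 150},
            {"patient_id": 2, "test": "ALT", "value": 35}]
    print(aberrant_lft_signals(meds, labs))  # [1]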

The basic reason to do this is to study the value and utilization of EHR data
for signal detection. It's something that people have been talking about for a
great deal of time but hasn't really been qualified or quantified to a
significant extent. So we're engaging in this particular study.

So the last question, getting back to the initial ask of the Committee,
which was to look around for protections: I asked the question, what needs
protecting? Well, we absolutely need to protect the community trust that data
won't be mishandled. That covers safety, privacy, a patient's identity,
confidentiality and any specific identifying information.

We do also need to protect the ability to do business. Privacy is not
compromised in aggregated health data situations. There are commercial and
academic research processes that need to take place for the advancement of
medicine in our country and in the world, and there are regulatory demands that
need to be fulfilled for the business of getting these products to market.

Some suggested protections: promoting the testing of the intrinsic value of
this data. More projects similar to the EHR project I just presented would be
something that we would absolutely support. Putting into place an organization
to moderate data stewardship, such as the national health data stewardship
entity proposed by AHRQ, would be something that we would support.

Addressing privacy and confidentiality issues with various checks and
balances through that process. Protecting access to data, with legislation
that protects access to anonymized health care data for research, both
commercial and academic. And lastly, endorsing the AMIA, the American Medical
Informatics Association's, stewardship principles and data analytics
principles would be something that we would also support. And what we're
trying to get to here is to try and bridge that gap, to try and understand
what the data inside that orange curve represents and how it could be used to
generate new drugs and bring them to market for patients in this country and
the world.

And with that, I want to say thank you and mine the gap.

MR. REYNOLDS: Thank you very much. Micky, if you would go ahead and proceed.
Micky, are you still with us?

MR. TRIPATHI: Yes, I’m still here.

MR. REYNOLDS: Yes, we’re getting your slides up, and if you would just say
next slide when you want the next one, please.

MR. TRIPATHI: Okay, great. Well, thank you for the opportunity to talk about
the Mass eHealth Collaborative. I think my presentation's going to be quite
different than the others on the panel, and I'm going to really be speaking
about what it is we're doing, what infrastructure we've put on the ground, and
what privacy policies we've built in as part of the health information
exchanges that we're launching.

So the first slide, please. I'll just give a couple slides as background,
because I think it's important to understand what it is we're doing and who we
are in order to understand all of the policies that we've rolled out.

So first up, you should be on slide one that says MAeHC Roots on the top.
The eHealth Collaborative was formed so this – you have this up on slides so
the animation is there. I just want to make sure I know which one –

MR. REYNOLDS: Yes.

MR. TRIPATHI: Hello, I just want to make sure that I’m seeing what you’re
seeing. Some of the slides have animation, so I’m not sure when I should say
next slide. You should be seeing a full slide there that says ACP and Blue
Cross on the left and MEHC on the right.

MR. REYNOLDS: Yes.

MR. TRIPATHI: The eHealth Collaborative was formed in September 2004. We were
launched with a $50 million financial commitment from Blue Cross Blue Shield of
Massachusetts, and really with a project plan and our intellectual roots in
some work by the Massachusetts Chapter of the American College of Physicians,
which was led at the time by Dr. Alan Goroll and Dr. David Bates from the
Brigham. As I said, we were formed in 2004. We are backed by 34 leading
non-profit healthcare organizations in Massachusetts. Next slide, please.

Slide two is the organizations representing the collaborative. I won’t spend
any time on this unless anyone has any questions after. Slide three should say
Pilot Selections, with a picture of the map of Massachusetts. The $50 million
project involved selecting three communities in the State of Massachusetts to
essentially be, colloquially, wired for health care, and I’ll describe that
project in a minute. But we invited any community in Massachusetts to apply to
be one of these three pilot projects. The red dots on the map there depict the 35
communities who responded to the application, and we chose three communities to
partake in these pilot projects: Brockton, which is in the bottom right there,
the three yellow stars; Newburyport, which is up on the top right; and North
Adams, which is way out on the left on the border of Vermont and New York. Next
slide, please.

The pilot projects have four main pieces. I’ll just quickly talk
about the bottom two because those really set up what we’re going to talk
about here. First, in each of the three communities, just to give you
a sense of the scope of this, roughly 450 physicians are participating
in the project across all three communities, and they practice in roughly 200 office
locations. Add mid-level providers on top of those 450 physicians, and you get
roughly 550 clinicians practicing in 200 office locations.

And what we’re doing in the pilot projects is first we’re paying for and
implementing our electronic medical records in each of those office locations.
So we’re providing the hardware, the software, the implementation services, the
post-implementation services, work flow design consultation, all of that to get
them up and running on electronic health record systems.

And then the second box there which says connectivity, we’re creating three
stand alone health information exchanges in each of those three communities for
the exchange of patient identified information in real time for treatment
purposes, accessible at the point of care.

This is slide five, I believe; it should say MAeHC on the top right.

MR. REYNOLDS: Yes, we are.

MR. TRIPATHI: The way we’ve constructed these health information exchanges,
and this is a graphic that was developed in the North Adams community, is to
have a subset of the information that is resident in each of the individual
EMRs extracted and then merged together into
what it says here, the Community eHealth Summary, which is essentially a
patient-centered repository in each of these three communities.

So on the left panel, you see the doctor’s office record; those are
basically the elements of the EMR record in each of those practices that
stay in the practice, stay at the practice level, and will never be shared in
that Community eHealth Summary. On the bottom left, you see the eHealth
Summary; those are the items in the EHR that are extracted from each of those
EHRs, put forward into the Community eHealth Summary and then merged
for a patient-centric view from all the source systems in the community. Next
slide, please.

So this slide should say eHealth Collaborative Architecture and Data Flows.
There’s some animation, so if you could click it once. What I show on the slide
is the flow of all of the data that’s moving through the eHealth
Collaborative project. So we have provider level electronic health records. We’re
deploying four: NextGen, Allscripts, GE and eClinicalWorks, and then we have
a couple of legacy EMRs that were there when we started. One practice is on
Physician Microsystems, which is now McKesson, and the others have e-MDs. If you
can click the slide again, please, it should say Community Level HIE. Those are
rolling up into three stand-alone health information exchanges, as I said, one
in Brockton, one in Newburyport, one in North Adams. eClinicalWorks is
running the one in North Adams, and WellLogic, which is another integration
vendor, is putting those together in Brockton and Newburyport. Next slide,
please, or click again, please. It should say MAEHC Level QDW. Those three
health information exchanges are feeding a quality data warehouse that the eHealth
Collaborative is using. If you could click it once more, please, we should be up
at the top where it says MAEHC Level Analysis. The eHealth Collaborative is
creating that quality data warehouse for two purposes: (1) for providing
benchmarking data along nationally recognized quality measures, basically the
AQA recommended starter set, providing that benchmarking data back to the
physicians participating in the projects so that they can see themselves
benchmarked along those quality measures; and (2) we’re doing a whole series
of outcome analyses on the connection between quality and health IT as a part of
the research project that we have.

The quality data warehouse is fed, maybe I can describe this in the next
slide, I think, yes, if you could advance the slide, please. It should say
Quality Data Warehouse Privacy Approach, slide seven, and if you could click it
once.

As I described, those health information exchanges are consent-based patient
identified data. So each of those is a repository. It is consent based, and
I’ll describe our consent policy in a couple slides here. But that is patient
identified data that, as I said, is live data being used or will be used for
treatment purposes. If you could click the slide again, please.

What we are doing is extracting out of those health information exchanges
limited data sets in HIPAA terms with no facial identifiers for the quality
data warehouse that we’re building. And then if you could click it one last
time, please, we are assigning random number identifiers that are unique to the
patient with the key held by each of the health information exchanges for
individual re-identification if necessary. And the reason we’re doing that is
because we’re providing this benchmarking data back to physicians and want to
be able to give the physicians feedback information so that they can use that
to improve quality of care.

The experience of a number of quality organizations is that unless you do
that, it is very difficult for physicians to act on de-identified quality
metrics that are fed back to them. We tried to balance the need for and
the desire for having quality data and having this type of secondary use of
data, but without having multiple repositories of PHI floating all over the
place. So that’s, you know, this is the solution that we’ve come up with to try
to manage that.

MR. REYNOLDS: Micky, what’s supposed to be on the screen now? I’m not sure
we’re keeping up with all the lines.

MR. TRIPATHI: We should be on slide seven.

MR. REYNOLDS: Right.

MR. TRIPATHI: And it should say “Quality Data Warehouse Privacy
Approach.”

DR. COHN: I guess we missed the arrows sort of going through. Can you go
back –

MR. TRIPATHI: Oh, sorry, I think if you click it, there’s some animation
that rolls up.

MR. REYNOLDS: So which one are you on right now? How many arrows do you see
on yours?

MR. TRIPATHI: I’ve actually already gone all the way through it, but there
should be one that says Consent Based Patient Identified Data.

MR. REYNOLDS: We’re good now. We’re good. We’re set, thank you.

MR. TRIPATHI: Okay, all right. So basically the point is, you have patient
identified data in the health information exchanges, those three cans there,
and then we extract from that patient identified information a limited data set
that populates the quality data warehouse, but we are able to re-identify
as necessary back through the health information
exchanges to provide that information back to the physician. The managers of
the quality data warehouse never have those patient
identifiers, so they and the researchers are not able to re-identify it; only
back through the health information exchange and back to the physician office
where the re-identification happens.
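
What follows is a minimal sketch of the pseudonymization arrangement described above, with hypothetical field names (patient_id, dx_codes, measure_results): the HIE assigns each patient one random study identifier and holds the only table mapping it back, so the quality data warehouse and its researchers never see the key.

```python
import secrets

class HIEPseudonymizer:
    """Hypothetical sketch: the HIE assigns each patient one random study ID
    and keeps the only table mapping study IDs back to patient identity."""

    def __init__(self):
        self._patient_to_study = {}   # patient_id -> study_id
        self._study_to_patient = {}   # study_id -> patient_id (the "key" held at the HIE)

    def study_id_for(self, patient_id):
        """Return the patient's persistent random identifier, creating one if needed."""
        if patient_id not in self._patient_to_study:
            sid = secrets.token_hex(16)
            self._patient_to_study[patient_id] = sid
            self._study_to_patient[sid] = patient_id
        return self._patient_to_study[patient_id]

    def to_limited_data_set(self, record):
        """Strip facial identifiers; only limited-data-set fields leave the HIE."""
        return {
            "study_id": self.study_id_for(record["patient_id"]),
            "dx_codes": record["dx_codes"],
            "measure_results": record["measure_results"],
        }

    def reidentify(self, study_id):
        """Only the HIE can walk a study ID back to the patient, e.g. to route
        benchmarking feedback back to the treating physician's office."""
        return self._study_to_patient.get(study_id)

# The quality data warehouse receives only to_limited_data_set(...) output,
# so warehouse managers and researchers cannot re-identify anyone themselves.
```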

Next slide, please. So as I said, these health information exchanges are
permission based, and the next couple of slides, I’ll just describe that and
then I’ll stop.

The eHealth Collaborative’s privacy approach is really based on sort
of a couple of foundational principles here. First, we needed to decide what the
patient notification and consent was going to be. It’s certainly not required
for stand-alone electronic health records, but we were
creating these health information exchanges that are exchanging data across
legal entities. So we needed to do something.

In Massachusetts, I should say that our legal counsel is McDermott Will
&amp; Emery, who provided very valuable guidance in this, and in Massachusetts
our consent requirement preempts HIPAA along certain dimensions. The
general consent principle is based on case law, Alberts v. Devine, which
found that an affirmative consent is required before disclosure of
information to another legal entity, and a second affirmative consent is
required for disclosure of sensitive information.

And there’s also a specific statute that requires specific permission for
disclosure of results of certain genetic and HIV tests, and those tests are
specified in the statute.

And you know, importantly the HIPAA notice of privacy practices does not
count for the Massachusetts consent. So there is a separate consent that’s
required in Massachusetts today.

And certainly certain types of data exchange already happen today under the
prevailing consent process. It happens by fax, phone, mail, email. We know it
happens all the time, and in the ambulatory setting these exchanges are almost
always point to point, stemming from discrete episodes of care, with physicians
directly involved in the treatment for that episode of care.

And so, you know, as it happens today, consent is already being gotten for
those point to point exchanges. Next slide, please.

However, the health information exchange and particularly with a repository
is a new type of exchange, I would argue, and therefore we felt that we needed
a new consent for that, that we couldn’t essentially piggyback on the consents
that are already out there.

And then really in particular, the things that we thought were qualitatively
different were (1) about who can access this, that it is persistent data held
at the center, there’s no person in the loop, and it’s not related to a
specific episode of care. So any authorized users on the network, for example,
not just those directly involved in treatment in an episode of care will be
able to access this data.

It is any data from any of the authorized sources, I mean, from that list
that I described earlier. And then there’s when it is available: it’s available any
time, not just during the time period of a specific episode of care. So for
those reasons, we felt that we needed to have a new consent process for this.
The core features of the consent policy are that it’s an opt-in
approach where consent is obtained at the point of disclosure. So essentially
it’s at the legal entity that consent is required before the data is disclosed
through the network, and we have a slightly different model across the
communities. In Brockton and Newburyport, that consent is obtained entity by
entity, and those are legal entities. So that means that a patient
will be asked for permission to disclose their information held at a given
legal entity to the network at each of those entities, and that gives the
patient the opportunity to say yes to a primary care practice, for example, and
no to a psychiatric practice, for example. So it would allow a patient to have
that opportunity to say yes and no by entity.

In North Adams, there is a global consent that’s really a function of the
tightness of that community. Essentially we were able, on one consent form and
by the agreement of all of the providers in that community, to have a single
consent that in effect deputizes every one of those practices, every one of
those legal entities, to get consent on behalf of every other legal
entity in the community. And a patient, when presented with that consent, is
able to see on one form every single practice that is participating in it. So
they can either opt in or opt out based on being able to see, you know,
every entity that would be contributing data and, on the other side, accessing
data in the repository.

The period of consent essentially covers all episodes of care until a change
in opt-in status or until the consent expires, which will be in two years, at
which point we would reconsent. And what I mean by that is that this is persistent
data held at the center. So we’re getting a one-time consent up front that is an
opt-in, and from that point forward all of the patient’s information will flow
into the network and this repository until the patient changes their status or
the consent expires, at which point we will refresh the consent with the patient.

The next slide really is just a schematic depiction of that. I don’t know
that we need to know that or go through that in the interest of time. So I
would ask that we just skip to slide 11 which is a bar chart showing opt in
rate, and this will be my last slide.

Here, I’ve given the opt in rate for North Adams which is the first
community. Their health information exchange is up and running. It went live in
May, and as you can see depicted there, the opt in rate is 94 percent across
the community. Each of those bars represents the opt in rate for an individual
practice in the community, just to give you a sense of the variation there. For
some of the practices over on the right, what you see in that variation, that
practice that’s at 50 percent, there were only seven patients asked there.
So there’s a small sample issue there. But you tend to have more of the
specialists on the right and more of the primary care on the left. The
specialists in general seem to have a lower opt in rate because they are
specialists in essentially a rural community. So they’re seeing people from the
whole western part of the state some of whom really aren’t a part of the North
Adams community, and therefore they end up, they tend to opt out. That’s our
sense of who’s not opting in.

But let me stop here and look forward to discussion going forward.

MR. REYNOLDS: Okay, I’d really like to thank everybody, first, for your
crispness and second for the richness of your data. So with that, I’m going to
start first with Mark Rothstein and then see who else has questions.

MR. ROTHSTEIN: Thank you, Harry. I want to commend all four of our panel
members, and I’m sure the rest of the group has lots of questions. But I will
just limit mine to one, of course, and I want to talk about the New Hampshire
case or ask a question about it and try to relate that case to what it is that
this working group and this Committee is doing.

Whereas the New Hampshire case really pits the state and the docs against
the pharmaceutical industry and the information vendors, our cut is different.
We are trying to balance and accommodate the privacy and confidentiality
interests of patients with all the beneficial purposes of disclosure of
information in terms of quality improvement and the like. So it’s a somewhat
different cut. But with regard to the case itself, and I’ve tried to phrase my
question in the positive and the negative, and it won’t work in the positive,
so I’m just going to have to give it to you in the negative which won’t
surprise too many people.

Should patients have a right not to have their health information used for
marketing purposes without their knowledge or permission, even when the
marketing is directed at a third party and even when their information is
disclosed in de-identified or anonymized or some not readily detectable form?
And that’s not an issue in the case, but it is an issue for us.

MR. FLYNN: This is Sean. I’m happy to respond to that, and I’m sure Julie
would like to respond in a different fashion. As I tried to incorporate into my
presentation, we believe there is a patient interest at issue here, and it will
be raised in this case.

Although HIPAA requires that pharmaceutical records, prescription records, be
devoid of a name, they are not devoid of who is treating that person and
exactly what conditions and treatment that person is receiving. It’s not very
hard to go from there. I don’t know, there are allegations, but I don’t know
whether it’s true or not, that companies are able to actually use that information to
identify people.

After HIPAA, there have been individuals who have received drug marketing to
their homes, and there has been a concern that was expressed in the legislative
record that HIPAA has not been adequate to actually keep patients from being
identified.

But even if they’re not identified, they are being targeted. You know, a
patient as an individual is being targeted for a change in their treatment. And
whether they’re listed by John Smith or number 2207, their treatment is being
affected by individualized marketing based on an observation of their treatment
by a for-profit entity that’s not necessarily interested in their best
treatment. They’re interested in what’s going to sell the most drugs.

So I think there is a huge patient interest, and that’s part of the argument
of why you can’t just take the patient name off, you have to take the
prescriber name off as well because it’s, when you have the prescriber
identified, now you can target an individual office.

MS. MURCHINSON: Sean, I assume you’re done.

MR. FLYNN: Yes.

MS. MURCHINSON: This is Julie. I would just say that if I understood the
question correctly, it was definitely a good convoluted question. I think
patients’ rights should be vested in their interest in having the best care, and
in order for us to have a consumer driven health care environment, consumers
need to have more legal rights to their information so that they can direct
their information to be stored where they’d like it to be stored and to be used
how it should be used.

So in that context, I believe consumers should be interested in this law not
preventing them from receiving the best care. So whether it’s post-market
surveillance activities or their doctor being informed of the best drug or
device that they could be utilizing for who they are as patients, you know, as
we develop more information about patients, perceivably that is a future
reality.

You know, even as was pointed out by the people from Pfizer, this
relationship between clinical research and patient information, I think
patients should have a right that helps them make sure that their care will be
the best. Does that address your question?

MR. ROTHSTEIN: No, but that will do just fine, thank you.

MS. MURCHINSON: Sorry.

MR. REYNOLDS: Micky, I can’t see your hand if you wanted to make any comment
on any of this.

MR. TRIPATHI: No, I didn’t have any comment.

MR. REYNOLDS: Okay, good. Justine.

DR. CARR: Thank you. This is a question for Micky. You showed at the end the
opt in, the North Adams experience. Was it different in the other two
communities where they could differentially opt out?

MR. TRIPATHI: Yes, we haven’t started the consent process yet in the other
two communities. We’re going to start it in the fall in September.

DR. CARR: Okay, thank you.

MR. REYNOLDS: Okay, Simon.

DR. COHN: You know, I think that I first of all want to start out by making
a disclosure, too. Somehow when we get in these conversations, you don’t always
think about them when you begin the day. But I think as you all know, I work
for Kaiser Permanente, and I just want to comment that Kaiser Permanente as a
general rule the Permanente medical groups do not allow drug detailing. So just
put that on the table. Though certainly that does not in any way impact state
legislation or opinions on all of that.

Julie, I actually had a question sort of for you, and I just wanted to
better understand a couple of comments you made as well as
exactly the positions you represent. And I will apologize, I am not an
attorney, so I’m not quite so nuanced as some of the others are in terms of
looking through the presentation.

And I really had two sort of separate questions. Number one is, is your
position, are you representing also the plaintiffs’ positions in all of this
work, or are you just representing the amici briefs or whatever in the comments?
That’s number one.

And number two is you made a comment that I just wanted to better understand
it. It seemed to indicate that you felt that limitation on secondary uses of
data such as was being proposed or discussed appear to have a chilling effect
on health information exchanges, and I just wanted to understand a little
better your views about the chilling effect on that. And if I’m overstating
your comments, please let me know.

MS. MURCHINSON: Sure. Let me say first we technically are representing the
amici and working on behalf of the plaintiff, just to clarify the relationship.
Does that address question number one?

DR. COHN: Yes, thank you. As I said, I just couldn’t tell from your overheads
exactly the relationship there. So thank you.

MS. MURCHINSON: And your number two question was about why we believe that
some of these secondary uses of data, if this law were in fact passed, could
have a chilling effect on HIEs, is that what your question was?

DR. COHN: Yes.

MS. MURCHINSON: So sure, you know, from our perspective, it comes back to
privacy and security at the core. If the law really enables physician
privacy of information and creates different access to information across
different state lines, then the privacy and security policies, if you will, will
not be as interoperable as they could be to facilitate health information
exchange. I think that’s point number one.

Point number two is that there are a number of activities that we believe
can be had by using patient de-identified, prescriber identifiable data that get
down to the level of physician or clinician behavior, that allow for pay for
performance and allow for specific actions or activities that can be
supportive of transparency of quality and cost related to what’s happening at
the point of care, and that prescriber identifiable information is important to
be able to not only know and monitor that, but to take action on that. Does that
answer your question?

DR. COHN: Sure, but I just want to comment also. Thank you. Please.

MR. FLYNN: Thank you. I just wanted to respond to a little of this. There is
actually a key position that Julie took that I actually agree with. There’s a
lot that Julie said that I agree with, and it’s always a concern to make sure
you’re addressing the unintended consequences of legislation.

But there’s one point I want to take up with her. On page five, she mentions, I
mean, a core part of the argument she was raising is this idea that we,
speaking for her for a moment, we don’t want this idea of physician privacy to
become a barrier to quality control or monitoring cost effectiveness or
monitoring quality, making sure that doctors are using evidence-based prescribing
techniques, et cetera. And this is a perfect example coming from you. Kaiser
doesn’t allow pharmaceutical marketers to be a part of that process, but Kaiser
monitors its own physicians and their prescribing practices to make sure that
they’re using the best evidence-based practices, and we completely agree with
that.

And also, none of my clients want to erect a Chinese wall between doctors
and all the different authorities that may want to monitor their practices for
various health-based, evidence-based prescribing purposes. The difference is we
don’t believe the pharmaceutical companies should be part of that chain, not on
an individual monitoring, individual prescriber basis. Their interests are not
perfectly aligned to promote evidence-based medicine in this respect. And that’s
a decision that organizations like Kaiser have taken, but it’s very
difficult to take for a large number of partners that are not part of a similar
organization. So –

MS. MURCHINSON: And I’ll just say one more thing on Sean’s comment. Thank
you, Sean, and I think that one of the issues is, you know, there are a number
of other solutions that have been considered or put in place to address some of
the concerns that come from aspects of detailing activities.

So I think part of the amici position is that this law is in fact not
necessarily the best way to address some of those concerns.

MR. REYNOLDS: Okay, Micky, I’m going to ask the last question of you, and
then we’ll move on to the next panel.

So 94 percent is pretty impressive as far as opting in. Who actually talks
to the patient, what kind of a document is involved, and is it really, are they
really – this is not a challenging question; this is a probing question.
Do they really understand what they’re signing, or is it kind of an
easy thing to do? You know, we all deal with the HIPAA privacy notice, and we all
sign it to get surgery. That doesn’t mean we understood it. You opt in, and so
it’s just a question.

MR. TRIPATHI: Yes, no, I think that’s an excellent question, because we spent
a very long time working through how you make this an educated process. It
isn’t going to meet the standards of informed consent as we know that term
from clinical trials and things like that. I mean, it’s certainly not going to
meet that standard.

But we want it to be more educated than what we all typically do with the
HIPAA notification which is a sign and move on like you do with your mortgage
documents. So we created a set of educational materials that go along with
this. We’ve posted those on the website. We’ve had a series of community events
on the radio and other public type events, and the actual opt in process
happens at the registration desk of any of the practices where the brochures
are there and the actual consents are there, and we’ve had community training
sessions for all the front desk personnel for walking them through how the
consent process should work, what the background is on this, and giving them
FAQs and various other materials to help inform patients.

So the typical process is that a patient will walk in. They’ll be handed the
consent and the brochure, and some of them will have follow-up questions which
they can pursue with the front desk staff, or sometimes they’ll pursue it with
the doctor, and we’ve trained all the physicians as well and informed them that
we expect them to have the discussion with their patient if the patient wants
to carry it further beyond that. So we’ve had sort of multiple layers on that.
In Brockton and Newburyport, which are a little bit bigger communities, we’ve
actually hired a professional branding firm to help us with the educational
materials, to be sure they will be read.

MR. REYNOLDS: Well, thank you very much. Mary Jo, did you have a question,
or did you have a comment?

DR. DEERING: A very quick request, actually. I’m Mary Jo Deering. We’re
actually having a panel this afternoon that gets at communication, and I’m
thinking it might be very interesting to see your materials. And I personally,
and I suspect the Workgroup, would really love to see a sample of your
educational materials and the consent form. I’m going to say the same to
Monica. You said that you have a pretty strict brief that people have to follow
when they’re communicating. So I think collecting examples of what has been
proven effective in terms of the opt in, opt out decision might be useful.

MS. JONES: I’ve downloaded a couple and brought them with me. So you can pick
them up.

MR. TRIPATHI: I couldn’t hear that well, but what I heard was an interest
in seeing those materials. I’d be happy to provide them.

MR. REYNOLDS: Yes, that would be great. Well, listen, again, thank you to
this – yes, Micky, we’ll have an email sent to you on what we’ve
requested, and that would be great. Or you’re saying you want to send it, okay,
good. I’d like to again thank this panel. I really appreciate it.

I’d like to move on next to Latanya Sweeney from Carnegie Mellon University
on de-identification. Ready to go? Good, thank you.

Agenda Item: De-Identification

DR. SWEENEY: So my name is Latanya Sweeney. I want to thank you all for
allowing me this opportunity to speak with you. I know that you don’t have
printed copies of this. One of the reasons is that, as you’ll see, a lot of my
slides are graphic in nature, and rather than just fill in the gap with something,
I think it would be nicer if I provided the graphics. So I’ll put them on my website,
which will be at this address there, and you’ll be able to download them.
I’ll also make them available to whomever you tell me to distribute them to.

So what I want to do, the bottom line of what I want to share with you today,
is what privacy vulnerabilities we know actually exist in the secondary sharing
of personal health data, what kinds of solutions we know work and what their
limits are, and why consent is horrible in today’s setting.

First of all, I want to thank you again for having me. The overarching
question that sort of guides the work that we do is how can we share data with
guarantees of privacy protection while the data remains useful.

We’re not an advocacy group. I’m a computer scientist by training. I work in
the Computer Science School at Carnegie Mellon, and I run a lab where my job is
really one of data intelligence. That is, we’re really good at figuring out
what kinds of information can be strategically learned out of data, and we sort
of do that. We call it data detective work, and then if we’re really good about
learning sensitive information out of data, we can often advise how to build
technologies so that in fact we can control what can be learned, i.e., privacy.

We’ve had the wonderful opportunity to work in the real world environment
over all kinds of information, basically every major problem society has had
has somehow found its way to our door mainly because a lot of our work is
funded by people, by companies actually, not the government who actually have a
burning need, an immediate need to solve a privacy problem.

This is just some of the team members whose work I’ll talk about. So I
didn’t do it all myself.

One of the things that we found in a lot of our work is that a lot of times
in these kinds of conversations and discussions, there’s sort of the idea that
in order for me to have privacy, the data can’t be very useful, or for the data
to be useful, I have to give up a lot of privacy.

Now we’ve been able to show a lot of the environments you just —

MR. REYNOLDS: Excuse me, we’ve got people on the Internet and on the phone,
and the further you get away, the – you’re telling them good stuff,
they’re not hearing it right.

DR. SWEENEY: Wow, that’s cool. What we’ve been able to find is a real sweet
spot where we’ve been able to show how it is people can have both the privacy
and the data that they’re looking for. And so I’d like to give you a couple of
examples of those today.

The way I’ve organized the rest of the talk is, first I want to talk a little
bit about anonymity versus de-identification, and then jump into the issue of
demographic re-identification, the issue of multi-stage linking, and sort of the
overall lessons learned from those experiences.

So what we mean by re-identification is very clear. We have a person, say
Ann, who goes to a hospital or a particular facility. Information about Ann
gets stored by a data holder. That data holder decides to share
it subsequently. A very common thing nowadays is to remove explicit identifiers,
shown in the diagram by just removing her name, but other information
about her, in this case part of her birth information and her zip code and so
forth, is still shared.

And in prior work, we’ve been able to show how that can be re-identified
through external data sufficient to re-identify Ann that I could actually
contact her. So when I use the word re-identification, I mean that it’s gone
all the way to the point where I started with data that seemed innocent or
innocuous, and I was able to actually re-identify the person that was the
subject of the data.

The first example I want to give you is one that won’t be as charged as
health data because it’s not your area. If I were talking to Homeland Security,
it would be a different issue. But we can learn a lot by looking at this. This is a
question that has come up a lot and that we’ve worked a lot on, not health data,
but all of the issues are exactly the same, and it has to do with video data,
that is, how can we share video data where I keep as many facial details
about you in the video as possible, and yet I can prove that no one could be re-identified?
Assume the face recognition software is just perfect; how can we do that? These
are things like 42nd Street, where we do surveillance.

One of the interesting things that we learned right away is we didn’t think
this was a particularly difficult problem. We said all you got to do is hide
part of the face. We could put bars over eyes or nose and so forth, and it
turned out that trying to do each of these things, these sort of ad hoc things
didn’t work. That is, the face recognition used in its optimal settings was
very robust and would find other features in the face to re-identify people.

So then we said, well, let’s try pixelation, which you see on television
often. We even tried gray scaling. We even tried Mr. Potato Head, which is
basically pasting on other people’s eyes and nose and mouth. Shockingly,
none of these techniques worked either. And in fact, pixelation actually
improved face recognition because it acted like a little additive noise around
the eyes.

So left with that, we had to sort of invoke a new approach, and that was
sort of this idea of how, in real time, we could actually peel the face off
in the video and be able to modify that face. My graphics isn’t working, but
that’s okay.

I see. Now that one works. That just shows you how fast the software can
work. So what we’ve done is basically develop this idea, and we’ll just click all
the way through, where we can take a face and extract the face image. We can
then tweak anything we want to about the face, and then we can re-render it or
morph it back into the video. And what we morph back onto the video in this
particular work is what we call K-anonymized data; that is, we took K similar
faces and we averaged them together. So the faces that you see at the end
aren’t really anyone’s face, any one of these faces, but somehow it’s the
average of all of them.
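
The K-anonymized faces she describes are built by averaging each face with its K most similar faces, so the rendered face belongs to no single person. Below is a minimal sketch of that averaging step, assuming faces are already represented as fixed-length pixel vectors; the published k-Same work adds further bookkeeping that this sketch omits.

```python
import numpy as np

def k_same_average(faces: np.ndarray, k: int) -> np.ndarray:
    """Sketch of the K-averaging idea: replace each face vector with the mean
    of its k most similar faces (including itself), so no output face is any
    single person's face. `faces` is an (n_faces, n_pixels) array."""
    out = np.empty_like(faces, dtype=float)
    for i, face in enumerate(faces):
        dists = np.linalg.norm(faces - face, axis=1)   # distance to every face
        nearest = np.argsort(dists)[:k]                # indices of the k most similar
        out[i] = faces[nearest].mean(axis=0)           # average them together
    return out

# Larger k blends more faces into each output, which is why identifiability
# drops (and, as noted above, the averaged faces tend to look more attractive).
```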

Also we noticed that the larger K gets, the more attractive the face gets,
which is kind of interesting. The uses of this – this actually, ironically,
never started in health data. The uses of this in health data, though, I think
you can see right away. It’s come back to a lot of clinical trial
information, but also to tools that are meant to be used in the home that
require video surveillance for the aging and things like that, where people who
are in the home don’t necessarily want to be videoed.

And so here what we see is basically the face being peeled off of the
person, and then we can see without any other information about them their
appearance, their expressions and so forth. So in the case where it’s showing
up in nursing homes and places like that, this technology allows physicians and
psychologists to be able to understand how much interaction did the person
have, were they being yelled at, that kind of thing, what was the physical
reaction.

Now I want to take this opportunity to talk a little bit about what was
learned from this, especially as it relates to the kinds of interests that we
have in medical data. One of the things is that we labeled this one
de-identified, but we might label it, we could say it’s anonymized, if I could
prove you could never figure out who he is. If I stuck this face right where
the green mesh is, which I can do pretty easily since I know where the green
mesh is and I know how to put that on, the problem is I’m still left with a lot
of other details that aren’t masked, so I might still know him. So the
terminology we tend to use is the difference between de-identified and
anonymous data. Anonymous is clearly a higher standard, one where I could prove
the data can’t be re-identified, whereas de-identified data sort of has that ad
hoc feel to it, that in fact it’s sufficient for our needs, leaving you, of
course, with an effort argument.

So what I’d like to do is now take the lessons learned in sort of a safer
area, safer because it’s not our area of discussion, and then I’ll try to move
that into our area of discussion.

Many of you may sort of know of me from earlier work I did pre-HIPAA
showing that medical data like the kind released at that time in hospital
discharge data, which includes diagnoses, procedures and also basic demographics,
could be linked to population registers, in this example voter lists, to
put back onto the de-identified data the name and address, and all that I
needed to do was use basic demographics to do this re-identification.

And what I was able to show is that in fact in the United States, if you take
date of birth, the month, day and year of birth, plus gender and the five-digit zip
code, that uniquely identifies 87 percent of the population. What this graph
shows is the number of people who live in the five-digit zip
code, and the horizontal axis is the percentage of that population
uniquely identified. So as you would imagine, the fewer the number
of people who live in the zip code, in general, the more identifiable; we see
100 percent of the people being identified. And in general, the larger the
number of people in the zip code, the fewer the people uniquely identified.

But let me point out just a couple of interesting data points. One is
zip code 60623, which is in Chicago, Illinois. It has over 100,000 people living
there. Now even if you aggregate to the HIPAA standard of only using the
two or three-digit zip code, many places in the U.S. still don’t get
you to 100,000 people.

Yet, there aren’t that many people over the age of 55 living there. So those
who do tend to stand out, and as a result they’re easily re-identified.

Another interesting data point is in SUNY, New York. There are only 5,000
people who live there, and they all tend to be between the ages of 19 and 24. I can
tell you lots and lots and lots of things about them, but we can’t figure out who is
who, and that’s because these are all students at the university, and these are
dorms, and they’re so homogeneous a group that I can tell you lots about them,
but I just can’t figure out who I’m talking about, plus they’re very mobile.

So that example, that earlier work, was really important because it showed
first of all the power that demographics can have in re-identification. And the
second thing that it showed, in the second part of the example, is that if
I try to prescribe a remedy through policy, it’s probably not going to be
perfect, because I don’t know whether you live in SUNY, New York versus whether
you live in that zip code in Chicago. So I can’t get it quite right.

But I think there’s a general sense of this demonstrated in this
chart. What you see here is that as I go up vertically, I’m aggregating geography, and
as I go horizontally, I’m aggregating age information. And so the 87 percent
begins to get smaller and smaller as we go up and outward.

And in fact, if you only know county, gender and year of birth in the United
States, there are still some people we can re-identify. Now they tend to be Yogi
Bear in Yellowstone and people like that. But the point is there are still
some people.

We did a study to figure out where does the HIPAA safe harbor come in, and
the HIPAA safe harbor comes in right about 0.04 percent. So it’s the equivalent
of your birth town or the equivalent of county, month and year of birth as an
example.

What’s important to say is that we didn’t expect the HIPAA safe harbor
to be perfect, because as the examples we saw earlier show, perfection is not going
to be possible by prescription. And in order to get precise privacy and precise
utility, to do better than those, we’re going to have to find something better than
these crude statements in policy.

I’ll skip the HIPAA thing; I think you all know. Not only can some people
still be re-identified under it, but the result is also pretty useless. One
technology that we did transition out of the university is a product called
Privacert, and what it basically does is say, okay, I’m going to figure out how many
people could be identified in your data. So you give me a data set, and I’ll figure
out how many people can be re-identified.

And if that number is no more than 0.04 percent, we’ll certify it as HIPAA
compliant. And the reasoning there is simply legal: the safe
harbor has a risk of 0.04 percent, so if you change the fields around and
get some other set of data elements, and you can show that they don’t put the
public at any more risk than 0.04 percent, you’ve done no more harm than the
safe harbor would have done, even though the data elements that you’re asking
for may be ones that are even explicitly prohibited, and that’s the view we take of it.

So that gives us a sense, first from the face identification, of anonymity
versus de-identification, and of how basic demographics work, for which the policy
solution is sort of HIPAA. HIPAA sort of came in and said we’re going to
address explicit identification.

So the question is, is that good enough, and how does that carry on? I want
to introduce a notion of how we measure identifiability. So we have a
population that consists of these six guys. I don’t know if they’ll have a next
generation, and then we release some people out of the population, shown by
these images.

One other person, even though you’re not able to see the colors, is an exact
match; he turns out to be both green and with the same shape head, where this guy
would be ambiguous. Unfortunately, the colors didn’t come through, so you can’t
tell.

But the point that we’re trying to make is that by masking this guy and
having only his profile and sort of the overall coloring to go by, he would
relate to only one person, whereas this guy, masked out and going just by his shape,
would relate to two people. So that just gives us the terminology we need.

This particular project was a bioterrorism surveillance project early on,
and this is in essence the biosurveillance here in the DC area. And these were the
kinds of fields being requested from emergency rooms and other places.
And you can see that it’s pretty identifying even though there are no
explicit identifiers. We know it’s identifying because of the conversation we
just had about the zip code and date of birth and gender; the three that we just
got through talking about are sitting right there. In the early times, the
unique patient identifier was the patient’s social security number. So that was
kind of another problem.

And the question was how could we actually allow the data to go to an
outside surveillance operation outside of the hospital setting and still be HIPAA
compliant, because at that time there was no exemption for public health. And
so what we do is we see how identifiable the data that they’re
actually asking for is, once we take out the social security number, which is
obvious. And we found that 75 percent of the people coming through that data
set were uniquely identified. Ninety-four percent were either uniquely
identified or ambiguously identified with one other person. So a bin size of
two means that here’s a record, and here are two people by name, Alice and
Joan, and I can’t tell which one that person is, it’s either Alice or Joan. Whereas
uniquely identified, a bin size of one, means, ah, that record is definitely
Alice.

So as you can see, this data would be considered by most people’s standards
highly identifiable or re-identifiable. And how is that possible? Well, the
way that happens is the same old re-identification that
we talked about before. So there’s really nothing new there.

The question, though, is how the heck do we go about fixing it, because our
job isn’t just to point out problems; we need to actually fix it. So one of the
things we found out is that they didn’t actually need the full date of
birth. They could actually aggregate it to month and year of birth without any
loss in the algorithms they were using to determine whether or not there was
some anomaly that day.

And when we did that, we saw that it dropped the identifiability, which looks
pretty good. The pink thing you see down here is where the HIPAA safe harbor
would be if they had used the HIPAA safe harbor. They can’t use it because it’s not
useful to the way their algorithm works. But you can see we’re making
progress. The data’s still useful, but not quite there yet, not quite comparable to the
safe harbor.

So then we said, well, could we generalize date of birth some more? And then
it didn’t matter seemingly how much more we generalized date of birth;
even with five-year ranges, ten-year age ranges, we could not get the line to
go lower. And so when we looked at exactly what the problem was, it turned out
that the strategy was the following: the bioterrorism surveillance data we
saw before was being linked out to hospital discharge data, linking on diagnosis,
gender, visit dates and zip, which is really kind of a shocker because these are the things that
we don’t normally think about as being sensitive. But in combination, they
turned out to be quite identifying.

The other thing that this shows is that it doesn’t matter how much I aggregate
age in the biosurveillance data; it’s not going to have any impact on the
actual fields that are the subject of the linking. And because the hospital
discharge data we were looking at had the month and year of birth of the patient, this
automatically gave me back the month and year of birth of the patient regardless of
what I did in my own data set.

And so, therefore, the identifying data we were really seeing was this
being linked to that other list. So the way we solved the problem, though, is we
then had to break or aggregate, at the field level, one of the fields that was
actually causing the linkage. And it turned out that, thanks to some great help
from the Omni and other people at CDC, we were able to group these diagnosis
codes into syndrome classes, which is currently what’s used. Syndrome
classes collapse the diagnosis code, and then when you try to link it, it
doesn’t link very well. So we were able to squish that line all the way down to
a level that was very comparable to the HIPAA safe harbor. It’s not at zero, but
it’s very comparable in terms of its identifiability.
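
The defense here is generalization at the field that drives the linkage: collapse the exact diagnosis code into a broad syndrome class before release, so a join against hospital discharge data no longer resolves cleanly. A minimal sketch follows, with made-up code groupings rather than the actual CDC syndrome classes.

```python
# Illustrative only: the real syndrome groupings came from CDC; these ICD-9
# prefixes and class names are stand-ins chosen to show the idea.
SYNDROME_CLASSES = {
    "034": "respiratory",        # e.g. strep throat
    "486": "respiratory",        # e.g. pneumonia, organism unspecified
    "008": "gastrointestinal",
    "787": "gastrointestinal",
}

def generalize_diagnosis(icd9_code: str) -> str:
    """Collapse a specific diagnosis code into its broad syndrome class so the
    released field no longer links cleanly to hospital discharge data."""
    return SYNDROME_CLASSES.get(icd9_code[:3], "other")

# A record released as ("respiratory", gender, visit week, ZIP) matches far more
# candidates in any outside data set than one carrying the exact ICD code, which
# is what pushes the re-identification curve down toward the safe harbor line.
```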

So that’s sort of what their data set looks like now, and that’s from
New York. Now why is this useful? Because what that allowed them legally
to do, I mean, now of course they have exemptions, but in the days before they
had exemptions, what it allowed them to do was what was termed selective
revelation. That is, we were able to take their algorithms for how they detect
that an anomaly is happening and render the data sufficiently anonymous; that is, I
could prove to you that no more people are put at risk for re-identification
than would be under the HIPAA safe harbor, and they were able to use that for normal
operation.

Then when something would happen, a trigger would fire, and it would lower the
anonymity of the data so that they could get more detail on those cases that
seemed to be a problem. And if there was still more evidence, they
would use a more refined algorithm. But by this point, public health law took
over, because once public health knew that it was something explicit, they
could demand the explicitly identified data.

So this idea of selective revelation is a very powerful one. It fits very
nicely into our notion, our legal structure already, because it is sort of like
search warrant protection, this idea of a reasonable cause
predicate being satisfied through the technology. And so it allows us not to
have to change the laws, but to be able to use existing frameworks.
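
A minimal sketch of the selective revelation idea, with illustrative tier names and fields rather than the project's actual ones: routine surveillance sees only the most anonymized view, a trigger unlocks more detail, and explicitly identified data is released only under public health authority.

```python
# Sketch of selective revelation: the stronger the evidence, the less anonymized
# the released view. Tier names and field lists are illustrative, not the
# project's actual configuration.
TIERS = {
    "routine":       ["syndrome_class", "zip3", "age_band"],           # provably near safe-harbor risk
    "alert":         ["syndrome_class", "zip5", "month_year_birth"],   # a trigger fired: more detail
    "public_health": ["diagnosis_code", "zip5", "dob", "patient_id"],  # explicit data, under PH law
}

def release_view(records, evidence_level: str):
    """Return only the fields permitted at the current evidence level."""
    fields = TIERS[evidence_level]
    return [{f: r[f] for f in fields} for r in records]
```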

Okay, now let’s pull that closer to the kind of data that we’re
talking about. We’ve been able to do the same kind of two-stage link attacks on
pharmacy claims data and on clinical trials data. So this is an example of
pharmacy claims re-identification. Now I didn’t put up what the fields were,
but let me tell you they don’t include the explicit identification of the
patient. We didn’t use the fact that the doctor’s identification was provided.
Instead, we used again those same ideas of using the relationship between the
diagnoses, and for gender we used an algorithm that computed it based on the
hospital’s reported statistics of where people come from when they come to
that hospital.

This doesn’t work across all medications. But for medications of interest to
certain pharmaceutical companies, we were able to produce the results that we
showed you. This was actually work funded by pharmaceutical companies.

So you can see this doesn’t work at all like the HIPAA safe harbor, even
though the pharmacy claims data is neither explicitly identified nor covered
under HIPAA.

We’ve also been able to show something similar using a different technique:
we’ve been able to re-identify DNA databases.

DR. COHN: I’m sorry, using, say again, the pharmacy claims data is not
covered under HIPAA. Could you clarify that?

DR. SWEENEY: Sure. So if you ask a particular
pharmaceutical company how it is they got pharmacy claims data, it’s not
necessarily because it was part of the insurance claims, is the point that
I’m making, because it could have been data
provided through the pharmacy networks themselves. It could have been data
provided through other organizations that do claims clearinghouse work and so forth.
So that’s –

DR. COHN: It sounds like it’s covered by HIPAA. So I’m just –

DR. SWEENEY: I don’t have the slide, but I’m glad you brought that up; we’ll
put them all together. The other culprit in this problem is hospital discharge
data, which is also not covered under HIPAA, right. So I live in the State of
Pennsylvania. Every time I go to the hospital, a copy of that
information is forwarded to the State. This all happened as part of the early
1990s. People didn’t understand why health care was so expensive, and one of the
things that did happen was an explosion in the collection of hospital discharge
data. So a copy of that claims information goes to the state, and then the
state provides publicly, or somewhat publicly, available versions of that data.
And AHRQ, back when they were called AHCPR, used to provide versions of
that data as well. So collectively, I’m terming that hospital discharge data.

When HIPAA came along, that’s not data that’s covered by HIPAA, because
those agencies are not a part of the insurance structure that HIPAA originally began to
cover. That is, they’re not a part of the medical claims processing. So they’re
not obligated to abide by the HIPAA safe harbor provisions
and so forth, and much of the data that we were getting was not adhering to it
at all, not even voluntarily. So that’s the example that you see here.

MR. ROTHSTEIN: Once it reaches the state, it’s not covered by HIPAA. But
when it’s disclosed by the hospital, the hospital is a HIPAA-covered entity.

DR. SWEENEY: Yes, they’re a HIPAA covered entity. But HIPAA can’t stop the
state law that requires the data to go to the collection agency for the hospital
discharge data. And so that’s why we say it’s not covered under HIPAA, because
the question for us, in the work I do, is where is
the data covered, not whether the person is. Yes, the hospital is covered, but
essentially the data isn’t totally covered, because they have this other place
they’re obliged to provide the data to.

MR. LABKOFF: Can I ask a question? Hi, one question, please. You keep
mentioning this issue about data being re-identified. Can you just clarify,
please, you know, the NSA can re-identify things, too, but it takes massive
computer power and a lot of effort. Are you describing things that are easy to
do, likely to be done, or really hard to do?

DR. SWEENEY: This is pretty easy to do. Well, someone asked about the
level of effort required. To do the linking from 1997 that was talked about
before, that kind of linking, I basically take this data set and I link it on
that data set, and in any Access database, any kind of regular database, that’s a
simple match question: give me all the records where the zip code, date of
birth and gender, or a version of the date of birth, match in both data sets. So
really that’s a one line statement in a database. Now suppose I don’t
have a database, suppose I don’t even know anything about databases. But you
know what? I’ve got a little time. How could I do it?

Well, I load this data set into Excel, and I load that data set into Excel,
and I sort them first, probably by date of birth, and I pull out the ones that
have matching dates of birth, and I just keep working, and then I re-sort
those by zip code, or do a three-way sort if I wanted to. And I can then figure
out which ones match. So it’s a little more laborious, but certainly I can
effectively get the same result. Yes?
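
The "one line statement in a database" is essentially an equi-join on the shared demographics. A minimal sketch in pandas, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical inputs: a "de-identified" release and a public register (e.g. a
# voter list) that both carry date of birth, gender, and 5-digit ZIP code.
deidentified = pd.read_csv("hospital_release.csv")   # dob, gender, zip5, diagnosis, ...
register = pd.read_csv("voter_list.csv")             # dob, gender, zip5, name, address, ...

# The re-identification "one-liner": match records on the shared demographics.
linked = deidentified.merge(register, on=["dob", "gender", "zip5"])

# Any row of `linked` whose demographic combination matches exactly one register
# entry carries a name and address back onto the supposedly de-identified record.
```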

MS. SOLOVEICHIK: I’m Rachel Soloveichik from the Bureau of Economic
Analysis, and it seems like this is data that’s pretty much public knowledge, because
I know when my friends go to the emergency room or something.

DR. SWEENEY: Right. So this sort of gets back to the
question I was just asked a moment ago, because he had two parts to his
question. One was the complexity of what’s involved in doing re-identification.
The other part of his question, though it was sort of glossed over, was how
accessible is this data.

I don’t have it on this slide, but I have a slide that has a continuum. At
one end of the continuum is privately held data. At the other end of the
continuum is publicly held data. By publicly held data, we mean that pretty
much anyone can get it. You may have to sign a form. You may have to pay a nominal
fee. But pretty much anyone who asks for it can get it.

Semi-public data is data where the fee might be substantial, or you have to
be one of a group of people. Like, for example, a pharmacy might sell its data
to a pharmaceutical company; it may not sell its data to some other party, so
they might choose to make these kinds of distinctions. Or some data is just simply
expensive. We call that semi-public data.

Semi-private data and private data are data that we don’t talk about. All of
the examples that we use are almost all publicly available data and, in some
cases, semi-public.

Okay, so we’ve done this with DNA, and we’ve also done this with the
extent to which I can learn health information about you by knowing what
websites you visit. So we have done some work that was published in JAMIA about
disease-predicting and disease-following behaviors. But that’s kind of outside
the scope here; I just want to give you a sense of the kinds of things
that we’ve done.

So I’ve sort of shown you just the tip of the iceberg on the kinds of problems
that we found. We’ve looked at all kinds of data. There is another great
problem, too, in clinical information that has to do with the clinical notes,
and that is that if the notes are de-identified to the letter of the safe harbor,
they still leave lots of information on the table about who you are. So, for
example, this is the kind of information we see in clinical notes: at the age of two,
she was sexually molested; at the age of three, she set fire to her home; at
the age of four, she stabbed her sister with scissors. Now nothing in that had
any explicit identifiers that the safe harbor would require to be removed. But
yet, that gives you a sense of how many of those little details about your life
tend to show up in clinical notes and can be re-identified.

Okay, let me summarize, so I can be quiet and take questions, on what
some of the lessons learned are.

The first lesson I want to point out to you is that ad hoc techniques don’t
work. What we’ve found, just like in the face example I started with, is that the
idea of putting bars over people’s eyes and so forth didn’t work. You have to
be able to prove that your privacy protection is sufficient.

Where do I see that happening a lot in the health data that we’re talking
about? In the improper use of encryption as a protection mechanism. What we see
a lot is people will say, I’m going to use a really strong hash function or a
strong encryption function, and therefore I know the identity is protected unless
you have the key, if it’s encryption; if it’s hashing, there is no
key.

And so the way we break these systems is quite easy. I simply take the hash
function, try every possible social security number, and run it through, and
that gives me an index that maps the unique identifier you’re using back to
the social security number. I try it for all of them.

But how long will it take? Social security numbers are nine digits. How long
does that take on, say, this Dell laptop, this Dell Latitude that’s sitting
right here? Well, on almost that exact machine it took me four seconds. So it
gives you a sense that this dictionary attack on encryption is a good
technology used in a bad way. So we’ve got to do better in that regard about
ad hoc techniques.
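
The dictionary attack she describes can be sketched in a few lines. This is a
minimal illustration only, assuming the released file contains unsalted
SHA-256 hashes of raw nine-digit SSNs; the function names are hypothetical,
and a real attack would use precomputation or native code rather than a pure
Python loop.

    import hashlib

    def build_dictionary(observed_hashes):
        """Try every nine-digit SSN and match it against the observed hashes."""
        targets = set(observed_hashes)
        recovered = {}
        for n in range(10**9):                 # 000000000 .. 999999999
            ssn = f"{n:09d}"
            digest = hashlib.sha256(ssn.encode()).hexdigest()
            if digest in targets:
                recovered[digest] = ssn        # the hash now maps back to the SSN
        return recovered

    # Usage: recovered = build_dictionary(hash_column_from_released_file)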

The second lesson is that one of the things Privacert was very effective at
doing in the biosurveillance space was that, whatever knowledge it had, it
had a more global knowledge than the person sitting there with only their
data source or only their kind of data. So even if you had all of the
hospital discharge data in the country, you are still only talking about your
data, and really these re-identifications are happening because of data not
under your control, data that other people hold and how it intersects with
your data.

And so that’s a serious problem that almost begs us to think in terms of a
more comprehensive approach.

Another problem is that it renders consent and fair information practices
pretty useless. One of the reasons that happens is we can spend a lot of time
with consent and so forth – I’ll come back to fair information practices
in a second. We’ve spent a lot of time focusing on the person who originally
collected the data. Did I trust them? Did I therefore give my consent? No
matter how much factual information you give about all the other places it
might go, the truth is there’s a trust issue about the first person I’m
giving the data to. The problem is that we don’t then scrutinize the next
level they give it to. So this is the physician I scrutinized, but not the
networks and other people who are getting the data.

But once it gets to them, there’s no limit on it, because there’s no scrutiny
beyond that, and that creates yet another problem that we’ve been seeing.

Another problem with consent is all the work that’s come out of the economics
community on whether people can actually make rational decisions about data
with respect to consent. There are tons of studies where people have gone to
a corner and said, I’ll give you a dollar for your social security number, or
I’ll buy you a hamburger if you tell me all your medical data, things like
that, and people will overwhelmingly say yes. Most likely, many of us in this
room have participated in something similar by using a loyalty card at the
supermarket. Yes, I’ll take the discounted rate, and I don’t care if you
track my groceries. It doesn’t matter to me, because in some sense we can’t
fathom what could happen to us down the road. If I now told you some of the
uses that people have tried to put grocery purchases to, you would almost
scratch your head and say, oh, my God, I would never have thought of that,
that someone would try to correlate absenteeism at work with junk food
purchases, or diseases with junk food and diet and things like that. And, of
course, the more famous ones have been the ones about purchasing condoms and
things like that when a person is Catholic.

So the problem with consent is you’re asking an individual who is
ill-equipped to know the long term ramifications of the decision that you’re
actually asking him to make. And even if you write it out, and even if you
sit down and guide them, it’s not even clear that many of the organizations
receiving the data can understand the long term ramifications of the very
policies that they’re advocating for.

This begs for policy, and that’s the role of health policy: when the
individual can’t make that choice, and when the organization under its best
attempts can’t really make that choice, we have to come up with coherent
policy. I’m not saying what that policy is. I’m just dumping it on your
doorstep.

Also, there was not much talk, from what I heard, about fair information
practices. But I think that’s probably because, from what I heard in the
earlier conversation, people recognized they’re just not going to be useful.

But it did beg the question I heard in the earlier presentations, the panel
right before me: okay, fair information practices are not totally going to
help me here, so what might the alternatives be? Is there a notion that after
I opted in, I could then opt out and you’ll remove all my data? Can you tell
me all the places my data went, so that if I later did have harm, I could
actually have a trail to follow, which I don’t have right now?

So let me give you an example. It’s been a debated issue. It appeared in
JAMA, the Journal of the American Medical Association: a Maryland banker took
a list of people who had mortgages at his bank and crossed it with
information he found in a publicly available cancer registry, and then, if he
got matches, he began going after their credit, basically calling in their
loans, calling in their mortgages. So if you were one of these people who had
cancer, all of a sudden you don’t know why it is that the bank is getting all
testy with you and trying to undo your mortgage.

The problem is, I don’t know – it’s been contested whether that was true
or not and so forth. I can’t speak to that specific example. I can absolutely
tell you it certainly is easy to do. I can tell you that I’ve had a lot of
anecdotal conversations with people who seemed like they may be doing things
like that.

But that doesn’t help us here in this conversation. What does help us is to
be able to say, you know, if we don’t allow an audit trail back for patients
to say where the data went, then in fact that person has no course of action.
There’s absolutely no doubt there are fantastic benefits from sharing health
data. I mean, I think the panel before me was just the eureka in terms of,
gee, look at all the things we could do, look at these great things that
could benefit society. But in the United States, people make hiring, firing
and promotion decisions on health information. When the top Fortune 500
companies were surveyed, something more than 30 percent said that they do use
health information to make hiring, firing and promotion decisions.

So the problem is that the downside risk, like the Maryland banker example,
like the Linowes study, is borne by the individual. And it’s great that
society can reap the benefits of my medical information. But at the end of
the day, I still want a job. I still want to be able to live in my house and
so forth. So we have to kind of balance this out.

So the last lesson that I want to point out is that one of the things I don’t
like about my talks is that a lot of people focus on the re-identification
problems and not on the thing that I think is the most important, and that’s
the balancing that the technology can do. With the kind of technology and
solutions that we’ve been able to develop, you can nuance utility and privacy
at the individual level, something that policy in general can’t do really
well with just a blanket statement.

We do have a grant from the National Library of Medicine to do more work in
this area on the kind of data that you’re talking about for these technologies.
But obviously, that’s not on the table right now.

So let me end with that and say thank you. This is my email address if you
have any problems finding the slides on this subject on my website.

MR. REYNOLDS: Wow would be my first comment, and then I’ll open it for
questions from the Committee.

DR. VIGILANTE: Just a very simple minded question here. So am I hearing you
say that basically nothing under our current standard is actually
non-re-identifiable? When we say de-identified, it’s really – well, I guess
I’ve lost my sense of the question of risk here; my internal risk calculus is
a little bit askew, and I’m having a hard time reorienting. So when you say
the HIPAA safe harbor, we’re talking about those 18 fields. Now remind me,
are we obliterating those fields, or are we providing vague, aggregated
versions of them? In other words, are we obliterating date of birth, or are
we providing an aggregated version of that?

DR. SWEENEY: In the case of date of birth, we’re providing age.

DR. VIGILANTE: Right, providing, okay.

DR. SWEENEY: It’s kind of pretty much the nightmare for anyone who wants to
use the data because it’s pretty much get rid of everything that seems useful.

DR. VIGILANTE: Right.

DR. SWEENEY: And –

DR. VIGILANTE: What are we doing for zip? Are we doing –

DR. SWEENEY: Zip three and zip two in really small communities.

DR. VIGILANTE: Zip two, okay.

DR. SWEENEY: Well, primarily zip three for most of the U.S.

DR. VIGILANTE: All right.

DR. SWEENEY: I sort of glossed over that in the slide.

DR. VIGILANTE: So you go to year or age, zip two or three and gender, and you
get 0.04 percent identifiable?

DR. SWEENEY: That’s right.

DR. VIGILANTE: Point 04 –

DR. SWEENEY: Point 04 percent of the U.S. population.

DR. VIGILANTE: Okay.

DR. SWEENEY: It’s not that there are suddenly 20 million different people
identifiable; it’s not that kind of number. But I think what you’re getting
at, and I hear the push back, is that it’s no longer really a black and white
issue, which it never really was. It’s really about drawing a line somewhere,
right? And HIPAA, not knowingly, just by following a policy prescription,
ended up drawing a line right at 0.04. But in fact, you know, we could do
better in other ways.

DR. VIGILANTE: So actually that doesn’t feel that bad.

DR. SWEENEY: 0.04? Unless you’re one of those people.

DR. VIGILANTE: Right, but it just – it’s probably a risk I’d be willing
to take. I mean, I’m just speaking out loud. I mean, —

DR. SWEENEY: No, this is good, this is good. So you wanted to take –

MR. REYNOLDS: Are you having your own chat there?

DR. VIGILANTE: But actually what does this mean to me? How does that feel?

DR. SWEENEY: No, I think that’s cool. So then what happens, though, is I use
the other fields that are still left intact, and I then come up with these
numbers.

DR. VIGILANTE: Like diagnosis.

DR. SWEENEY: Yes. And certainly I don’t want to say you can’t include
diagnosis. So it just shows that that kind of prescriptive policy is not
really the right thing.

DR. VIGILANTE: Right.

DR. SWEENEY: I think that’s the thing, because you might be okay with, you
know, 0.05 percent of something, but, you know, it’s 10 percent for a bin
size of three. So for marketing purposes, that’s a no-brainer. Marketing will
take a bin size of three; you just mail three times as many pamphlets,
because two of them go to the wrong person and you want one to go to the
right person.
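
The bin size she keeps referring to is simply the number of records sharing
one combination of quasi-identifiers. A minimal sketch of how one might
measure it on a released file, with field names that are my own assumption:

    from collections import Counter

    def bin_sizes(records, fields=("age", "zip3", "gender")):
        """Count how many records share each quasi-identifier combination."""
        return Counter(tuple(r[f] for f in fields) for r in records)

    def share_at_risk(records, max_bin=1):
        """Fraction of records whose bin size is at or below max_bin."""
        sizes = bin_sizes(records)
        at_risk = sum(n for n in sizes.values() if n <= max_bin)
        return at_risk / len(records)

    # share_at_risk(rows, max_bin=1) -> uniquely identifiable fraction
    # share_at_risk(rows, max_bin=3) -> fraction falling in bins of three or fewer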

So let’s try to work on that compass a little more. One thing that’s really
alarming is when I go to the EU or to Canada. The privacy commissioners in
Canada have commissioned various people across Canada to try to repeat a lot
of these studies, and the numbers look pretty much like all zeroes, and you
say, well, why, how come? It’s because they have a comprehensive perspective.
They have this idea that this is what personal and health related information
is; if you have it, you’re subject to this rule, and this is the rule. As a
result, it has global coverage, because these problems are happening because
of what’s in the hospital discharge data, or they’re happening because of the
existence of pharmacy claims data, things that are sort of leaking out in
other ways.

And so in Canada, they don’t get to leak out. They’re all covered by the
same group, and then it becomes easier to make useful data more readily
available.

MR. REYNOLDS: So to use that here, like our privacy letter said, everybody
would be a covered entity in that case, maybe –

DR. SWEENEY: That’s right. That’s exactly right.

MR. REYNOLDS: Simon, you had a question.

DR. COHN: Yes, and actually thank you very much. It’s been a very useful sort
of reminder of the environment that we have, and I think, like Kevin, I’m a
little shocked, because you think about it and you realize all these things.
Your government obviously can get data, and these are all things we obviously
don’t think about a lot.

Now, you were here for our earlier conversations when we were talking about
the whole issue of pharmacy data, and I noticed you began to show some data
on all of that, and maybe I’m asking a very simple minded question. But
obviously there’s a whole intent, very appropriately, to begin to do a lot of
evaluation with the provider as the unit. And I was just sort of listening to
you and listening to that, and obviously trying to think in my own mind
whether there are particular privacy concerns, because we’re basically
dealing with relatively small sample sizes, especially when you start getting
certain things happening.

Now, of course, you obviously are also throwing in all of these other things
that we can link. But even without that, I mean, are there from your
perspective, are there sample size issues and cell issues that are just sort of
obvious and no-brainers in all of this?

DR. SWEENEY: Well, I mean, the DNA data and the web logs data used very
different kinds of attack strategies, different kinds of algorithms. The
algorithms that I think are really on the table here are these linkage ones,
because they’re just sort of easy to do, and they’re just sort of sitting out
there.

The problem with the provider data is the question of where the provider ID
then becomes the key. The key to how many things? If I can get the provider’s
ID number and a list of licensed physicians with provider IDs and names, then
I think I can get the provider’s name.

If the provider ID shows up in hospital discharge data, then I can use the
provider ID as a way to link to the general practice and location. So in some
sense, and I don’t want to speak out of school because I don’t know the
answer to those questions, but if you were to ask me, gee, what do you think
the vulnerability is of using provider ID as the main key, right away I can
say, gee, I’ve seen that in pharmacy claims. I’ve seen that in hospital
discharge data, some of it, not all of it. I’ve seen it in quite a few
places. Gee, what did it actually buy me? Isn’t it at the end of the day
going to look like some two stage or three stage link?

I don’t know the answer, but I would worry. Anything that has that kind of
key across data sets can be a problem. The kind of thing that we’re doing
with the NIH grant, and we’ve already come up with some ways but we have to
test them out and we have a couple of test beds in the country, is leaving
the data where the data sits, and being able to answer any query off of the
data while making sure that the result you get doesn’t create a privacy
problem. So that is one way all of the research that was talked about before
could absolutely happen, but we want to do it where you don’t have to have a
copy of the data on your machine to do it. The original data still stays at
the data source, but we provide you with a mechanism to answer queries from
it.

It would be better than the kind of thing you get now, where you can only
pull down certain numbers. There are some of these kinds of things already
existing out there where you can only answer how many people have disease X.
We need something richer than that, something as rich as the kind of things
that get published, so researchers could use data, get data, that way. Again,
that’s speaking out of school. We don’t have those rules yet.
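
A very rough sketch of the leave-the-data-where-it-sits idea she describes:
each data holder answers only aggregate queries locally, and answers below a
disclosure threshold are suppressed before anything leaves the source. The
function names and the threshold of five are illustrative assumptions, not
the design being built under the grant she mentions.

    SUPPRESSION_THRESHOLD = 5   # assumed small-cell rule; real policies vary

    def local_count(records, predicate):
        """Answer an aggregate query at the data source; rows never leave."""
        count = sum(1 for r in records if predicate(r))
        # Suppress small cells so the answer itself cannot single anyone out.
        return count if count >= SUPPRESSION_THRESHOLD else None

    def federated_count(sources, predicate):
        """Combine per-site answers; raw records stay with their holders."""
        answers = [local_count(rows, predicate) for rows in sources]
        if any(a is None for a in answers):
            return None                  # at least one site had to suppress
        return sum(answers)

    # federated_count([site_a_rows, site_b_rows], lambda r: r["dx"] == "X")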

DR. COHN: I have just a follow up, and again maybe I’ll look to my colleague
from the U.K., only because, I mean, thank you for introducing this concept.
I think part of our mandate or request is to sort of try to identify
approaches to mitigate risks, and obviously this is that type of tool.

I guess I’m looking at the U.K. because the last time we heard a
representative from the U.K. talking, they were actually talking about that
sort of an approach. And maybe that’s implicit in what you were describing,
where requests are sent to a specific spot with the answers coming back,
which I think you were commenting on.

MS. JONES: Yes, that’s the expectation certainly with the secondary uses
service. If you’ve got this role-based access, you’re actually only getting
access to the data mart, which is essentially a query engine anyway. So
you’re not pulling the data, although there are opportunities to have a sort
of an extract. But that’s a very clearly defined process to go through, to
say this is the data that you’ve actually requested, this is what you’re
using it for, and this is essentially what you’re getting, and there’s almost
a sort of a contract, really. But in terms of the access, then it’s very much
that kind of model.

DR. SWEENEY: One of our challenges is a little different than the U.K.’s,
because you at least have a centralized data source from which a query can
start. We assume that there is no such centralized data, that in fact the
data over which we want to query may be across the country, maybe across the
region, and I need the same query to link on the same patient and give me the
accurate information I want across different data collections.

And what we like about that system is the fact that I can use explicit
identifiers of the patient, because after all I’m not going to give you their
social security number, but I could use it, and I could use it in a very
accurate way. And in some sense I think we’re going to be able to show that
the results we get are better than with data where privacy protections are
prescribed up front and then you do the research off of that.

MS. JONES: Can I just ask one sort of question. It’s really just –

MR. REYNOLDS: Yes, this will be our last one, and then we’ll break for
lunch.

MS. JONES: It’s about the identifiable bit, the sort of zip code, year and
gender there, because in the U.K. we consider that to be an identifier rather
than a sort of de-identifier. Where we haven’t actually got an NHS number, we
actually use those three items. So that is considered to be identifiable
data. We use it to identify. It would never be put in the context of
non-identifiable.

DR. SWEENEY: Yes. Well, we’re different.

MS. JONES: Okay.

DR. SWEENEY: We’re kind of like the wild, wild west when it comes to data.

MS. JONES: I think the thing that’s really been in my mind is the threat, the
identification of the threat, which I find absolutely fascinating, and these
sort of down the line uses of it. It’s really saying, well, at what point do
you say I have to take that risk, and what don’t you do. And I think with a
national health service, we’re in a position where there’s not as much of a
threat in the whole thing.

MR. REYNOLDS: Thank you very much for a really compelling discussion, and
thanks everyone. We will be back at 1:45 per that clock.

[The meeting adjourned for lunch.]


A F T E R N O O N S E S S I O N

MR. REYNOLDS: Okay, I hope you’ve digested your lunch and what went on this
morning. Okay, Justine’s willing to commit to half of it.

All right, our next group is going to be discussing technical solutions for
consent and other HIE issues, and we’re going to have Jonathan White go first
since he has other pressing matters that he needs to get to. And so with
that, Jonathan, why don’t you go ahead and get started.

Agenda Item: Technical Solutions for Consent and Other
HIE Issues

DR. WHITE: Hi, I’m Jon White. I work at the Agency for Healthcare Research
and Quality, and today on Oprah, Stewardship Entities and the People Who Love
Them.

I have had the pleasure of coming to you before to talk to you about the
concept of health data stewardship and, in particular, an RFI that the Agency
released back in June on the concept of a national health data stewardship
entity, and I’m going to give you a brief update on that and where we are.

MR. REYNOLDS: Can we move your microphone over a little bit closer so that
when you turn, if you’re going to be looking at your screen, we’ll make sure we
hear you. Let’s do that. We’ll put one on both sides of you.

DR. WHITE: So I’ll make you laugh. For my fellow IT folks: in medical school,
you know, literally from the first day, I was the guy that when the slide
projector broke, everybody turned and looked at me and said, Jon, go fix the
slide projector, which is how I ended up where I am today.

I start with a quote from one of the masters of literature, Mark Twain,
“Persons attempting to find a motive in this narrative will be prosecuted.
Persons attempting to find a moral in it will be banished. Persons attempting
to find a plot in it will be shot.” These are the opening words of
Huckleberry Finn, and from even before the release of the RFI, accusations
were hurled that we were intending to establish a national health data
stewardship entity for the whole world. I have tried to make it as clear as I
can from the beginning that we have been involved in conversations that
discuss the concept of folks who aggregate data and how they should be
treating that data, which is the concept of health data stewardship. But I
have no plans to do anything further with this data other than to summarize
it and make it publicly available to richly inform the discussion, because
this is a very rich topic for discussion with very strong considerations on
both sides, as you all have been hearing over the past several months. I have
no ulterior motive other than to hopefully, you know, contribute a richer
understanding of the concept of data stewardship; certainly not a
comprehensive understanding of what opinion and knowledge on the subject are,
but at least a more comprehensive understanding of what’s out there and what
people’s opinions are, which is the reason for doing a request for
information publicly.

So the question that NCVHS posed today was to ask what lessons and
experiences we can provide relative to oversight or data stewardship. There’s
a list of these, but this is the one that appeared most relevant to what we
were talking about. So I’m going to talk about the RFI. I will preface this
by telling you that the summarization process is not yet complete. We
received all the responses by July 27th, so it’s been about a month. We have
received over 100 responses, which I’ll get to in a second, and several of
those were very lengthy, some written by some people in the room. They spent
a lot of time doing that, and to do them justice we’re taking a while to
summarize them.

So in particular for the RFI, the concept of a health data stewardship
entity, at least for me, really first cropped up in the context of the AQA.
Now, I’ve talked to you all about the AQA, so I’m only going to briefly
recap.

The AQA has three workgroups. I was part of the data sharing and aggregation
workgroup, which talked about bringing together all payer data for the
purpose of performance measurement for individual physicians, originally
ambulatory and then expanded to all types of physicians.

In addition, there were two threads that ran through the work of that
workgroup. The first is how do we aggregate data for that purpose, and the
second is shouldn’t we be kind of careful about that, and what are the issues
surrounding that. We first talked about the issue of data ownership, which in
a digital day and age becomes challenging as data becomes instantly
replicable on a vast scale, and from there we got to the concept, which I’ve
also talked to you all about before, of stewardship of data. Stewardship for
me means taking care of something you don’t own. That’s a succinct way to say
it.

There was about two years’ worth of discussion on the concept of data
stewardship at the AQA. Some documents were arrived at that were fairly well
thought through, and then the group felt that it was time to ask for broader
comment on the concepts that were contained in those documents.

And ultimately the group arrived at the conclusion that the best way to do
that would be to issue a request for information released publicly, and the
most public way that the group could think of was through the Federal
Register.

So seeing as federal agencies are the ones who publish in the Federal
Register, AHRQ was nominated to publish this. The group reviewed several
drafts of the request for information, which was released in early June, and
two months were given for a response. On July 27th, it closed.

The topic was health data stewardship and, in particular, it crystallized
around this concept, advanced in the context of the AQA, of an entity that
performed the functions of health data stewardship or advanced the principles
of health data stewardship. That was the example that was given in the RFI,
and much of the supplementary information related to the AQA documents that
laid that out.

So the primary purpose was to gather information to foster broad stakeholder
discussion, which is a function that AHRQ fulfills as an evidence-generating
agency and a convening agency. The supplementary information is from the AQA.
There were 25 topics for discussion. I’m not going to go in depth into them;
I know you all have seen the RFI before. I do want to point out the potential
respondents. We really tried to draw from across the spectrum of stakeholders
in health care. We were hoping that we would get respondents from providers
to payers to government agencies to accreditors to you name it. So we were
hoping to get a wide variety of responses.

So the responses that we got: we got over 100 responses to the RFI. The
majority of them came from private citizens, and the majority of those were
form letters that were sent in. There is a group that saw the RFI, generated
a response to it, and for their members generated a form letter to be sent in
to represent their point of view.

So I would say that the majority of the responses fell under that category,
okay. Now that said, there were a number, probably in the range of dozens, of
in-depth, detailed responses ranging from ten pages up to the 50-page limit.
So there were a number of fairly hefty responses to this RFI.

I’ve provided a list of the types of respondents up here. They include
providers, payers, patients and their advocates, accreditors, industry, state
government, what I euphemistically call the quality enterprise, and I’ll
characterize each of these in a second, and health care information
organizations. Examples of providers and provider organizations that
responded would be the AAFP, the American Medical Association, the American
College of Physicians, the American Academy of Pediatrics, the American
Osteopathic Association, and one hospital association, although not the AHA
or the FAH, the American Hospital Association or the Federation of American
Hospitals.

Payers included Blue Cross and Blue Shield. Patients and their advocates, in
addition to the individuals who responded, would be folks along the lines of
the World Privacy Forum, the Electronic Frontier Foundation, and the
Institute for Health Freedom, as well as a number of consumer and labor
groups including SEIU, the AFL-CIO, organizations like this.

Accreditors: JCAHO submitted a joint response with NCQA and NQF. NCQA and NQF
primarily, but to an extent the Joint Commission as well, are folks that I
characterize as the quality enterprise. You know, Carolyn Clancy and I talk
about this a lot; when we talk about quality in health care, we talk about
this loosely organized quality enterprise. And these include folks like AHQA,
the National Association of –

SPEAKER: [Inaudible]

DR. WHITE: Thank you. The Leapfrog Group responded, AcademyHealth responded,
the West Virginia Medical Institute. So a number of folks from the quality
enterprise. State government was California. There were two entities from
California that responded: the California Health and Human Services Agency
Office of HIPAA Implementation as well as the California Insurance
Commissioner. I thought that was interesting. And then finally, health care
information organizations: Connecting for Health offered a very substantive
response, AHIMA offered a very substantive response, and so that was
outstanding.

I’ve talked about the types of respondents; I briefly want to talk about the
non-respondents. I was hoping that some of my federal friends would respond,
and they didn’t. I can probably understand why they didn’t, but I was hoping
that we would get some other federal agencies, without naming names, to
respond, and they didn’t, and that’s fine. You know, again, the purpose of
this was not to be completely comprehensive, but to try to get a more
comprehensive picture of what’s out there.

I said that the responses are not yet finished. Let me tell you what the next
steps are, and then I’ll try to give you a general sense of what I’m hearing
as I read through these, because I’m not through them all yet. I do have a
day job still, although maybe not long after this is done.

So eventually, by October, the responses will each be individually posted on,
I think, the AHRQ website. Unless I have a better solution, we’re going to
post them on the AHRQ website, which is a federal site. We’ll remove
identifying information like emails and stuff like that, but we’ll keep
people’s names, and I want them to know that their response was heard.

And also we’re going to put together what I’m calling a qualitative summary,
okay. We’re doing this through our national resource center. And when I say
qualitative summary, this is what I mean. I don’t want the agency to interpret
the responses, okay. I don’t want to say, well, based on this we think that
one, two, three, four should happen, okay. Again, the purpose is not for us to
draw conclusions and act on them, but to more richly inform the discussion.
That’s the point of this, and we’re serving as a science partner sort of to our
colleagues by doing this.

When I say a qualitative summary, I mean that this is not a democracy. This
is not each one response counts for one vote. You know, one response might
cover organizations that represent several thousand people. How do you know how
to weigh what against what. So instead, what we’re going to try to do is we’re
going to try to represent the range of responses, okay, to given topics for
discussion in a summary way, but nonetheless represent them so that all of the
ideas that are contained therein are well represented. And you can go to one
place. You don’t have to read through each one of these documents to be able to
get to it.

It’s quite an effort. It’s challenging to be fair, balanced, inclusive
without being exhaustive. But that’s kind of what’s laid before us. So that’s
fine.

We are planning on having these done and posted by October of 2007, so about
two months hence, and we’re going to formally present them to the AQA,
because that was, again, kind of the nidus for all this, and then wherever
else is necessary.

So that’s the formal part of the presentation. Let me jump and try to offer
you some general thoughts that I’ve been hearing back from the RFI, and then
hopefully we can have a good rich discussion about this.

I would say that fairly universally, nobody wants one big database in the
sky, okay. Many of the folks who read the RFI read the materials to indicate
that AHRQ was thinking about establishing a database of all health data, and,
you know, although that wasn’t the intent, it is valuable to understand that
there is a very visceral and strong reaction from a number of different
folks, and for a number of different reasons, that that should not be the
case. I think that was pretty clear.

There was a strong indication of the value of many of the secondary uses of
health data that you all heard about, okay. I mean, I don’t have to tell you.
You’ve been here, and you’ve been listening to it in a very comprehensive way.

So there were many respondents who saw great value and represented different
ways that, you know, data that’s generated for one purpose could be used for
secondary purposes that have benefit to society or individuals within society.
So that was clearly called out. Also clearly called out were the various ways
in which abuse could take place of that type of data, and, again, these are
things that you all have heard about. I’m not going to belabor the point.

What was interestingly clear to me is that, from the folks who didn’t just
say no, there was the clear thought that if you’re going to go about this
process, you must be extraordinarily careful, you must be extraordinarily
thoughtful in how such a venture would be undertaken, and you must be
impeccably transparent in the processes of how decisions get made, why they
get made, by whom they are made, and for whom they are made, okay.

And I think it would be my personal observation to you that if anybody
broached the subject of health data stewardship in a national way and that it
was agreed upon that there was value in doing that, that such a group, entity
whatever would have to have what I call executive sponsorship which really
means support from kind of the gamut of health care stakeholders in this
country, okay, and that they would need a degree of insulation.

Folks who grapple with these subjects are grappling with extraordinarily
powerful, financial, moral, scientific concepts and issues, and political
issues, too. And really, without a degree of insulation for those folks, they
would be torn apart by the gravitational forces, okay. They just would be. You
know, it’s one of those things where you can absolutely see nothing happening
because of the weightiness on all sides of the issues.

When I say insulation from some of those forces, I don’t mean being
non-responsive to those forces. Those forces represent different points of view
which are very legitimate, okay, on both sides, all sides really. I keep saying
one side versus the other; this isn’t really one side versus the other. But
there are really legitimate issues that get brought up by people on all sides
that should be addressed and should be addressed thoughtfully. So that is a
really brief and concise summary. I would offer the final thought that I’m
really grateful to all the folks that have taken what is clearly a tremendous
amount of time and thought and effort and put their hearts into some of these
responses. I’m really impressed, and I’m very grateful to be working on it. So
with that, I would love to talk with you about it.

MR. REYNOLDS: We’re going to go ahead and ask some questions of Jon since he
needs to go. So, Simon, you want to ask a clarifying question, and then I’ll
take other ones.

DR. COHN: Well, Jon, thank you very much, and it was good seeing you here.
It certainly sounds like an interesting set of pieces which I’m sure will be
de-identified by the time we see them on the web. I know that was sort of my
comment.

But, you know, as I thought about it, and once again I’m remembering back to
the RFI, I remember there being a goodly number of very significant
principles that you were asking for input on. And then I guess there was the
question of, well, what does this thing look like. And, of course, I would
observe that it’s one thing to espouse or somehow have an expectation that
organizations will adopt principles, which is one form of data stewardship.
It’s a whole other thing to create another entity that somehow, even without
a single database, would be adjudicating something or other, and I’m not sure
what that would be adjudicating.

But you really didn’t comment a whole lot about the principles and whether
there was widespread support for them, at least the principles that you were
espousing, or were they just so obvious and so nice that it really was not
controversial?

DR. WHITE: Great point. So if you look at the supplementary information in
the RFI, there’s a proposed mission, proposed precepts, a proposed scope of
work, and then proposed characteristics of the hypothetical stewardship
entity. And it kind of reads like the scout law: trustworthy, faithful, you
know, things like objective, independent, knowledgeable, responsive. Most
folks didn’t respond to that. To the extent that they did, they said, yeah, I
mean, all those make sense. All those are desirable characteristics.

Most of them got straight to the issue of whether or not we should even be
talking about stewardship, and whether or not we should even be grappling
with that issue. You know, some folks said yes, we should absolutely be going
there, and other folks said no, we shouldn’t be going there, and both of them
would say here’s why, and some folks said be really careful if you’re going
there.

But as far as the principles and the proposed characteristics, I would say
that folks largely did not argue with those. The precepts would be fairly
much the same: to be objective, to weigh carefully before bringing about new
changes. The scope of work was robustly discussed, you know. I would say that
there is, again, a variety of opinion on that. Should these folks be involved
in aggregation, should they not be involved in aggregation. There’s some
discussion of what methodologies should be used, but not terribly extensive.
Uses of data were widely discussed. But yeah, I would say that the principles
were largely not questioned.

MR. REYNOLDS: That was my question, too.

DR. WHITE: It’s hard to argue with the principles as they’re laid out there.
You know, who doesn’t want some more – yeah, it was more the issue of should
we even be doing this, should we even be going there. And, again, to paint
with a very broad brush, yes, no, and be really careful were the responses.

MR. REYNOLDS: And again to ask it differently, then, going to centralized
databases or going to data stewardship?

DR. WHITE: If one is to go to any sort of database, whether centralized or
regionalized or localized, then issues of stewardship should absolutely be
applied.

MR. REYNOLDS: Thank you.

DR. WHITE: So there are principles of stewardship that should absolutely be
applied.

MR. REYNOLDS: Well, Jon, I thank you. We appreciate it. If you could speed
your date up, that would be great. If you’d go ahead and write it up for us
tomorrow, we’d appreciate it. Okay, with that, our next speaker is Assaf Halevy
from dbMotion. And, Jon, thanks.

MR. HALEVY: Thank you. So as I said, I was relocated to the U.S. two years
ago, and actually three weeks ago I moved from Atlanta to Pittsburgh
following the business of dbMotion. dbMotion is doing virtual patient records
with a pure focus on interoperability and sharing medical data across either
a harmonized, unified single enterprise, such as UPMC in Pittsburgh, which we
are implementing right now, or a very distributed, independent environment
such as the Bronx RHIO, which is using dbMotion to address its
interoperability challenges as well.

Feel free to interrupt and stop me with questions any time if you prefer
rather than at the end.

MR. REYNOLDS: I’d rather wait til the end, if we could.

MR. HALEVY: I’ll start with the first slide, which is very simple, as you can
see. It’s just three colors, right: purple, green and red, that’s it. So in
the next three hours, we’ll go over it. Actually, this is only the context
for what I would like to share with you. I think it’s important, just in a
few words, to understand what it is that we’re doing in order to create and
enable interoperability in a secure way, and that will allow me to use this
context while I move forward and share with you what I think I can.

Just two more words before doing so. What I think I bring to the table that
may be useful for the Committee is mainly three things. Personally, I’m
computer science educated, and I spent the last ten years purely in shared
medical data in a very practical, if incomplete, world. We are running in
production in Clalit Health Services in Israel, which is a very large
organization; I want to say that altogether we have the experience of sharing
medical records for about 70 percent of the population at a state level.

So what I bring to the table is maybe the practicality of really doing it,
along with the scar tissue from it. Conceptually, the gray area over here is
the operational environment of the hospital, a group of hospitals, any
combination of inpatient, outpatient, ambulatory environments. What we do is
we collect information from different sources, across different standards
and so on, into a clinical data repository which is separated from the
operational environment yet stays within the operational environment, the
ownership, the control of the existing organization. And we have those layers
over here, which create some intelligence around the ability to query records
across those different standards, different systems and so on.

So if you will, EpicCare, Cerner, Meditech, Misys, Quest, all those guys over
here with their own world, and right here is where we start to create some
common language. And right here is where we add some intelligence. Not only
do we collect allergies from three systems, but we do it in a semantic way.
We do it in a way that at the end of the day the clinician can really look at
it in a single, standard way, rather than just seeing it in a portal merging
records on the fly.

The last thing about this slide is that conceptually we create a virtual
patient record, which is actually assembled on the fly from those dbMotion
nodes or sites in which we’re doing the colors and layers and stuff.

So just keep that in mind. The virtual patient record is what we eventually
deliver back to the consumer, and the consumer can be a web viewer, it can
be, of course, the clinician at the point of care, with the focus on the
point of care, and, as you can see, it can also be research or DSS and so on.
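
To make the assembled-on-the-fly idea concrete, here is a minimal sketch, not
dbMotion’s actual design: several source systems return allergy entries in
their own vocabularies, a mapping step normalizes them to a common term, and
duplicates collapse into one entry in the virtual record. The
source.allergies() interface and the term map are assumptions for
illustration.

    # Hypothetical local-term -> common-term map; real systems use standard vocabularies.
    ALLERGY_MAP = {"PCN": "penicillin", "Penicillin G": "penicillin",
                   "ASA": "aspirin", "Aspirin": "aspirin"}

    def virtual_allergy_list(patient_id, source_systems):
        """Query each source at request time and merge semantically equal entries."""
        merged = {}
        for source in source_systems:
            for entry in source.allergies(patient_id):           # assumed interface
                term = ALLERGY_MAP.get(entry["term"], entry["term"].lower())
                merged.setdefault(term, []).append(source.name)  # keep provenance
        return [{"allergy": t, "reported_by": sites} for t, sites in merged.items()]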

In this conversation or presentation, I’ll focus on this blue box over here,
which is actually an implementation and a design of a security framework
that, I want to say, already addresses quite a lot of the challenges around
patient safety, privacy, confidentiality, rules, profiles, permissions,
authorization and authentication in a central or a distributed
interoperability environment.

I wanted to share with you just a few anecdotal points that I came across
when I communicated over the years with organizations. Real stuff that
happened for real. An authorized physician is looking at a VIP’s medical
record, in this case simply because it’s interesting, of course. I don’t want
to say a quarterback or anything like that of any football team, but he was
fired because there was no justification for looking at this medical record
aside from curiosity, and the fact that, you know, this person was authorized
in terms of using a name, password and so on. So it’s not in effect breaching
any authentication process; it’s just abusing it, not for the good or right
reasons.

Printed medical notes were found in a garbage can; again, a lawsuit was
filed, of course, and the consequences are, over the years, developing back
and forth over who is to blame and whose responsibility it is and who is the
owner. Medical errors. I see personally that medical errors are sometimes
related to access and ownership and control of medical data in the eyes of
the patient. At the end of the day, my potassium is not 4.1, it’s 1.4. And if
somebody messed with my record, or if I changed it, or if somebody sent it to
a central database and somewhere in some system it was tweaked and changed,
whatever, then somebody needs to be responsible for the medical errors that
derive from that, not to mention the life-threatening situations sometimes,
and potassium is, I think, a great example of that.

Advance directives. What if I’m in the ED and I’m unconscious? What can I do,
what do I allow you to do, what don’t I allow you to do, and do the systems
today and the solutions today and the approach and policy of the provider
organizations even allow the ability to enforce those kinds of things, which
we all know that we need to at the end of the day?

OB/GYN, a very sad story that I came across. There was an abortion of a
teenager, which was handled, of course, confidentially, and against all odds
the family practitioner eventually shared it with a relative. And within that
society, for whatever reason, she lost her life, because the family decided
to take the law into their own hands and decided it was a shame to the
family, and there you see maybe the extreme, the top extreme result of a
breach of confidentiality translated into really sad consequences.

So what are the basic safeguards, and actually maybe the red parts are not
basic but, I want to say, more advanced safeguards that we want to have and
that we already have. That’s aside from authentication and authorization,
which are a given; people and colleagues who talked before me here already
talked about roles and profiles and so on. But I want to share with you maybe
a different level of the same challenge, because at the end of the day, in
reality, the devil is in the details.

So it’s not just about the roles that we want to prevent or allow or manage.
It’s not only about saying your doctor can do such and such, and if you’re
treating a patient, you can or cannot look at their psychiatric information
and so on. What about content? What is the definition of I as a patient do
not let you or allow you or permit you to look at my psychiatric information?
Is it the fact that I’m admitted at a hospital which is a psychiatric
institution, or is it the fact that I have some diagnosis in my file
somewhere that relates to or hints that I am? Or is it a drug that somebody
prescribed to me or dispensed somewhere that, again, if you see that drug by
itself, you can already tell that I have some psychiatric background? And the
same for HIV and others.

So my point is, it’s not always about let’s define the policies and the
permissions of users and patients, so to speak, and we’re good to go. I think
the challenge is more than that if we really want to close the gaps and cover
all the bases and make it happen in a way that we really feel we are
providing the right level of security and privacy.

And, of course, the opt in option. Again, from my perspective, and opt in was
discussed previously in the morning session, opt in is something that
eventually needs to be translated to different levels within my clinical
domains, so I will be able to, or should be able to, allow or prevent access
to different parts of my medical record, maybe at the granular level of a lab
result, or the granular level of a med that I’m on or not, or at the level of
my specific physician or my currently treating care provider, or in emergency
situations, where there is maybe a different set of permissions I would like
to enforce, and so on.
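
As a sketch of what opt in at this granular level could look like in data
terms, entirely my own illustration rather than a dbMotion structure, a
consent record can carry per-category grants, per-item blocks, an optional
provider list, and an emergency minimum:

    from dataclasses import dataclass, field

    @dataclass
    class Consent:
        patient_id: str
        allowed_categories: set = field(default_factory=set)  # e.g. {"labs", "meds"}
        blocked_items: set = field(default_factory=set)        # e.g. {"lab:HIV"}
        allowed_providers: set = field(default_factory=set)    # empty = any treating provider
        emergency_minimum: set = field(default_factory=lambda: {"allergies"})

        def permits(self, provider_id, category, item, emergency=False):
            """Decide one access request against this patient's stated choices."""
            if emergency:
                return category in self.emergency_minimum
            if self.allowed_providers and provider_id not in self.allowed_providers:
                return False
            return category in self.allowed_categories and item not in self.blocked_items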

So it’s not only about the way we manage the population, but more about the
consequences, the context, the content and the semantic behavior of where
data is going and why. I’ll say a few more words about it later on.

The rest is pretty much obvious. I’ll skip those, except maybe the last one,
which is distributed policy management, which is a challenge by itself. What
do you do if you have a central database? Maybe then it’s central policy
management. But when we look at, for example, the HHS initiatives now, with
regional initiatives and local initiatives, at some point central management
is no longer practical. For example, even at UPMC right now, where we are
responsible for the overall operations – 19 hospitals, 40,000 employees
– who is going to manage this nurse who is moving now from Department A
to Department B within the same hospital, or a physician who is working at
Montefiore and then working at Presby two days later? Who is going to manage
this policy? Is it a central thing, holistically managed by the enterprise
altogether for those thousands of people? Is it realistic and practical to
follow all the changes and so on? Or maybe do we want to have some central
capability and keep some distributed local management with some super users
who will help us to enforce everything altogether, and somehow not become so
clumsy and blocked that eventually, as was discussed in the morning, the
quality at the end of the day will not be good enough to still use the data;
and, on the other hand, so that the management and reality will not kind of
prevent us from doing the things the way we want and need to do them.

One more thing I want to share with you, as one of the ideas to address some
of this, is the user principal object. In our world, in our model, our
architecture, we create a user principal object for each and every provider
that is right now consuming data about a patient. This user principal object
has a lifetime, and it loads and manages all the permissions, authorities and
so on of the current patient-provider session.

Whenever this UPO is terminated, destroyed, no longer valid and so on, the
whole access is prevented or denied, and everything is tracked, audited and
so on. This is another mechanism. If you will, this is like a digital token
that virtually follows the data of the patient to the next interaction of
whoever consumes it, and it’s almost like passport control. Whenever there’s
an entry point to the next consumer, there’s a passport control station
validating that the UPO matches and is in sync with the current permissions,
authorizations, policies, rules and roles of the current consumer.
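
A minimal sketch of the user principal object idea as described, with names
and fields that are illustrative rather than dbMotion’s implementation: a
short-lived object binds one provider, one patient and their permissions, and
every downstream consumer revalidates it, like passport control.

    import time, uuid

    class UserPrincipalObject:
        """Short-lived token for one patient-provider session."""
        def __init__(self, provider_id, patient_id, permissions, ttl_seconds=900):
            self.token = uuid.uuid4().hex
            self.provider_id = provider_id
            self.patient_id = patient_id
            self.permissions = set(permissions)   # e.g. {"labs:read", "meds:read"}
            self.expires_at = time.time() + ttl_seconds
            self.revoked = False

        def is_valid(self):
            return not self.revoked and time.time() < self.expires_at

    def passport_control(upo, patient_id, needed_permission):
        """Checkpoint run by every consumer before it releases any data."""
        if not upo.is_valid() or upo.patient_id != patient_id:
            return False
        return needed_permission in upo.permissions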

So what is role-based access control? You can see here some examples. I want
to say those are pretty trivial – the regular, typical different levels
of roles of different providers and organizations – and eventually you
practically have a system that lets you manage those levels of rules or roles
you would like to have.

Maybe more interesting is rule-based access control, and I kind of alluded to
that earlier with the content and semantic perspective. What about a rule
that a physician is not allowed to see a patient who is not currently
admitted at this specific unit? What about a rule that nurses are not allowed
to query the network, or shared medical data, whatever you want to call it,
after working hours? It’s the same nurse, the same role, the same policy; it
just happens to be 5:30 p.m., and for some reason the organization will
enforce different policies at different hours on different users.
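
The rule-based examples he gives, the same nurse blocked at 5:30 p.m. or a
physician blocked when the patient is not on their unit, can be expressed as
predicates evaluated on top of the role check. A minimal sketch under assumed
attribute names and an assumed 7 a.m. to 5 p.m. policy:

    from datetime import datetime

    def within_working_hours(ctx):
        return 7 <= ctx["now"].hour < 17           # assumed working-hours policy

    def treating_same_unit(ctx):
        return ctx["patient_unit"] == ctx["provider_unit"]

    # role -> rules that must all pass in addition to having the role itself
    ACCESS_RULES = {
        "nurse": [within_working_hours],
        "physician": [treating_same_unit],
    }

    def access_allowed(role, ctx):
        rules = ACCESS_RULES.get(role)
        if rules is None:
            return False                           # unknown role: deny by default
        return all(rule(ctx) for rule in rules)

    # access_allowed("nurse", {"now": datetime(2007, 8, 23, 17, 30)}) -> False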

So, again, this is where the management comes into place. From now on, there
are no more bullets, no more text, so that’s it. Again, I’m very practical in
my approach, and at exactly the same time I’m saying it’s probably not
complete and we’re not covering everything. But I think it’s very mature. You
can see here a kind of print screen of roles and rules management, and you
see here, for example, that for each role we can really get down to defining
different levels of granularity of observations and encounters and so on, and
be able to tweak and change the level of definition based on that, as a
policy of the organization.

Later on, we can add to that. In this example, a patient’s insurance card,
the health insurance card, is being used also as a consent card. That is a
smart card which is swiped, and if the patient is not swiping it or giving it
to the provider to swipe, then you don’t have access to my file.

However, in emergency situations, I do allow you some default level of access
to my record, so in the event I am unconscious or whatever, you get to see my
allergies from everywhere and can still make some useful decisions.

What you see here is that the system tracks and logs all activities within
the system, not only about the users or the patients, but also about the way
information is consumed. So each and every time, we can tell, as you can see
here for a web application that is consuming data, we can have it log
activities: which lab results were viewed, which system was called, what was
available, what was delivered back and so on. And exactly at the same time,
we can do the same for users and have the ability to monitor the activity of
users consuming data, even to the level of did they print something, or did
they look at the screen and for how long, did they log in, did they log out,
did they leave it unattended and not log out, and so on.
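
A minimal sketch of the kind of consumption log he describes, with
hypothetical field names: one structured event per action, recording who
viewed what, for which patient, from which back-end system.

    import json, time

    def log_access(audit_file, user_id, patient_id, action, items, source_system):
        """Append one structured audit event per data-consumption action."""
        event = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "user": user_id,
            "patient": patient_id,
            "action": action,         # "view", "print", "login", "logout", ...
            "items": items,           # e.g. ["lab:potassium", "med:list"]
            "source": source_system,  # which system actually supplied the data
        }
        with open(audit_file, "a") as f:
            f.write(json.dumps(event) + "\n")

    # The same log can later feed both security review and workflow analysis,
    # for example how long clinicians spend on labs versus meds.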

Conceptually, we can generate reports out of it, and you can see a lot of
tracking and auditing activity of who is looking at my record as a patient,
or who is looking at my patient’s record as a physician, and this is just an
example. So you can slice and search and filter it at the different levels
that you can see here. In production, our customers are learning a lot and
are actually using this also for quality management, not only for security
management.

For example, they can learn how much time physicians are spending looking at
medical records rather than treating the patient themselves, and whether that
is good or bad, too little or too much, and so on. Are they looking at meds
more than they’re looking at labs, and so on. So we can learn about the
trends in the workflow of physicians in practice, rather than the way that
EMRs are dictating it, if you will.

When you combine everything I’ve said so far, I think the way to go is to
have the ability to retroactively generate the picture at the point in time
you want to go back to, in order to really enforce and make sure that you
really monitor everything together.

So we have this virtual patient object that we generated for a consumer, on
one hand, so we can go back to the log and see that. But that’s not good
enough, because eventually what was presented on screen is what the
physician’s eyes saw and what the decisions were based on at the end of the
day.

But still that’s not good enough, because what was the policy at that time?
He was a doctor at that time, but half a year later he was a senior doctor.
And if we’re looking at malpractice events from half a year ago, we really
need to be able to combine all three things and be able to generate exactly
the snapshot of data and screen at that point in time. That will allow us to
say, half a year ago that patient was such and such, your set of permissions
and authorizations was such and such, you looked at A, B and C, and those
were the actions. The whole picture together will enable us to regenerate the
data exactly as it was at that point in time, even though three days later
maybe somebody changed the lab result from 4.1 to 1.4 because it was a
mistake. So this is another flavor of using it, in this case for malpractice
and legal reasons rather than patient privacy per se.
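
The retrospective snapshot he describes needs three inputs that are versioned
by time: the data as it stood, the policy the user held, and what was
rendered on screen. A minimal sketch with invented record layouts, each row
carrying an id and a valid_from timestamp:

    def as_of(versioned_rows, when):
        """Pick, per record id, the latest version valid on or before 'when'."""
        latest = {}
        for row in versioned_rows:
            if row["valid_from"] <= when:
                current = latest.get(row["id"])
                if current is None or row["valid_from"] > current["valid_from"]:
                    latest[row["id"]] = row
        return latest

    def reconstruct(event_time, data_versions, policy_versions, render_log):
        """Re-create what the clinician saw and was allowed to see at event_time."""
        return {
            "data_snapshot": as_of(data_versions, event_time),      # e.g. potassium 4.1, not 1.4
            "policy_snapshot": as_of(policy_versions, event_time),  # role and permissions then
            "screens_viewed": [e for e in render_log if e["time"] <= event_time],
        }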

And, of course, the ability to do a lot of research and analysis reports on
the data that we collect and track and audit and, of course, learn from that
and get trends from that and so on.

If I focus more on the eyes of the patient, as you can see, then the patient
is very concerned about who is looking at my data. And, again, the reality is
that my data is not in a single place. It’s with a lot of different
affiliated and non-affiliated physicians, it’s in different EMRs, it’s in
physician offices, it’s all over.

So who is looking at my data? The approach is to create a PLV, a patient log
view, and actually slice or carve out some parts of what I showed you earlier
and create a different flavor of that, one that speaks to the patient
himself.

Conceptually, the patient will be able to look at the tracking log view and,
for example, have an indication that those are the things that were viewed by
Dr. van den Reigen(?) in this example. So I can tell that Dr. van den
Reigen(?) looked at my virtual patient objects, and those are the parts of
the virtual patient record that he looked at. And I can, of course, drill
down into that and see more of it.

So security rules that we would like to apply from the patient perspective, and you can see some examples of that, are things like the patient/physician relationship: are you treating me, is there a good reason for you to look at my data right now, or next year, or whenever in the future based on some event? Am I currently admitted, and if yes, I want to do something. I want to define a breaking-the-glass kind of emergency access definition in case I'm in an ED, and so on. You can see my data only if you belong to such and such an institution. There are different levels of rules, and we would like the opt-in process eventually to get to this level and allow those kinds of definitions to be enforced.

Treatment relations, as an example, can also be analyzed at different levels if we really want to be practical. "I'm currently taking care of the patient" is something that you can define and eventually monitor as an event, whether it happened or not. And you can see one more example here: only my physician can see my data, and the definition of my physician, of course, can be something that I pick out of a list or group of physicians, and so on.
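A minimal sketch of how patient-defined rules of this kind – a "my physician" list and a treating-relationship check – might be expressed and evaluated; the rule names and request fields are hypothetical.

```python
def my_physician_rule(allowed_physicians):
    """Only physicians the patient has picked out of a list may see the data."""
    return lambda request: request["user_id"] in allowed_physicians

def treating_relationship_rule(request):
    """Allow access only while the requester is actively treating the patient."""
    return request.get("is_treating", False)

def evaluate(rules, request):
    """All of the patient's rules must pass for access to be granted."""
    return all(rule(request) for rule in rules)

rules = [my_physician_rule({"dr-vandenreigen"}), treating_relationship_rule]
print(evaluate(rules, {"user_id": "dr-vandenreigen", "is_treating": True}))   # True
print(evaluate(rules, {"user_id": "dr-someone-else", "is_treating": True}))   # False
```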

The translation of those rules into practice, then, as you can see here: we can define an opt-in mode or an opt-out mode, and we can let the patient say, I agree that my personal health information will be shared in this organization or these organizations according to such and such, and that will be almost a kind of all-or-nothing definition.

But as I said, we can support the ability to start having some different
flavors and different levels of permissions into that and be very practical
about it.

At the end of the day, the result of this whole mechanism should be to look, on one hand, at the policies from the organization's side and the administration of who is doing what, at what policies are enforced by the organization or organizations, and at the roles and profiles and activities within the organization.

On the other hand, it should look at what we call the patient access service, which actually creates patient access records, and ask what the patient is saying about what can and cannot be shared. The groups of rules from both sides eventually need to be merged and compiled into a unified set of permissions. And if you have the right framework in place, then you can generate those and enforce them almost in real time on whatever is happening right now.
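One simple reading of that merge – the organization's policy and the patient's access record are both consulted, and the most restrictive answer wins – could look like the sketch below. The section names and the wildcard convention are assumptions made for illustration.

```python
ORG_POLICY = {
    # data section -> roles the organization allows
    "labs": {"physician", "nurse"},
    "meds": {"physician"},
}

PATIENT_ACCESS_RECORD = {
    # data section -> user ids the patient has agreed may see it ("*" = anyone the org allows)
    "labs": {"*"},
    "meds": {"dr-smith"},
}

def permitted(section, user_id, role):
    """Unified decision: both the organization's policy and the patient's record must allow it."""
    org_ok = role in ORG_POLICY.get(section, set())
    patient = PATIENT_ACCESS_RECORD.get(section, set())
    patient_ok = "*" in patient or user_id in patient
    return org_ok and patient_ok

print(permitted("meds", "dr-smith", "physician"))   # True: both sides agree
print(permitted("meds", "dr-jones", "physician"))   # False: the patient did not agree
```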

So this is what I wanted to share with you; it was pretty short, and I'll be happy to answer any questions. The last thing I want to say is that what I've shared is, as I said, based on a lot of discussions and events over the years with our customers. One of the things that they had, and I shared this with Margret over lunch, is a committee – not, I want to say, of the same order of magnitude in terms of its goals, but it was their responsibility to come back and define what it is that we're going to do about patient confidentiality and privacy and so on. And there were a lot of different opinions in the committee well after we went live with an implementation.

And I think that this is a very important point, because the solution that I think we need to recommend for the field must be flexible enough – we must require the field to be flexible enough – to support different levels of adoption, so to speak, of those capabilities or concepts. Otherwise, we'll never be able to cover all the different opinions, and we'll never have a single opinion that everybody will agree on, because there are different challenges, and probably that's the way it's going to be.

MR. REYNOLDS: Okay, thank you. We’ll hold our comments til after Richard
Dick speaks, please. He’s from You Take Control.

DR. DICK: Thanks again. I appreciate the opportunity to be here. I appreciate the invitation. You Take Control is something that I've been working on for several years now. The concept is a very simple one, but we believe a very powerful one, and it is aimed at empowering the individual to literally control who has access to their data, how it may be used, when it may be used and so forth, through what we call an independent consent management platform.

I trust that you’ve gotten some of the slides that we sent. I’ll very
quickly go through those, and then I want to give some pragmatic examples of
how it can actually be utilized and empower the individual in some significant
ways.

So the idea basically then is that certainly this statement from Forrester,
we believe, is coming true more so every single day, that fueling political
battles and putting once routine business practices under the microscope is
certainly happening.

Just a few key assumptions. The individual is the rightful owner of the
data. The enterprise that may be holding the data has a stewardship for it and
so forth. I’ll not go into each of those, but I do want to highlight this one,
and I’m sorry it’s cutting off a little bit on the left side there.

Covered entities and state and even local regs come into play. I can just tell you that when HIPAA hit – and in fact there are some very interesting studies that have been published in the last couple of years, about two years ago in the Annals, basically pointing out how HIPAA has shut down a lot of studies and research. And because of some of those issues, of course, the IOM has focused on them and has asked me to come and testify in October about some of those issues as well.

But there are literally thousands of these unique consent forms spread
across many thousands of enterprises, and that has proven to be and will, we
believe, prove to be a major brick wall for a lot of enterprises as we engage
in all of this fluidity, if you will, of health information being able to flow
between institutions.

And I think it would maybe surprise some of the people even here on the panel how many requests there are from external sources, from all kinds of unexpected and obscure places like the U.S. Department of Transportation, that ask for health records. Let me give just one case in point from my work with the HIM Department at Brigham and Women's Hospital.

They have on average about 4,000 requests per month coming in from the
outside world – not doctor to doctor, not hospital to hospital, but other
external organizations asking for release of information, PHI, and those have
to be handled and addressed. So the numbers of those are not inconsequential.

Some key underpinning elements of independent consent management are that
the solution must never be perceived as the fox in charge of the hen house.
That is why it must be independent, and I’ll get to that in more detail here.

We really can deal, on a case-by-case basis, with customized releases of data down to the data element level – that's really numbers two and three here. Consent forms must be consistently maintained and updated, as this is a very dynamic world. State statutes and so forth come into play, and some of those, as you well know, are far more stringent than the federal HIPAA statute.

As you go to the U.K. and other parts of Europe and other parts of the
world, some of their regulations are far more stringent than anything we have
here in the U.S. The ability of the consent management platform to help
populate PHRs is one that I’ll be showing you as well.

Therefore, independent consent management makes the old paradigm of opt in
and opt out, we believe, really completely irrelevant and unnecessary because
we can get down to the individual data elements and complete audit trail
provided for all transactions.

It's important that the data be used only as the individual directs, and the independent consent management platform should be able to roll back – and this speaks primarily to Mark's comments and his great work in this area – coerced consents. We believe that TPO unfortunately has been abused mightily, and that it is so broad today that you could probably drive ten Mack trucks side by side through it in most institutions, and they do routinely. Through an independent consent management mechanism, the individual should be able to help roll some of that back to more reasonable levels.

This is a graphic illustrating basically how You Take Control works. There
are a bunch of folks on the left who want access to your data, and those are
myriad in number and applications. There are a bunch of folks on the right that
we call source data providers, those who hold your data.

"You Take Control" sits in the middle and holds none of your data – I repeat, none of your data. But we do hold the very important and critical element, which is your authorizations and consents to release the data. We also provide the technology to intercept the incoming request, protecting the source data provider; to grab a standing or existing authorization – we have three broad categories of authorizations at YTC; to provide it along with the request to the source data provider, who can then release that data with authorization; and to capture the complete audit trail and make it available to the individual and to the enterprise that released the information, and so forth.
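Read as a protocol, the flow just described is a brokered-consent exchange: intercept the request, look up a standing authorization, relay the authorized request to the data holder, and keep only the consents and the audit trail. The sketch below is an illustration of that reading, not You Take Control's code; every name in it is invented.

```python
from datetime import datetime

# Hypothetical store of standing authorizations; the broker keeps consents and audit
# records, never the clinical data itself.
AUTHORIZATIONS = {
    ("lucy-williams", "medicalert-ed", "rx"): {"expires": "2008-08-23", "scope": "pharmacy data only"},
}
AUDIT_TRAIL = []

def broker_request(patient, requester, data_type, source_provider):
    """Intercept a request, check for a standing authorization, and relay it to the data holder."""
    auth = AUTHORIZATIONS.get((patient, requester, data_type))
    AUDIT_TRAIL.append({
        "when": datetime.now().isoformat(),
        "patient": patient,
        "requester": requester,
        "data_type": data_type,
        "authorized": auth is not None,
    })
    if auth is None:
        return None  # no consent on file: nothing is released
    # The holder releases the data in response to the authorized request.
    return source_provider(patient, data_type, auth)

def pharmacy_source(patient, data_type, auth):
    """Stand-in for the source data provider (e.g. a pharmacy data network)."""
    return f"current {data_type} list for {patient} (released under: {auth['scope']})"

print(broker_request("lucy-williams", "medicalert-ed", "rx", pharmacy_source))
print(AUDIT_TRAIL)
```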

So that covers the basic attributes of You Take Control as the independent consent management platform. Let me give a realistic example. There are, as you are aware, many employer-sponsored programs to try to facilitate the aggregation of PHRs and other health information. And, of course, when some of those have been announced, the challenge has been raised: what is there to say that you, XYZ employer or ABC employer, won't just look into that repository, since you're facilitating it, and fire anybody you want at will because of what you find?

There have not, to date, been very satisfactory answers to that. But enter You Take Control as the independent consent management platform, and the problem is solved, because they can say with a straight face: neither we, ABC employer, nor XYZ employer will get access to that sensitive information without first checking with You Take Control, where you can control what we do with your data. Through You Take Control and through your authorizations, you will then be able to control who has access to that data, and hence it is never perceived as the fox in charge of the hen house, as the employer – or those who hold the data – might be perceived.

I’d like to now show a couple of other examples of how it might be used.

And this is a very pragmatic example with one of our partners that we’re
working with right now, and I will generally refrain from sharing any details
about our partners right now. But that’s the graphic that I was just showing
you and why we’re independent and sit in the middle.

This is an example of an actual standing authorization that can be signed
and placed in your account as an individual at You Take Control. It is pointing
out that whether I’m conscious or not for the purposes of medical evaluation,
treatment, et cetera, you have access to just my pharmaceutical data, as an
example, for a period of a year from the date of my signature.

We support what we call five levels of paranoia in electronic signing: you can sign it with merely a click, or with multiple biometrics if you are that paranoid about it. How serious and how concerned you are about the document you're signing is up to you. But by signing such a document, we can access – and do have access to – a data source with a hit rate, which I'll show you, that is pretty staggering. Here is the concept.

Having signed that authorization, there is no need for any breaking of the glass – none of that is necessary. Rather, it is an advance directive, if you will; the sort of thing that can sit there in my account and be used to literally save my life in an emergency situation, or even for any treatment wherever I may show up.

So in this hypothetical example with our partner MedicAlert, the ED at Florida Hospital – the one we're using as an example – can use the call center at MedicAlert or use the USB-based token that they also provide. Either one will work fine, and we can work with all PHR vendors in this same regard.

But we have access to, and have under contract, the most complete source of Rx data anywhere today. Here's the scenario. A patient, Lucy Williams, shows up in a comatose state at Florida Hospital. They locate the patient's MedicAlert ID, her bracelet, and use the call center or the USB device. They can insert that into any computer in the emergency room and answer two questions: is that really Lucy Williams lying on the gurney over there, and did I get the right record, looking at her driver's license or any other ID material coming off the PHR or any other ID that she may be carrying. That guards against having the wrong data for the right patient, or the right data for the wrong patient.

And then verifying themselves and this is another very significant piece of
the puzzle, being able to authenticate themselves as a provider, and in less
than a minute, we can provide this kind of screen through MedicAlert of
allergies. Where does that come from? It comes from the personal health record
or from somebody like a MedicAlert. This area comes off of those. This area
comes off of the PHR.

But the most challenging aspect of PHRs is, other than, and I’m going to be
a little cynical here, forewarn you, other than the fact that they’re not a
legal record, other than the fact that they shouldn’t be trusted as being
current, accurate or complete by any provider, they’re just fine. So given
that, one of the real challenges is what am I pulling off of this PHR. How
valid, how much can I trust it. Quite frankly, the allergies are probably the
most trusted piece. Where do you get that? You get it from the patient anyway
typically.

Some 50 some odd tests can be performed, and how many patients have that in
their record, probably not too many, but some. But quite frankly, the allergies
are a piece that ought to be trusted or could potentially be trusted. This is a
piece of data that’s very hard to get. And if I were a surgeon, that data
element right there is going to tell me I’m going to have some real problems
because it’s such a high dose, and as a surgeon I can anticipate some real
problems.

The centerpiece here is what You Take Control can facilitate by using that standing authorization: going out and hitting that source of Rx data and pulling up the most current list. And in fact, it is so current that it is updated every 24 hours by all 51,000 pharmacies, getting an over 80 percent hit rate, and it will soon be well into the 90 percent range. It is a secret database that hardly anybody knows about. Here is the audit trail, and I'll get into that database in a moment.

If Lucy Williams recovers from her trip to the ED and three hours later is
in recovery and gets access to the website or to her records at YTC, she will
have this complete audit trail as does the holder of the data including what
data was released, how it was released, and if she clicks on that, she gets the
actual authorization that enabled the transaction in the first place. So this
is a very pragmatic example that we are pursuing currently with multiple PHR
vendors.

Where does the data come from? This is where the data comes from. It totals more than 12 terabytes – over 230 million Americans' data – updated every night by all 51,000 pharmacies. It has been used for underwriting purposes in life insurance, long term care insurance and health insurance. It is, as I say, updated over this link for all those Americans, and in this AMIS system alone there are more than 100 million people. It is individual data that is being used, quite frankly, in the underwriting world right now, and it is changing the underwriting world.

So I wanted you to understand what that scenario is. Basically, MedicAlert can, through their call center, place the request. We can intercept that request, grab the authorization, bring it back here, hit the data source, and hand that data back to MedicAlert, and the whole audit trail, including the envelope information that I just showed you, is there – potentially saving people's lives.

Dr. Golodner earlier this year talked about these nine domains of privacy and security. We address all nine of them in very powerful ways. So I wanted to show that to you and describe a pragmatic example of how it might be utilized. We can work with – ultimately we plan to work with – anybody who is holding data, including payers, providers and so forth.

We do believe, though, that there is a huge opportunity to roll back this coerced consent that the payers and others insist on, and that a mechanism like the one we can provide offers some very practical ways of doing that.

Why don’t I open it up to –

MR. REYNOLDS: Yes, I’ve got one other thing to cover first, and then we’ll
open it up to everyone.

DR. DICK: Sure.

MR. REYNOLDS: Mary Jo, you were going to mention that we’ve got some other
written testimony that came in on this subject. If you would touch base on
that, and then I’ll open it for questions for all of our speakers.

DR. DEERING: Earlier on, the Workgroup said that it would be interested in
understanding about consent tracking technology that’s used within the cancer
biomedical grid. This is not – what you have is two items. You have some
slides that look like that that you actually don’t need to focus on.

The more important one is something that says caTissue, because it is about biospecimens – the caTissue systems requirements. I'm going to take very little time, just to give you a high-level overview of what this technology does.

Right now, it is in place in 11 or 13 cancer research organizations that banded together to do prostate research, and it will be offered to all NCI-designated cancer centers in the first quarter of 2008. I'm just going to call your attention to a few things. What is interesting here is that this applies both to the secondary use of the initially collected specimen – sort of like the secondary use of data – and to derived specimens, because specimens are capable of giving derived specimens. So you have secondary specimens; you have secondary use of secondary specimens, in a sense. It's not as complicated as it sounds, but I just found that sort of interesting.

On the bottom of page two – and I regret that the pages are not numbered – Item B shows you, for example, that there is a variety of different consents, ranging from you can only use this for this purpose, all the way to you can use this for any purpose in the future, to please contact me about future research that you'd like to do, to you can even use it for genetic studies, et cetera.

So there are those different levels, and those are just examples of consents. On page four, skipping a couple of pages, a couple of things to draw your attention to: under item number nine, it does indicate that the tumor tissue biopsy could yield derived RNA, DNA or tissue microarray specimens, and, again, you can track that. And then the very last line on that page says that, while physically and technically the actual collection of the tissue and the consent may happen asynchronously – because you go one place to give your tissue and another place for your consent – the specimen may not be distributed until the consent status is verified. So that's another requirement.

Skipping just to the next page, under E, it notes that if you think of all
the consent tiers and the different uses that you could get multiple consent
tiers, and the data entry burden can get very difficult. However, there is
technology that enables you to cluster your data elements so that you can input
all of these multiple consent components at the same time.

Item F says that it is absolutely essential that you be able to track consent withdrawals, which are permitted. On page seven, Item G says that whenever any kind of specimen is being viewed and the results are displayed visually on a computer, the consent status must be displayed simultaneously.

And then lastly, Item H at the bottom of that page: because you have enabled these different layers of consent, you can then do future research based on the consents available. So you could say, find me a specimen that is a DNA specimen where I can use it for this purpose. Or you could say, let me find one where someone has consented to be contacted in the future. So, again, that can facilitate your access for secondary use purposes afterward. Please don't ask me any technical questions.
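For illustration only, here is a sketch of the kind of consent-tier filtering just described; the specimen types and tier names are invented.

```python
SPECIMENS = [
    {"id": "s1", "type": "DNA", "consent_tiers": {"any_future_use", "genetic_studies"}},
    {"id": "s2", "type": "tissue", "consent_tiers": {"this_study_only"}},
    {"id": "s3", "type": "DNA", "consent_tiers": {"contact_for_future_research"}},
]

def find_specimens(specimen_type=None, required_tier=None):
    """Return only specimens whose recorded consent covers the intended use."""
    return [
        s for s in SPECIMENS
        if (specimen_type is None or s["type"] == specimen_type)
        and (required_tier is None or required_tier in s["consent_tiers"])
    ]

print(find_specimens("DNA", "genetic_studies"))                     # usable for a genetic study
print(find_specimens(required_tier="contact_for_future_research"))  # donors who agreed to recontact
```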

MR. REYNOLDS: Thank you. We appreciate your input. Questions by the
Committee for either Assaf or Richard.

DR. CARR: I have a question for Assaf. Well, thank you to all the speakers.
This was most interesting. A question for Assaf is, is dbMotion in place, where
is it in place, and how long has it been in place?

MR. HALEVY: Our U.S. headquarters are in Pittsburgh. We have been in the U.S. for the past four years, and we have our R&D center in Israel. We have implementations at UPMC in Pittsburgh and in the Bronx, and we have projects going on in Belgium, the Netherlands and France.

DR. CARR: My question is, are you tracking, are you aware of unanticipated
consequences. For example, going back to the nurse who can’t access data at
night if she takes a night shift, I mean, it seems like there is a lot of
specificity in detail. But that introduces complexity in the sort of unexpected
changes in roles or need for access.

MR. HALEVY: Actually, it’s the opposite because the complexity is there in
the field. I mean, I think the complexity is exactly in the way providers are
practicing medicine. Providers are working in different units and different
organizations. They move and change roles and responsibilities. They treat
different patients. Patients are moving from one department to another, being
discharged to rehabilitation or to home. And then after one week, they're back in the ED. Should that still be the responsibility of the previous encounter or not, and so on. I think the approach that I shared actually handles this complexity through the fact that you are capable of tracking those changes and monitoring them in a way that you can always, in a flexible way, slice or take a time stamp and have a picture of exactly what happened and who is doing what.

DR. CARR: But I mean is it interfaced with the nurse’s on call schedule or
something like that? How do you, what if a nurse is working the night shift,
but she’s not allowed to look at data in the night?

MR. HALEVY: Well, that means that when she logs in to the network on the night shift, okay, and this UPO, the Universal User Principal Object, is generated for her for that specific session or treatment or data consumption that she's doing, right there the rule is enforced, because then –

DR. CARR: So she doesn’t have access, or it makes an audit –

MR. HALEVY: It's up to the policy of the organization. It can do either or both: it can prevent the access, it can just log the access, or it can explain to the user what's going on.

DR. CARR: I guess that’s my question, though. How do you, given all of that
variability, how do you ensure that her need for immediate access to care for a
particular patient is not interrupted by any question answering on the
computer?

MR. HALEVY: Okay, well, in most cases there is no question and answer. The organization decides what prevents or allows access, and we track it. And in all cases there is a kind of break-the-glass capability for all users. So if the nurse thinks at that point in time that she needs to have access – regardless of all the rules, because, for example, it's a life-threatening situation or whatever – she can always just check the break-the-glass check box and log in, and she will have access no matter what. The system will then log and track that, and she will be asked to say why she is breaking the glass. So really, from our perspective, we provide flexibility to the organization to configure how annoying you want that process to be for the user, from zero to 100. It can be transparent to the user. For example, in Clalit Health Services, at some point in time the committee said, we would like to notify the user that access is prevented due to such and such – for example, that it's the night shift and you're not allowed. The field then generated a lot of complaints, like why is one person allowed and another is not, with different levels of data exposed to different users. Then the committee changed the approach and said, we will always give a notification that you are not seeing the full data set. So we'll tell you what you don't get: we're showing you lab results, those are the sources you're seeing, and those are the sources you're not looking at right now. So you know what is available and what is not, but you don't know the reason, whether it's security or anything else.

Again, it’s a question of policy, the way they enforce it. From our model,
it has no influence on the way we can implement. We can support both.
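A minimal sketch of the configurable outcomes Mr. Halevy describes – deny, log only, or notify – together with a break-the-glass override that always requires a stated reason and is always logged. The mode names and return values are assumptions made for the example, not the product's API.

```python
LOG = []

def access_decision(rule_allows, mode, break_glass=False, reason=None):
    """mode is the organization's configured choice: 'deny', 'log_only', or 'notify'."""
    if break_glass:
        if not reason:
            raise ValueError("break-the-glass access requires a stated reason")
        LOG.append(("break_glass", reason))  # always tracked; access is granted no matter what
        return "granted"
    if rule_allows:
        return "granted"
    LOG.append(("rule_blocked", mode))
    if mode == "deny":
        return "denied"
    if mode == "notify":
        return "granted_with_notice_of_hidden_sources"  # user is told the data set is not complete
    return "granted_and_logged"                          # 'log_only': allow, but keep the trail

print(access_decision(rule_allows=False, mode="deny"))
print(access_decision(rule_allows=False, mode="deny", break_glass=True, reason="life-threatening emergency"))
print(LOG)
```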

DR. CARR: Thank you.

MR. REYNOLDS: Mark.

MR. ROTHSTEIN: Yes, I have a question for each of you. I know your slides are not available, but slide number 15 was the most interesting one, and you sort of blew right through it. I can't tell you exactly what's on it – I can't read it, it's so small – but I can tell you what it is, and maybe you can describe it. It's the slide that has the check boxes for the consent.

MR. HALEVY: Okay.

MR. ROTHSTEIN: And there's a whole list, but I didn't get to see all the elements that they get to consent to. And the question that I had was, have you implemented that part of your system yet, and how has that gone, and so on?

MR. HALEVY: Well, first of all, the list that was described here is just a sample, meaning that in your organization you can decide to have a different set of rules that you would like to enforce. You will have the platform to create them, and those will be the rules that participate.

As I said, we live in a world with both a central solution – for UPMC, as an example – and a distributed solution – in our case, for the Bronx RHIO, in which we have a CDR in each hospital separately because they are totally independent organizations. We can have, for example, four or five rules that the state or the region or whoever is forcing on everybody; those are read-only rules for all institutions. You cannot change them; you must comply with them. Then you can add more, where the policy allows you to have control and ownership independently and create your own rules, if you so choose.

MR. ROTHSTEIN: But I have a sort of practical question, and that is, how do you identify those data elements in free text? There are several states, for example, that have special rules with regard to genetic information. Well, how do you define that? How do you identify it? How do you retrieve it or not retrieve it? I mean, there are lots of these kinds of issues.

MR. HALEVY: Yes, absolutely, and the answer is in the first, simple slide, which – again, it's too many details for a ten or 15 minute presentation. But somewhere in the purple layers, which are the data layers – the way we consume data from the operational environments – there is a UMS, a Unified Medical Schema, in which we create clinical domains mainly derived from the HL7 Version 3 RIM in terms of the data model.

But let me take it to a little more practical level from the data schema perspective. At the end of the day, you map your private world to the unified medical schema and the RIM, and security and rules are enforced on those elements. We're not looking into the database. We're looking at our logical data model, which is linked and mapped to whatever you're actually consuming.

So to your question: you will look at the lab domain, in which you'll pick biochemistry, and you'll decide on some level of policy based on a population of patients, as an example – cancer patients, whenever there is such and such, I want to allow whatever. You enforce that on the business objects and the data objects that reflect those. You don't care at that point whether the biochemistry eventually comes from Cres or from Misys or from somewhere else, or whether it will be reported as HL7 messages, use the LOINC standard, or be free text. You are absolutely correct that if it is free text, as an example, then the level of flexibility is limited; we will not be able, at least not today, to parse all of the free text and understand and compute on the logic of what you wrote.

But we will still be able to treat the free text element itself as an entity on which we can enforce the logic that you define and associate with it.
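One way to picture that separation between the logical data model and the physical sources is sketched below; the domain name, source names and rule are entirely made up for illustration.

```python
# Hypothetical mapping from a unified clinical domain to whatever the sources actually send.
UNIFIED_SCHEMA = {
    "lab.biochemistry": ["source_a_hl7_feed", "source_b_loinc_feed", "source_c_free_text"],
}

POLICIES = {
    # Rules are written against the logical domain, never against a particular source database.
    "lab.biochemistry": lambda ctx: ctx["population"] != "restricted_cohort",
}

def enforce(domain, context):
    """Apply the domain-level rule to every mapped source, including free-text ones,
    which are treated as opaque elements governed as a whole."""
    rule = POLICIES.get(domain)
    allowed = rule(context) if rule else True
    return {source: allowed for source in UNIFIED_SCHEMA.get(domain, [])}

print(enforce("lab.biochemistry", {"population": "general"}))
print(enforce("lab.biochemistry", {"population": "restricted_cohort"}))
```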

MR. REYNOLDS: You said you have another question?

MR. ROTHSTEIN: Yes, if that’s okay. I have a question for Richard. You had a
very provocative statement that I wrote down, and I’m sure you intended it to
be provocative. And you said that opt in and opt out are now irrelevant because
you can get down to the individual data elements. And so I need to pursue that
for a second.

DR. DICK: I’m sure you do.

MR. ROTHSTEIN: Wouldn’t that only apply if you knew what you were looking
for. So here’s my example, okay. I applied for a job with a telephone company,
and my job’s going to be driving trucks and climbing telephone poles. And they
as a condition of the job, they make me sign – they requested I sign an
authorization –

DR. DICK: If you want the job.

MR. ROTHSTEIN: If I want the job, and it’s to disclose all of my medical
records. And so you’re going to – everything goes, right?

DR. DICK: Well, what I would argue in that case is, you know, as HIPAA says, each covered entity must have in place a minimum necessary release policy. And so is it really required that they have the entirety of your medical record, or is it enough that they have certain subsets of it that are germane to the task at hand?

MR. ROTHSTEIN: Right.

DR. DICK: That's the real issue. And what I'm saying is that with the technology we can provide, you can get down to splitting hairs and carve out only those subsets of the record that are directly germane to the organization placing the request and the reason for that request. I'm just saying that we're providing a very advanced mechanism that would enable that to happen. Whether or not the individual would be able to insist that it happen is an entirely different issue.

That's why I was also talking about being able to roll back what is covered under TPO. Instead of it being so broad, the individual could conceivably get to the point where they could say: yes, a payer may need these kinds of information, and I'll sign up for this minimal set, but I'm not going to just give you carte blanche access to everything the way you routinely do today. There is at least a mechanism in place that we're providing that could begin to roll that back, especially if the public demanded it.

MR. ROTHSTEIN: Okay, so it seems that there are two elements that are still missing. One is some legal restriction on employers' requesting everything, which they now can do in 48 states.

DR. DICK: Right.

MR. ROTHSTEIN: And the second is what we've called for, and that is the use of contextual access criteria and research on that, because here is the missing step: suppose there were an authorization that said, send everything that's related to my ability to climb telephone poles and do that sort of work. Well, then you've got to translate that into some sort of algorithm that can be searched against the medical record, and for 10,000 job descriptions that takes a lot of work.

And so you’re going to have to figure, okay, well there’s this orthopedic
stuff we need to send. There’s this stuff, maybe not this stuff, and that is
– we’re, I think, a long way from having that part of the solution
available, even if your technology will be able to sort of search and parse.

DR. DICK: Now, one of the things that we provide is a very potent set of metadata around the authorization itself. That's something of real, significant interest to HITSP and others, because in a very pragmatic world you have to be able to use the authorization to map to, for example, the various kinds of data that may be stored within the enterprise that is holding the data under consideration for release.

And so it’s vital that that set of metadata be able to address those kinds
of issues. And I’d be glad to go into the technical details of all of that
outside of this meeting. But there are some very significant issues associated
with that, and we believe that there’s some interesting and innovative
mechanisms for addressing that in a more, I’ll call it, comprehensive way than
most have been able to deal with today.
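As a purely hypothetical sketch of the sort of metadata envelope around an authorization that Dr. Dick alludes to – the fields and the matching rule are assumptions, not a description of HITSP work or of YTC's format:

```python
authorization_metadata = {
    "patient_id": "p-12345",
    "signed_on": "2007-08-23",
    "expires_on": "2008-08-23",
    "purpose": "medical evaluation and treatment",
    "data_categories": ["pharmacy"],           # maps to the holder's internal data classes
    "requester_types": ["treating provider"],
    "signature_assurance": "click-through",    # one of the 'levels of paranoia' mentioned earlier
}

def covers(meta, category, purpose):
    """Would this authorization cover a request for `category` data for `purpose`?"""
    return category in meta["data_categories"] and purpose == meta["purpose"]

print(covers(authorization_metadata, "pharmacy", "medical evaluation and treatment"))   # True
print(covers(authorization_metadata, "mental health notes", "employment screening"))    # False
```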

MR. REYNOLDS: Thank you. Mary Jo?

DR. DEERING: I have one question for Richard, which in a way is also perhaps for the Workgroup, and then one question for the Workgroup. Richard, you made another provocative statement, I thought, that Mike was going to pick up on: you said the AMIS database is a secret – well, first you called it a secret database that nobody knows about, and then you told us what it was. And you said that it has my personally identifiable information about every prescription I've ever had. I want to know, how did my personal health information get in there? Where were the consents that allowed it to get there? And where were the business agreements whereby you got it? And that's only half of the question.

The question for the Workgroup is, is this a gap in public policy.

DR. DICK: I will tell you that I was directly involved and designed and
built these AMIS systems, okay. I did not set out to build the ultimate Big
Brother, but that’s what happened, okay.

If I can take just a minute and explain this because it’s very –

MR. REYNOLDS: Don’t go too far.

DR. DICK: Basically, these AMIS systems are owned and operated by Engenics.
That’s the company we sold it to. As I say, it’s getting those kinds of hit
rates, and it’s used for underwriting purposes. It takes underwriting from 90
days to ten minutes. How did the underwriters get it?

When you apply for life insurance, long term care insurance, you always sign
that. You sign an authorization that says I will permit access to all of my
driver’s license information, my health information, my medical records, you
name it. Pretty interesting authorization, and it is used routinely to get at
medical records including all of this very sensitive pharma data.

It's the drug, the dose, the strength, the number of days supplied, the pharmacy that filled it, the physician who prescribed it, and their specialty. And it is updated every night at two thirty in the morning, over this link, from the production systems of virtually all of the PBMs. It has all the mail order. It goes back 60 months – at least five years – and it is a pretty potent database. It dwarfs RxHub and anything else that's out there today.

It is real. It’s operational, has been operational for about four years. It
takes underwriting from 90 days to ten minutes. That’s what it was designed
for. I said, I was in the –

SPEAKER: Insurance underwriting?

DR. DICK: Pardon?

SPEAKER: Insurance underwriting?

DR. DICK: Insurance underwriting like long term care insurance, health insurance, and claims on the property and casualty side, okay. And so with the authorization, they have access to it, and they will only hit it when they have the authorization. But the data is sitting there in the PBMs. What is it to them to have data in ten systems versus 11 systems sitting on their premises?

It's just like that. They always have that authorization in place to hit these databases. YTC now has this data under contract for other purposes, and what it could do in saving lives in emergency rooms is primarily why I wanted to build this system. Then the PBMs, quite frankly, got scared and said, we're going to keep this super, super secret and we'll just use it for underwriting. I hit the ceiling and said, yes, it's legal, it's HIPAA compliant, but it's not right. I said, what would be right is that you, the individual, should be in control of who has access to this data – and all your data, for that matter – and that set me on the course of figuring out how you would go about doing that, and You Take Control is the result.

MR. REYNOLDS: Okay, Mary Jo, I’d like you to hold your question for the
Committee until discussion, okay. Justine, and then we’ll break.

DR. CARR: Thank you. Just a question about You Take Control. I’m not clear
about who all is using it today. It sounds like insurance companies. What about
emergency rooms?

DR. DICK: As I pointed out, we are working with several PHR vendors now, and MedicAlert is going into a pilot and then production with their call center and so forth for using You Take Control, introducing all their members – about four million people – to become simultaneously members of You Take Control.

We’re working with some other employer groups. So we’re just this fall
rolling this out in real time, okay. And we have several partners that are in
the wings that I’m not prepared to yet announce, but some very large
enterprises. Okay.

MR. REYNOLDS: Okay, with that, thank you again to this panel. We’ll take a
break until 3:45 and be back then.

(Break)

MR. REYNOLDS: It’s getting late. We don’t even know what seats we’re in.
We’ve been here way too long. All right, with that, our next testifier is Cindy
Brach from AHRQ. So Cindy, we really appreciate you being patient, and this is
kind of our last set of hearings on testimony. So we don’t want to cut anything
overly short, or we’d be writing it without knowing anything. So thank you very
much, and we appreciate your comments.

Agenda Item: Risk Communication Strategies

MS. BRACH: This is actually my first hearing that I’ve been at where they
played Vivaldi during the break. So it’s been very pleasant.

Mary Jo asked me to come and talk to you a little bit about what we’re doing
at AHRQ in the area of informed consent and authorization with a lens of health
literacy, and one of the many hats that I wear at AHRQ where I’ve been for over
a decade is sort of the lead for health literacy at the agency.

And for those of you not familiar with health literacy, this is the definition that comes from Healthy People 2010 – increasing Americans' health literacy is part of the health communication goals for Healthy People 2010 – and you'll see that it actually doesn't say anything about reading or writing in there; the concept of health literacy is much broader than that. It is about being able to obtain, process, understand and effectively use health information.

And the Institute of Medicine, when it issued its landmark report on health literacy in 2004, recognized that this also takes place in the context of culture and language. So there is some overlap with the issues around limited English proficiency and cultural competence when we address health literacy.

This is a graphic from an article by Dave Baker on the meaning of health
literacy that was published in JAMA in 2006, and it’s the best picture that
I’ve seen to date that really displays the interactive nature of health
literacy.

On the left hand side, you have what we typically think of as the individual's capabilities, what they bring to the interaction: their reading abilities, their numeracy, their prior knowledge. But equally important is what we have at the top and the bottom and the middle in yellow, which is the complexity or difficulty of the health messages, both print and verbal, that patients are presented with. And it's really the interplay of those two things that produces health literacy. You can think of health literacy in the print area, and you can think of it in the oral health literacy area.

Having now told you that health literacy is a fairly broad and complex construct, unfortunately all of our efforts to date to measure it have really relied on the print-literacy-based concept. In 2003, a national assessment of adult literacy was done, and for the first time, as a component of that, we had a measurement of health literacy – the aspect of health literacy being the ability to read a document, a chart or prose and to answer some comprehension questions about it.

As you can see on that far left hand bar that says total, only 12 percent of U.S. adults are considered proficient in health literacy. So the sound bite that HHS has taken away from that is that almost nine in ten U.S. adults may face some challenges in obtaining, processing and using health information.

You can also see from this graphic that health literacy shows a great deal of ethnic and racial disparity. If you follow those blue or periwinkle bars at the bottom, you can see that 24 percent of African Americans, 41 percent of Hispanic Americans, and 25 percent of the American Indian and Alaska Native population are at the below basic level. That's the lowest level for those who could even take the test.

And we have 66 percent of Hispanics who are at the basic or below basic level. So when you think about health literacy, while the majority of people in the lowest, below basic health literacy category are in fact white Americans, we see that it is disproportionately hurting minority populations.

Back in that Institute of Medicine report, they recognized that informed consent is one of the issues that present challenges when you're dealing with a population with low health literacy, basically identifying that there is a fundamental mismatch between the complexity of those documents and the reading capabilities of the average American.

Now when we talk about the average American, let us remember that that means 50 percent read at a lower level than that. So we observe this problem. There are a number of studies in which research participants surveyed after giving informed consent indicated that they really did not understand what they were consenting to. In addition, it is also true that there is a mismatch with reading capabilities not only on informed consent documents but on privacy notice documents. In a study done in 2005, Michael Pasheur(?) found that the average reading level of privacy notices was above the 12th grade level, and that this did not vary based on the local literacy level or the percentage of limited English proficiency patients that the institution was serving.
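For context on what a figure like "above the 12th grade level" usually means, here is the standard Flesch-Kincaid grade formula with a crude syllable counter; the cited study may have used a different readability measure, so this is only indicative.

```python
import re

def count_syllables(word):
    """Very rough syllable estimate: vowel groups, minus a common silent trailing 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Standard Flesch-Kincaid grade-level formula."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

notice = ("Authorization for the use or disclosure of protected health information "
          "shall remain in effect until revoked in writing by the individual.")
print(round(flesch_kincaid_grade(notice), 1))
```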

So basically these documents are out there, and they're clearly not at an appropriate level. What's more, there is the process: we know that informed consent and authorization is more than just a form. It is a discussion, and that discussion is not standardized, and there is no standard process for verifying that the prospective subject really understands what they're agreeing to.

So the way I got into this issue was that one of AHRQ's staff members who works on the privacy rule was discussing, in an HHS committee, concerns that the privacy rule was having a chilling effect on health services research, and that researchers were very concerned about this. They commissioned a study to take a look at that potential problem and found that health services researchers were indeed quite concerned. One of the issues they identified was that there were now two processes. They already had an informed consent process; they had complied with a set of regulations and IRBs and kind of knew that, and it was familiar. Layered on top of that was now this new HIPAA authorization process, and there were some overlapping elements, but the two were different, they were different forms, and it was confusing for the research participants, not to mention the researchers. And they said, it would be really helpful if you could produce some templates that integrated those two, so that when we need to obtain both authorization and informed consent we don't have to go through two separate things. And not only would we like a combined form template, but we'd like it developed for a low-literacy audience, because we want to be able to include people of varying literacies in our subject populations, and we want it for the health services research we do, because a lot of the attention that's paid in this field is related to clinical trials, which have a whole layer of complexity that health services researchers don't have to grapple with.

So, grasping that low-hanging fruit, we have proceeded to develop a toolkit to help health services researchers in this arena. We started with the contractor who did the study, who took a first stab at it. It came to me as the health literacy expert for the agency, and I said, not good enough.

We rewrote them, and we said it's got to be more than forms and examples, so we built a toolkit that dealt with the process around them. We involved the Office for Human Research Protections and got their comments. We sent it out to a lot of informed consent and health literacy experts and got their comments as well.

And then just last month, we let a contract to test this revised toolkit, and I'll tell you a little bit about that in a moment. But our end game in this is to, as I say, practice what we preach, which is to ask our researchers – just as we ask clinicians – to take into account the health literacy of their subject population, to make sure that their documents are going to be understandable, and to give them this toolkit as a way of helping them comply with that expectation.

Just to note that while there are sort of standard ways in which we try and
produce simplified materials, I don’t want to imply that it is an easy job. I
found it personally a very humbling experience. And so I want to acknowledge
that this is something that requires a fair deal of expertise and effort.

But just to give you an example of what a difference it might make: the left hand column, which is marked before, is an actual clause that an IRB has on its website as part of its template, and I can see a few of you have read through it because you're laughing already.

And then on the right hand side, you see an alternative which basically gets
to the essential elements. This is voluntary, and it’s not going to change
anything if you say no.

So it can be done, but we're not used to doing it, and our lawyers are not used to doing it, et cetera. I mentioned before that the toolkit includes a whole process. Part of that process is thinking about the environment in which the informed consent and authorization discussion takes place, whether there are language barriers, taking the time to actually read the form out loud and review it, and then using a health literacy technique called teach-back, which is a way of ascertaining whether or not people understand without stigmatizing them – without making them feel like you're testing them or you think they're too stupid to get this.

So there’s certain ways in which we suggest in the toolkit which are
standard health literacy methods of getting people to sort of validate.

There are other things included in the toolkit that serve that purpose as
well. One is that we developed a certification form which is for the person
conducting the informed consent and/or authorization interview to actually
certify, and it sort of serves as a checklist that they followed the process,
that they did these things, that they discussed all the elements of the
informed consent, that they did the teach back to verify understanding. It’s
not to say somebody couldn’t check it off and sign it anyway, but it’s one step
to kind of say, okay, did you really go through this.

And the other is to use the informed consent form itself as a check on whether the process has been followed, where the person who is giving informed consent says, yes, I understand these elements; yes, they checked with me that I understand it. And in fact, I've got to tell you that the idea for this came from one of my former colleagues at the National Quality Forum, who was brought in for a presentation on the informed consent toolkit that they developed for providers doing clinical consent. She showed a Kinko's FedEx sheet that she had gotten when she sent her overheads, and on the sheet the person who ordered the copying attests: I am confident that the Kinko's staff understands my order and that they repeated back to me correctly what I wanted done – and they sign that. And she said, you know, if Kinko's can do it, we can do it in health care. So that can be part of it as well.

We recognize that just creating a valid toolkit is one step, and that we really need to be involved in getting this toolkit adopted and reaching out to the wide variety of stakeholders at IRBs and in research. In fact, part of our process of testing the toolkit has been cultivating some opinion leaders and getting people behind this, and Mary Jo is going to speak a little bit about how Dick Water(?) at the AAMC is leading a similar effort to simplify informed consent.

And we still anticipate that there is going to be resistance, and in fact we have included material in this informed consent toolkit recognizing that people who want to adopt it are going to have to deal with their IRBs or their lawyers, et cetera, and preparing them to meet those objections.

So as I said, we are testing the toolkit, and not only are we doing cognitive testing with a very diverse population of prospective research participants, but we are also testing it with health services researchers and with IRB officials. We want to make sure that this is considered feasible and useful, and that we really can get some traction in picking this up. Part of that testing contract is to actually promote the toolkit so that we can get it into action.

So in sum, I think that there are a few take away points for the Committee.
The first is, I think, self-evident and something that you probably talked a
lot about which is we are not accomplishing the goal for the privacy rule when
we hand a patient a totally incomprehensible piece of paper and say sign this,
you’re protecting your privacy.

And researchers feel totally locked into this. As far as they're concerned, they've been told that not a word of this totally obscure language can be changed and that it is what they must do. We in the federal government need to provide some guidance to help privacy lawyers realize that there are ways to be compliant and yet be clear, and we at HHS have a role in helping to provide those templates and move that process along.

MR. REYNOLDS: Very good. Thank you, well done. Mary Jo, were you going to
cover similar stuff on the same topic, and then we can ask both of you, or do
you want to go ahead and –

DR. DEERING: I think it will all fit together. But I think all the questions
will probably go to her.

MR. REYNOLDS: Okay.

DR. DEERING: Yes, actually, I’m going to be schizophrenic now because I’m
three people at once to help sort of round out this –

MR. REYNOLDS: Will all of you hurry?

DR. DEERING: Yes, and in fact I’m probably better than having the real
people because I’m basically just an informed pre-reader of their material. So
I don’t know enough to amplify it beyond the time available.

And so in the order in which I’m going to proceed, first of all you have
something that looks sort of like this. It’s heavy text. It’s from Peter
Sandman who is really the father of risk communication. You have a set of
slides that look like this. So that should be the number two in your
collection, perhaps, because this is the order I’m going to take them.

And number three, you have a document that looks like this, and it’s
partnered with the blue and black document. The second one was from Howard
Dickler who made a presentation to another Secretarial advisory committee just
a short while ago, as a matter of fact, about the same topic that Cynthia was
just covering.

And the last one is from Susan Kleinman who leads a communication group that
was contracted in the financial sector to do exactly this kind of work. So I’m
going to start with Peter Sandman. And by the way, the field of risk
communication started way back three or more decades ago, and it came out of
the environmental health field, and it was specifically around the community
right to know and how to communicate risks to health. And it has been taken to
the health field, and it is usually about risks to health, not other
generalized risks.

However, what is so important about Sandman is that it was way back in the
1980s that he coined the formula that risk equals hazard plus outrage. And by
the way, it’s Kevin who encouraged us to have something from Peter.

But the point of that is that what he discovered is that people assess risks according to criteria other than their technical or health seriousness, and that these include trust, control, voluntariness, dread and familiarity. Those, taken together, are the outrage factors, and they are as important to individuals as actual mortality or morbidity in determining risk.

I’m going to very quickly go through his first six points which deal mostly
with the release or withholding of information by health care organizations. So
these are a little tangential to our work, but there’s a couple of points that
may be pertinent.

It seems to me, if I had to summarize this whole first point, that when in doubt, HHS should be biased in favor of transparency or release of information over confidentiality in its dealings with organizations. So it should actually require information possessors to have a good reason for withholding information, and the default should be release.

Second, if it is being withheld, it’s very important to say why. The cost in
lost credibility, he says, and lost trust is much higher when withholding of
information seems mysterious or arbitrary. The third key point, I think, is
that whenever information release is not explicitly forbidden, it should be
required that it be released.

His next point, number four – I believe the message there is that information should not be withheld simply because one thinks it may be misleading or misunderstood. Doing so betrays a contempt that may in fact be much more damaging than any misunderstanding.

I think the key point in number five is that uncertainty – the fact that the information itself is not black and white – is not a reason to withhold it, and he has some URLs you can go to, to learn more about the issue of the uncertainty of the information.

His sixth key point, I believe, is that when you have information that may
be presumed to be either misleading or misunderstood or uncertain, it is best
in all of those circumstances to go ahead and release the information and
discuss the possibilities upfront. He does then have three points where he
discusses informed consent specifically, but he notes that he is not actually
an expert in this literature.

But I think that they get at some of the points that Cynthia just raised.
First of all, he said that it’s important to be clear on what the goal is of
the informed consent procedures. He hypothesizes that you can have three goals.
You just want to do the minimum of meeting the legal obligation without
creating any hassles at all, and you don’t want the patient to think much about
the decision.

Secondly, you can have informed consent about informed consent, which is, you know, going into great discussions about the informed consent process without caring how it comes out. And the third could be to encourage or require active consideration of the material itself. Whatever the goal is – and there could be others, as he says – informed consent protocols can be developed based on established risk communication principles.

His next point, point eight, I believe, is that informed consent about
releasing data could also have different goals. It could urge the patient to
let the default be the decision, whether it’s opt in, opt out, or you don’t
have any choice anyway, whatever the default is; your whole approach could be
to try and just encourage the default. It could be to urge the patient to
either leave the default or look closely at the choice; or to urge or require
careful consideration and give no advice on which to choose; or to urge or
require careful consideration and give advice about what to choose because you
really do have a bias. But, again, effective informed consent can be developed
for any of those goals, and he does believe that the goal should be to help
patients make their own judgments about trade offs, and he does have some
suggested approaches.

And so there’s ample information – he has a website. If you just search for
Peter Sandman on Google, you will find his website, and you will find that a
great deal of information is publicly available from him.

Next are your slides from Dr. Howard Dickler; again, this was on creating
informed consent documents that are approachable, readable and brief. Now his
first two slides, it seems to me, just point out that many informed consent
forms don’t actually meet all of the requirements of the CFR in any case. Some
of them have missing elements, et cetera.

But going on to the next three slides, they show that these forms are at way
too high a reading level. They’re supposed to be at a fifth to tenth grade
level at most, as you’ll see in the slide that talks about the 2003 medical
school websites; those are the standards that were proposed by the medical
school website group. But you can see that the average readability score of
most sample IRB text was over the tenth grade level.

And then the next slide also shows that the increasing length of these forms
makes them very intimidating, they’re getting longer, by the way, and that it
raises a credibility issue, and people begin to think, gee, is there something
hiding in there.

The next slide shows the results of shortening the consent forms, and he shows
that in fact comprehension was inversely related to length: they tested and
found 67 percent comprehension on the short form and only 35 percent
comprehension on the long form. And the point of that last bullet is that there
were actually errors, errors in consent anyway, on the long form. Because this
was a test, remember, two out of 22 volunteered despite a contraindication;
clearly, if they had understood what they were reading, they wouldn’t have
consented. And five of the 22 missed fatal reactions that might have occurred,
and, again, this was a test sample just to show what length and complexity can
do. And then he goes on to show some of the good results from shortening these
forms and lowering the reading level to a little over the eighth grade level.
And you can see that comprehension improved: over 85 percent scored correctly
when they were questioned afterward about what was in it. So there were those
benefits.

Then his next few slides talk about the AAMC deciding to take a serious look
at this and holding a meeting which was the beginning of a process that they
intend to complete to really promote the use of more effective documents. And
you see who the participants were there.

And then they go through some examples of the efforts of the Children’s
Oncology Group and what they learned, and you can see the positive results
there. They mention the AHRQ informed consent toolkit as one possible example.

The next one shows a commercial IRB where they got it down to a simple
one-page consent for simple procedures research and the impacts there. They
talk about some of the obstacles that they encountered, and I’m sure it’s not
surprising that simply inertia is a big obstacle. Well, we’ve always done it
this way; it’s easiest to use the form that we’ve already had.

Also, as you’ve said, writing simply, clearly and concisely is hard. It’s
not easy, and it takes a lot of thought, and no one does it well unless they’re
an expert, and even then they don’t do it well unless you test it and test it
and test it.

So again, they’re coming up with a three-tiered, three-part approach: the
first part would be very simple, very succinct; the second part would be all of
the supplemental information; and then Part C touches on something that you
mentioned, which is sort of like the teach-back approach, to make sure that it
has worked.

They have some next steps, so they’re actually beginning to develop these
materials, these toolkits, establish a repository, begin to implement them, and
work with NIH and the industry. And the last page talks about how that advisory
committee could help support change. The same would probably hold true for this
Workgroup in that it can at least support positive, proactive action by OHRP,
FDA, NIH, et cetera, and promote best practices.

The last one, I’m actually not going to necessarily summarize her testimony
because this is more about how you should do good, effective communication. But it’s
important for the Workgroup to know because you’ve already heard that there’s
good research about how to communicate effectively. This is a nice succinct
document that I think the Department could benefit from. What I wanted to touch
on just very briefly as a model: this document is the executive summary from a
report which was commissioned by six high level federal agencies in the
financial sector: the Federal Reserve System, the FDIC, the Federal Trade
Commission, the National Credit Union Administration, the Office of the
Comptroller of the Currency, and the SEC. If those six can get together and say we
believe we have a problem about the readability of our financial notices and we
want to do something about it, I think that makes a very significant statement.

The way I heard it said through my contacts at the FTC when I first started
looking at this years ago was that they believe the complexity of the documents
in effect negated the principle and policy of the financial notice, that they
just absolutely were not meeting the requirement. And the preliminary report
did indicate that it is possible for financial privacy notices to include all
of the information required by law in a short document that consumers can read.
It can be done.

And what they intend to do is they’re pursuing this project, and they do in
fact intend to explore a full range of options for improving financial privacy
notices in light of all their consumer research. So, again, the take home
message there is that you have to do the consumer research, and you don’t just
hire policy analysts to do the research; you hire people to do the consumer
research to understand what the problems are and what the solutions are. Thank
you.

MR. REYNOLDS: Mary Jo, thank you. I’m going to ask a clarifying question of
Cindy, and then I’ll open it, and I know Mark has a question and we’ll see who
else.

Cindy, we’re dealing with treatment, payment, health care operations, HIEs,
NHIN, research, marketing, commercial use. So a few things. Do your comments
play across all those categories, or do you think that they ought to apply in
any particular areas?

MS. BRACH: What we focused on was research, largely because AHRQ is an agency
that funds and conducts research. And so what I was attempting to do was make
sure that we ourselves were walking the walk in terms of making sure that we
are being responsive to the populations we’re serving.

The next place that I would like to take this work in AHRQ is to bring it
more toward the clinical informed consent curve for clinical procedures and
sort of spread that similarly.

MR. REYNOLDS: Procedures or trials?

MS. BRACH: Procedures. In terms of trials, NCI a number of years ago did a
very nice job putting together templates and materials for clinical researchers
to use. As far as I can tell, though, they haven’t implemented it in a
meaningful way; there are still huge cancer trials that don’t follow those
principles; they haven’t embarked on a large education effort to get those
researchers on board; and the intramural efforts to simplify are way, way
behind what NCI already did. So there’s a lot of work to do on the clinical
trial side.

I think the principles that I’ve been working on operate as much for HIE as
they do for Health Services research. But I also want to acknowledge the
limitations of my expertise in that I haven’t tried to apply it specifically to
each of those designated areas that you just mentioned.

MR. REYNOLDS: Mark.

MR. ROTHSTEIN: Thank you, Harry. I have a comment and a question, and I’m
glad Harry raised that point because both my comment and my question apply to
informed consent in both clinical as well as the research setting.

And the comment is that I think increasingly in medical schools, in teaching
medical ethics, the topic that we’re addressing is not referred to as just
informed consent. It’s informed consent and refusal, to get across the idea to
medical students that it’s acceptable to present the options and your
recommendation to your patients and for them to decline for personal,
religious, cultural, all sorts of reasons. It’s okay for them not to go along
with your recommendation even though you think it’s a mistake medically for
them to do that. So I think that’s an important concept.

The question that I have refers to how you train for the issue, I mean, it’s
not just informed consent, it’s knowing, voluntary informed consent. And for
many people who have the health literacy and who are otherwise cognitively able
to understand what they’re being asked either orally or in writing, that’s not
the problem. The problem is that somebody in a white coat who stands between
them and really bad health is requesting that they do something or recommending
that they do something, and many patients are not going to say no.

And so my question is, have you factored into your educational program some
focus on the nature of the timing when the informed consent is being sought,
and who’s doing it, and the sort of feeling of powerlessness that many patients
and potential research subjects have?

MS. BRACH: Thank you for both the comment and the question. I do want to
actually respond to the comment as well, in that you’re raising the issue of
refusal, which I think is terrific and one of the things that we have talked
about, and I can’t remember whether it was at Howard Dickler’s AAMC meeting or
another forum where we talked about, you know, what kind of metrics you can use
to figure out if this process is successful.

And one of the things that we said is if you have a 100 percent sign up
rate, then that actually should be a red flag that something’s not quite right.
So I think you’re quite correct to underscore that the refusal is a very
important thing. And in fact, if I had to choose one of the eight elements that
OHRP would say are all equally essential or whatever, the voluntary nature is
the most critical.

Earlier when I said we at AHRQ were able to grab the low hanging fruit, it’s
because we primarily fund Health Services research. So what this toolkit is
primarily targeting is people who are promulgating surveys, fielding surveys
and using medical records for Health Services research.

So it is a very different situation then. I heard a witness testify at the
SACHRP meeting, which is the Secretary’s Advisory Committee on Human Research
Protections, who is currently a member of the UCLA IRB but who 12 years ago was
the mother of a child with a brain tumor and who was approached to participate
in a clinical trial, and she spoke very eloquently about that process. You
know, she was a high school teacher, but her health literacy plummets, her son
has just been diagnosed, and here this person in the white coat who she’s
hoping will save her child’s life is asking her to participate in a trial.

And I think one of the important factors that we talk about is having no time
limits on obtaining consent, that for that kind of research you need to let
them go home, talk with other people. You can’t be closing the deal right
there, you know, okay, read it, ask me any questions you want, we’ll talk about
it, and then you’ll sign. And so one of the things that she was strongly
advocating also was to have a kind of peer mentor, that they have a system of
volunteers of people who have been through this process. In her case, it was
parents of children who had been enrolled in clinical trials, to help guide and
serve as an advocate for that person who’s in a very vulnerable position.

Can we totally remove the power relationship that’s there? No, because in
fact she was faced with a situation which is if I don’t do something, my kid is
going to die of this disease. And so the question was how she was going to
proceed. There were some options. But basically she was probably going to do
the clinical trial because that was the kid’s best hope.

Now she would have wished that she had been better informed about some of
the – you know, when they said there may be an IQ drop, she didn’t really
understand they were talking about a 70 IQ level, that kind of thing. So I
think that there are some protections that you can build in, but you’re not
going to erase the power imbalance entirely.

MR. REYNOLDS: Okay, I didn’t see other hands. Oh, good. Yes ma’am, if you’ll
introduce, please.

MS. SOLOVEICHIK: I’m Rachel Soloveichik from the Bureau of Economic
Analysis. One thing I’m concerned about with these consent forms is that
sometimes when you tell people they’ll get a headache, the suggestion itself
gives them a headache. Maybe a rational person would not read too closely
because –

MS. BRACH: Well, let me speak to that in a couple of ways. One is that in
fact I think you’re all probably familiar that people who get placebos often
develop some of the side effects that they are told that they might get, and so
there is the placebo effect of side effects.

But more to the point in this conversation, you know, particularly Bernie
Schwetz, the director of the Office for Human Research Protections who, I’m
sorry to say, is about to retire, talks about having a laundry list of side
effects to the extent that it makes it meaningless, that, you know, if you list
everything under the sun that could potentially, theoretically happen, you’ve
effectively negated getting useful information that’s going to help you make
the decision.

And so it is going to be very tough for the lawyers and the advocates and
the IRBs, et cetera to hammer out what is a reasonable list of side effects to
include, you know, balancing the probability that’s going to happen versus the
severity of what it is that happens.

And if it wasn’t so late, I’d tell you a true and funny story about a release
that included the possible side effect of spontaneous combustion.

MR. REYNOLDS: Michael?

DR. FITZMAURICE: I think that was funny even without the long story. Henny
Youngman tells about a doctor who says to a patient, here, take this placebo,
and if it doesn’t work, come back and I’ll give you one twice as strong.

AHRQ has a long history during the short life of the privacy rule including
proposing the limited data sets with agreements and working with states and the
American Hospital Association to bring it about. So we’re very cautious about
the conditions under which people get data, particularly for research.

Now I’m wondering, Cindy, many times the language that we use comes from
regulations. So if we appear stilted, it’s because we’re trying to copy out
that language. Would you support having OHRP review informed consent language
and maybe trying to come up with a model based on as simple a language as they
can, as a guide for researchers around the country?

MS. BRACH: Well, this is what I’ve been able to negotiate with OHRP, which is
that they will not put their imprimatur on it and say we bless this, because
it’s a template, and whenever you take a template, you’re going to have to
adapt it.
And what they’re concerned about is then an institution will use that as a
shield, and if there’s a compliance issue, they say, but we used your template
that you said was good, and so now you can’t come after us.

What they are willing to do is work with us and essentially say, we have
reviewed this; we don’t find any problems with it. And in fact, we are
depending on that involvement from OHRP to at least get in the gates with IRBs.
Our premise in engaging them early and often is that it’s a nonstarter with
IRBs if they feel that they’re going to get hammered by OHRP.

DR. FITZMAURICE: Let me pose it in a different way, then. Would it be helpful
if another agency prepared a simplified informed consent and then passed it by
OHRP to ask, would this suffice? And if OHRP says yes, then it can be put out,
but not as something that you can hold up in a lawsuit, although if it’s good,
it ought to hold up in a lawsuit.

MS. BRACH: Well, and as I say, you know, just anything has to be tailored,
and the devil’s always in the details. So that when you add in the specifics,
you know, it should be fine, but there could be something that changes it that
makes it different.

I think that that would, there is no reason not to be doing that. OHRP, at
least under its current leadership, has been very supportive of this idea. And
in fact, we’ve formed a little HHS OHRP informed consent dialogue in the
department to kind of talk about these issues, share information across the
Department about what different approaches we’re taking. So that’s certainly
what I’m doing with the AHRQ template, and there’s no reason why the Department
couldn’t follow suit.

MR. REYNOLDS: And for the first time today, Steve Steindel.

DR. STEINDEL: No, I didn’t have a question, Harry.

MR. REYNOLDS: You had your hands up.

DR. STEINDEL: No, I didn’t. I was pointing to –

MR. REYNOLDS: Oh, okay. Let it be known that Steve was just pointing. He has
no question.

DR. STEINDEL: I have no question.

MR. REYNOLDS: Cindy, thank you.

MS. BRACH: You’re very welcome.

MR. REYNOLDS: And Mary Jo, a great job. Absolutely great job in summarizing.

DR. DEERING: It’s easy to be other people, you know. It’s harder to be
yourself.

MR. REYNOLDS: And all of you were fast.

DR. DEERING: Well, I just wanted to make some sort of observations here. I’m
so pleased that the Workgroup wanted to have this panel. I think it’s a tribute
to everybody that they recognized that this is genuinely a problem.

And to the extent that the Workgroup and ONC believe that the success of the
NHIN depends on the trust factor, and to the extent that they believe that
consumers and patients must feel confident about the use of their information,
then I think what this panel, even though its participants were virtual, has
said is that the Department would therefore have an obligation to ensure that
any consent processes, and you notice I say processes, that help build that
trust not only “exist,” but that the extra step has been taken to make sure
that they are effective and contribute to knowledgeable, voluntary informed
consent. And I think that the material presented here also makes it clear that
we’re not talking about just public education, and I do hope that as the
Workgroup moves forward, it will change the whole section where it talks about
this from just public education to information, education and support, and we
can come up with the wordsmithing later. But we’re not talking here just about
public education or a campaign to help people understand what’s going to
happen, and so I think that’s been very effective.

Agenda Item: Work Group Discussion

MR. REYNOLDS: Okay. Right now, we’re going to open the microphone since we
had that on our agenda, if anybody wanted to come up and make any comments.
Okay, seeing nobody moving, what we’re now going to do, and I’ll turn it over
to Simon to lead the discussion, is basically go around the table and kind of
get the Committee’s feelings on what we heard today, what it meant, whether it
added or detracted from where we thought we might end up. And with that, Simon,
which way do you want to start?

DR. COHN: Maybe I just need to make a couple comments before we sort of jump
into this, and obviously I have my list, but I think everybody else does also.
But I do want to just note a couple things.

Number one is that tomorrow morning we are going to get into, and I think we
have a new version, sort of a second draft of what will probably be the
15th draft by the time we get done with our report, but sort of the next draft
of the framework report, beginning to put some recommendations in. So we’ll
start that right at 8:30 tomorrow morning; we won’t start jumping into it this
afternoon. I think most of us have had enough testimony that it probably makes
sense not to do that.

Now another piece, and this is just sort of once again to alert you, I think
as you all know, you received a list of possible meetings and conference calls
for between now and effectively the beginning of November. I notice that some
people have responded; some people have lines crossed out; some people didn’t
even respond; and I’ve actually asked you all to sort of take a second look at
this, and of course Marc Overhage isn’t here right now since he’s one of those
that didn’t appear to even respond. I don’t see Mary Jo Deering on this one; I
don’t see Mike. I didn’t see Mike Fitzmaurice’s input. I think that if slight
time changes would accommodate people better, we need to – I mean, the
recognition here is that if we have this meeting today and tomorrow, if we have
a conference call later in September which is already scheduled, we have a full
meeting of everyone in late September, and then a day or a day and a half or
whatever we’re doing in October, I don’t really judge that that’s going to be
enough to get from where we are now to sort of final recommendations.

And so this is an opportunity hopefully for us to have some time to talk and
sort of move through various periods. Steve?

DR. VIGILANTE: We have a full meeting in late September?

DR. COHN: We have a full NCVHS meeting, not a full meeting of this group
separate. So I’m just sort of saying I want everybody to sort of look
through these things. If there are things on these days that are not listed at
these times but are better suggestions, I think we’re happy to take them under
consideration. So I just want to alert you to that. I do notice that for
whatever reason there’s sort of lines out on a number of Kevin Vigilante’s
times, so he should relook at them. Paul Tang, we will have to communicate with
by email. But as I said, what we’d like to do is get maximum participation if
possible. So I just wanted to alert you to this one because obviously the
sooner we get this in place, and obviously I’d like to be able to announce
these by the end of tomorrow, the happier I think we will all be.

DR. W. SCANLON: There was one message with three dates on it, right?

DR. COHN: I think there was one message with four: three dates, of which one
date had two different times on it. Yes. So anyway, I just wanted to alert you
to that.

Now what I do want to do, as I said, is to really just give people the
opportunity, and as I said, I’ll make my comments after others have jumped in.
But I think what we want to do is to try to capture, while it’s still fresh in
your minds, any of your learnings, any comments that you think fall into things
that maybe help with our framing or maybe help with our observations or
recommendations at this point, and if anything rose to the level of an
epiphany, we’d also be happy to take those under consideration, knowing that
we’re taking notes as we go.

And Kevin, since you look in a thoughtful mood, do you have some comments to
make?

DR. VIGILANTE: You know, I’ve got to say that I don’t know that my thinking
has changed very much based on what I heard today, just looking back over the
speakers, although I was struck by how Monica Jones made it seem so much easier
in the U.K. I was struck by that.

And I was also struck by the way they’re sort of very comfortable with the
use of the word secondary without any apologies for it, and I thought that that
was interesting.

DR. DEERING: If Monica were here, she actually would have corrected herself.
I raised that with her, and she said specifically that it’s in their title, but
they are not comfortable with it to the best of my knowledge. But, again, she
can speak for herself maybe at dinner or later.

DR. VIGILANTE: You know, it did make me think, though, because she did allude
to that. She said, well, it’s secondary, but she did say that, you know, it’s
almost more important sometimes. And it made me think that really, and I don’t
want to get into a semantic debate, but by using the word secondary, the point
is not to describe its importance. It’s to describe it relative to the primary
intent of the original donor of the information. When the possessor releases or
relinquishes or donates the information to the recipient, the primary intent of
the original donor, the patient, was for use in care, and other uses are
secondary to that intent; they may be as important or more important than the
primary purpose. It’s not a comment on importance, but on intentionality.

I enjoyed the conflict between Sean and his opponent from the pharmaceutical
industry. That was interesting. And I thought that the testimony about the
presence of these databases, the AMIS databases of the PBMs, was rather
chilling, and I think that together with the testimony about re-identification
it really showed frankly the ease with which we can be identified and this data
can be used in ways that it was originally not intended.

So that brings me back to our original observations that transparency at the
point of collection, where the donor is clear about how this data may
ultimately be used, becomes all the more important. And certainly the way
that’s communicated in a transparent manner, as we’ve heard towards the end of
the day, is all important.

So it’s been sort of a scattershot of impressions, but I think it was more
confirmatory rather than new epiphanies about where we’re going.

DR. COHN: Michael?

DR. FITZMAURICE: I guess a couple. You mentioned the conflict with Sean
Flynn. I couldn’t help but think, is he representing physicians who are willing
to give up the paid lunches, speaking engagements and trips? If so, just saying
no is probably a good start as opposed to advocating that the states regulate
data mining.

Data mining isn’t the enemy, but I guess what you do with it could be the
enemy. And so it comes to another point, and that is that it’s hard to read the
minds of the public. Now when it’s hard to read the minds of the public, the
answer is to give them a choice, opt in, opt out. But this can harm public
health, it can harm research. So you anonymize and pseudonymize and try to
prevent as much harm as possible.

But I don’t think I know yet where the line is between what the public would
be willing to support and what the public wouldn’t be willing to support. And
in the United States, we tend to let things go until somebody screams. And I
don’t know, I’m not advocating that as such. But I want a greater weighing of
benefits in the public sector so they can see what the advantage of this data
mining and the data uniformity is, separate from the NHIN.

And finally, my mind gets back to what ONC thinks it needs: some kind of
judgment about whether it is okay to use health data for purposes other than
those for which it was originally collected, maybe quality measures, although
some data is collected directly for quality measures, paying for performance.
And also what is an appropriate sale of data to pass to the RHIOs. I have a
problem with that. I would want to get more specific about what are appropriate
revenue producing activities with data that would be acceptable to the public
and under what conditions.

So I agree with Simon that we’re not quite there yet, and we’ll need more
time to mull this over.

DR. COHN: Mike, thank you for your participation. Welcome back.

DR. FITZMAURICE: Well, thank you. It was good to be away, and I was on
conference calls on my cell phone that rang out in the car. I just love this
stuff.

DR. W. SCANLON: I think the day was both reinforcing, to my mind, of the
dimensions of what we’re dealing with, but also helpful in terms of maybe some
new practical information that we can keep in mind as we’re trying to deal with
the bigger principles.

The discussion of New Hampshire brought up for me the issue that it’s not
just the use of the data, it’s who the user is, because the user potentially
can have secondary uses, secondary in a different sense, that we may have
problems with, and that creates a different situation.

Just as we go around the room at the beginning of a meeting talking about
conflicts of interest, there are potential conflicts of interest here. While
someone may be using something in a post-marketing surveillance sense to detect
safety problems, they can also be using it in a marketing sense, which is a
very different sort of context. And I think that’s something that we need to
keep in mind.

The issue, and I think a lot of our focus has been on this, is the issue of
permission. You know, what’s the role of the patient saying my data should be
used or can be used for the following purposes? I’d like to also put on the
table the issue of compulsion, and it may be, to take a phrase that we heard
today, to break the glass from the social perspective. It’s to say it’s
important enough that we want your data.

And you know, we talked about it in the public health sense. I guess there’s
a question of sort of from a research perspective, from other perspectives, are
there any things that rise to sort of that level where we say you really need
to participate from a social perspective.

The other thing, Mark, you said earlier that we try to balance the privacy of
patients. I’d also like to put on the table balancing issues of protection, not
necessarily privacy, but protection of providers, because just as we talked
about how an individual could be harmed because their information was released
and maybe used appropriately or maybe misused, but their employment was
affected, their life was affected, the same can happen for providers. I mean,
we’ve got an issue now in this quality measurement, and we can talk about it
tomorrow, in our draft there’s an issue of what’s being done in terms of
quality measurement. What if the quality measurement is bad? Okay, I mean,
flawed measures, perfectly valid data but flawed measures. What are the
consequences there? Do we have to be thinking about it from the perspective of
legitimately protecting providers as well?

And I think I have a long enough history of being hard on providers in other
contexts that I can defend them in this one, and it’s not that I’m soft on them
over here. They’re representative. But it’s just this idea that these data are
going to be powerful tools once we start to get them flowing and we start to
use them. And so the question is, we’ve got to be very careful about how we
allow that power to be used.

The U.K. discussion, I thought, was interesting, one, from the perspective of
how much easier some things seemed, and monopolies sometimes have easier times,
okay, even though they’re not quite a total monopoly, but they’re pretty far
along the way.

But a lot of it related, I think, to the user in many of their contexts,
which was that they’re users themselves, different branches of themselves, and
I think that affects things.

But then I also thought, in response, about sharing data with the drug
companies; there’s just a whole different sort of perspective there. We
wouldn’t share, and probably we wouldn’t share it because our situation is
different. I mean, they may be kind to the drug companies in sharing data, but
they’re incredibly tough in terms of prices and formularies. So it’s kind of
like, what does it matter after you’ve exercised this kind of control.

So I think it was an interesting thing. But the question is how much can we
sort of apply in terms of our experience.

DR. COHN: We’ll remember this being the day that you actually protected the
provider, so thank you. Steve?

DR. STEINDEL: I didn’t ask any questions, and essentially Kevin and Bill said
what I was going to say. Do I have to say anything? No, but I think that pretty
much sums up what I heard today. I heard a tremendous amount of very good
detail that augments what we heard and in some cases supports what we’re
thinking from the past sessions, and I thought there was a lot of learning, a
lot of education, as was pointed out by Kevin and Bill, that I think is going
to provide some very interesting reinforcement in our discussions and in the
document itself.

The big thing that I came away with was Latanya Sweeney, who I’ve heard give
her talk before, so I was very familiar with what she was going to say. I
thought she really drove home the concept of risk benefit when we talk about
this whole situation of secondary use of data. The basic take home message from
her talk and from Richard Dick’s talk and from some of the other things that
were said was that most of that information is out there about you some place,
somehow, and they can tie it to you no matter what you do about it.

And going into it with that principle, we have to start talking about the
risk benefit. If we’re moving this data into secondary purposes, what is the
risk benefit for doing it? And what reasonable level of protection can we put
on the data so that the world won’t know who you are when they look at these
anonymized data sets or something like that? That was something I thought about
when we first started this panel, and I think it was driven home a lot during
today’s session.

DR. COHN: Steve, for having nothing to say, you actually had a fair amount
to say. So thank you. Marc Overhage. Marc, first of all, we would like to know
your availability for meetings in September and October. So that’s number one.
Number two is that what we’re asking people for is any epiphanies, thoughts,
learnings.

DR. OVERHAGE: Befuddlement.

DR. COHN: Befuddlement, okay.

DR. OVERHAGE: I guess I continue to struggle with finding clarity, and I
mean I haven’t heard much new. But I haven’t seen a shining path either emerge
out of it. And so I’m looking to all of the bright minds around the table for
guidance.

DR. COHN: Mark Rothstein.

MR. ROTHSTEIN: Well, I thought today was extremely interesting and very
informative, high quality presentations. I want to thank Margaret and the
others who arranged it. You guys did a great job.

I’m not sure I have any new insights that are going to be directly
translated into new approaches to dealing with the issue, although I’m not sure
I agree with the view that we’re terribly far away at one level, and that is at
the sort of the broadest conceptual level. I mean, we’ve got a lot of work to
do in terms of the details. But we’ve heard very powerful testimony, not just
today, but at the two prior hearings. And I think, in a general sense, just
from informal conversations as well as questions, that we’re all on sort of the
same wavelength now. But how that translates into recommendations remains to be
seen.

But I’m optimistic that it’s not going to be hours and hours of hashing it
out. Why? Because I’m comparing this to the privacy and confidentiality work,
and everything looks brighter than that. And just on Bill’s point about
protecting providers, I mean, I agree with you that that’s something that needs
to be considered. But I’m not sure that I agree that the way to protect
providers is by limiting the disclosure of health information about patients.

I mean, patients have an intrinsic and a consequential interest in the
information. They’re concerned about what happens as a result of the
disclosure. They might lose their jobs or insurance and so forth. But they’re
also subject to some level of personal embarrassment, stigma and so forth if
information is disclosed. Physicians only have the former, that is, sort of the
tangible harm that could happen to their practice and so forth, and I’m not
sure that we can’t protect that by other means. In other words, limiting what
these other bodies can do with the information that would harm physicians, in
other words, be dropped from a panel, suffer some loss of privileges or
whatever as a result of that.

But relatively speaking, I agree with your point.

DR. W. SCANLON: I actually think other means might be the appropriate way,
but the issue is that we should not be silent as we move forward, because if we
really do move forward in terms of making this happen, it seems to me there’s a
lot more power out there in data, and we need to make sure that there are
appropriate protections for all the parties involved.

MR. REYNOLDS: Gee, I guess a couple things. I think first we probably almost
have two messages, one that we base on today’s framework, and then we heard a
lot of stuff about where the future could be on some of these tools and some of
these consents.

And if you took dbMotion, for example, and being an implementer, if you took
dbMotion, an incredibly rich system, an incredible number of business decisions
and other things would have to be made to ever get there. So I commend the
people that are there, but that’s a big deal to get through some of that that’s
going on. So I think we have to make sure we kind of paint a journey because
this is more of a journey, not a solution, what we’re doing here.

So I think we keep that in mind and build off of what we have. Obviously, it
was interesting to hear about the central model from the U.K. because that’s
kind of what we were talking about with a trust agent, whether people actually
trust it or not, but at least it’s a single point that can have governance
around it, and you can talk about that governance. And if you know that
governance, you could maybe be a little more cavalier, as some of the other
comments suggested, about what you can and can’t do. Where that governance
doesn’t rule, we’re much more of a, she used the phrase wild west, we’re much
more individualistic, especially as we talk about longitudinal records for
somebody, because they’re seeing multiple doctors that have multiple disparate
thoughts and multiple consents and multiple other things going on, and at the
same time there’s a whole other ecosystem feeding off of that for information
and so on.

So I think that’s something we have to absolutely keep in mind. I like the
idea of the risk benefit; I think that was a good comment that was made. I
think as we go through the deliberations, the idea of win/lose will quickly
derail a framework. So as we’re thinking about a framework, I think maybe we
need to put win/lose against it as a filter. I mean, we always hear the doctor
loses, or the patient wins, or this person or that person or research or
something else. I think we need to talk about what we want to make sure of as
we’re doing it; we probably need the courage to talk about a framework and then
the resolve to go through and put filters against it to make sure that what we
would be talking about and doing would not inadvertently put anybody in a
situation that we wouldn’t want them put in as we do it.

So I think that’s, because if this was easy, we wouldn’t have had to do it
and a lot of this. And I think the whole common good continues to be a key
thought. And then I loved the comments on simplified language because that’s
pretty sobering, the list of – sobering, well, I borrowed it from you. I
borrowed it from you, but I knew if I said it, you wouldn’t understand it. So
we’ll go from there.

But I think the thing about that is that we really have to realize that the
people that we’re really trying to get consent from are on that chart, not the
testifiers, not us sitting around the table. It’s really that chart, and we’ve
got to really keep that in mind. So thank you.

DR. COHN: I was going to go for a minute and then let Justine, if that’s
okay. Are you okay on that?

DR. CARR: I actually get the final word?

DR. COHN: No.

MR. REYNOLDS: Mary Jo gets the final word. You and the three people you
represented.

DR. COHN: First of all, I really want to thank Harry for running the
meeting. It’s actually fun to be a participant and scribble notes furiously.
Though being the physician that I am, of course trying to read them afterwards
is always an interesting experience.

You know, there were just a couple of observations and things, and some of
them maybe are new, some of them are probably just things that I’ve been
thinking about for the last while. The first piece I want to talk about is data
stewardship which I was in some ways hoping that we would have more
illumination of thought based on the work going on, and I’m sort of left to
feel that we’re going to have to sort of put this together ourselves, at least
for this report.

You know, from my view, and I’m sort of thinking about data stewardship,
first of all I would say that data stewardship and fair information practices
seem to be kin or cousins anyway. I mean, especially if you’re talking about
principles of data stewardship; principles of data stewardship and fair
information practices are all sort of the same thing.

And certainly the piece that I was sort of, you know, as I thought about it,
I think that data stewardship in many ways sort of permeates the health care
system. What I think we’re noticing is that there are certain places where we
feel that there’s less data stewardship or uncertain data stewardship or
uncertain rules around data stewardship, and this gets to be that
intentionality and going out into the third ring issue.

Now, at least that’s what I think, because, I mean, HIPAA in many ways is a
text around data stewardship if you think about it. Now in all of this, and I
know this is beginning to be in our draft that we’ll look at tomorrow, I’m
maybe sort of wondering, knowing that the end answer of course is getting
better privacy protections that the Secretary supports and that cover all
entities, making this look a little more like a comprehensive approach. But I
am wondering, and I just sort of put it on the table, whether business
associate agreements, especially if strengthened and better monitored, can
begin to close a lot of the gaps that we’re talking about, and indeed, in an
environment where people can’t quite figure out what they really want to do
around data stewardship, whether business associate agreements, properly done
and properly monitored, and I don’t have the answer, I just ask the question
knowing that I’m not a lawyer, can really get us somewhere in terms of
tightening things up in the world in which we exist now. So I just want to put
that on the table, and I’m sure we’ll talk about it tomorrow.

Now in terms of transparency, obviously we heard a lot about education. We
heard about understandable forms, which of course we have not done very well,
either for financial or for health care, recently. But I actually am wondering
also, just to put on the table, about the role of audit trails, and I think we
heard that from Richard, we heard that from others, from dbMotion, that there
may be some role in there to help once again provide increased trust for
people. Now I also was keeping my eye open for things having to do with
minimizing risk, and I guess I’m wondering about tools, approaches, other
things like that to help decrease these issues. And I guess I’m wondering about
certification of de-identification, knowing that that may still not be
sufficient given all the capabilities to link everything together. But there
may be something around that that might help us or make us at least feel a
little more comfortable in terms of all of this.

I’m also wondering about the fact that I think we heard examples of places
that actually hold data as opposed to places that give it out. It’s that issue
that if you’re controlling the data and what you’re doing is getting queries
and then giving responses, it obviously is a model where there’s much less risk
involved around the data. And I’m just wondering if there’s something there
that might – okay, well, right, and I guess what I’m saying is that when you
think about it, the models, and once again I’m thinking about what Latanya was
saying in all of this. She was talking about, well, gee, you start
de-identifying, you make things available, and suddenly I’m linking things 12
different ways. And, of course, one of the solutions to that is, well, you send
me your query, I’ll run the query and I will give you the result. But I’m not
actually releasing the data to anyone, and therefore nobody can link it. And so
it decreases the risk, and we saw that in Blue Cross when they were talking
about things. I mean, you’re right. I’m only suggesting that this may be a tool
of some sort or another, and you’re right, it is an interesting – well, that’s
right. I guess we’d have to retract our letter from before.

I just sort of bring it up as one tool in minimizing risk. Now it’s probably
not the only solution, but it’s just one piece.

Now I still don’t know what to say about consent and the role of consent
except that I think we will probably have some model here as we go further. And
I think that that – oh, I actually thought that the Massachusetts discussion
was very interesting just because of the rationale that they provided around
the issue of intentionality, about how things were different, and I know
sometimes we struggle with where dividing lines are. I notice sometimes when
Mark Rothstein talks about the NHIN, I don’t know exactly where the NHIN starts
or ends. But I sure don’t know where the cleavage is between, I mean, is a RHIO
part of an NHIN or an HIE part of an NHIN, or exactly where that cleavage is,
and I thought they provided some helpful concepts around all of that. So
anyway, those were my notes, and obviously some of them elicited some questions
as well as consternations. But that’s why you bring them up, so you can sort of
throw them out and see what’s happening. Justine?

DR. CARR: Thank you, Simon. I also want to thank Harry and Simon for being
good teachers of how you co-chair and chair, and also for running the meeting.

And I also want to echo what everybody else said about the work that Margaret
and Erin and Mary Jo and Debbie have done in getting this expert testimony, and
I don’t want to forget Cynthia and Jeannine for all their good work.

What struck me is that at our initial meeting and maybe our second set of
meetings, we had so much testimony on benefit, and today was all about risk, it
seems. So it really kind of changed things a bit.

One thing that impressed me was the risk of re-identification with or
without HIPAA limits. I think we’ve had this sense of security around HIPAA,
and we were pretty clearly shown today that it’s not going to begin and end
with those HIPAA limits.

A second is this: what impressed me about Latanya is that it was an actual
quantification of risk. It appeals to the scientist in me; that is something we
can really measure against.

Another risk that struck me is the risk of state legislative initiatives
that may not have had benefit of all the testimony that we’ve heard. I mean,
we’ve heard substantial testimony and are struggling with it. And yet, there’s
a lot of legislation that’s being proposed. And I worry about the asynchrony of
that and also about the balance of that with what’s out there.

Another topic that struck me was sort of the juxtaposition around privacy
protection: stewardship on the one hand, which is a concept that we can all
agree with, and then the privacy protection solutions we heard today, which
were so technical. And yet, we ended up hearing that 36 percent of the
population doesn’t even have basic health literacy.

And so how you sign the thing, can you give it to CVS, can you give it to
Engenics, MedicAlert but not, I mean, that is a bit of a cognitive dissonance
for me of how we’re going to take that same population and help them make an
informed decision about that.

I continue to be concerned about unintended consequences of these solutions
on the practice of the doctor/patient encounter. It just is hard for me to
imagine because I’m probably going to be in that group of people that are old
fashioned doctors, old school, you know, and I’m sure all these computer geek
kids will come along as doctors, and they won’t understand how medicine used to
be practiced. But it’s hard for me to visualize that all of this technology
happens in the background, and things are masked and hidden and appear and
reappear in a just-in-time fashion, and that the relationship endures and is
timely and properly informed.

And then the final question is very concrete: is a centralized database a
good thing or a bad thing, and is it even feasible? You know, we really heard
absolute statements that it’s the best thing and it’s the worst thing. And I
guess I don’t know the answer to that. So those are my thoughts.

DR. COHN: Margaret?

MS. AMATAYAKUL: I guess a few things struck me that are probably very
apparent: nothing is easy; we’re not alone, but we are different. And I think
the quote that Richard Dick had from the Forrester Group, that it’s only going
to get worse, is something we really have to take seriously. The situation with
privacy issues is going to get worse.

The other thing was what Latanya Sweeney commented on; she had two things in
addition to what you all have said. When she commented that something was not
covered by HIPAA, we all sort of gasped and thought that was completely
wrong. It occurred to me that I think what she was doing was following the data
rather than following the entities. And I think we’ve put a lot of emphasis on
the entities. But very early in the first testimony, we heard, I think, ONC in
particular talk about follow the data. And that’s hard because it’s sort of not
what we’re used to doing. So I think we probably should keep that in mind.

The other thing that struck me was we’re all convinced that the HIPAA
de-identification process removes 18 data elements, and it really doesn’t. It
removes 17 data elements plus any other data elements that might cause you to
be able to identify the person, and that last element would enable us to
probably fill that 0.04 gap if we really paid attention to it, but nobody does.
Everybody just takes off the data elements and goes from there.

So something to think about, I thought, and Steve says no. But I mean,
that’s really anonymization.

DR. STEINDEL: No, we’re not saying no to what you said. We’re saying no to
the idea that we necessarily want to fill that gap, you know, because we can
produce the best set of de-identified HIPAA data by just sending a totally
random data element number that doesn’t relate to anything, that removes all
data elements that could possibly identify anything concerning the record but
is totally useless. But it meets the requirement of 18.

MS. AMATAYAKUL: Right. So that’s the risk equation. Two other things. I
thought Sandman’s comments were very counterintuitive to what we typically
think, and I think we need to really read those comments and think about them.
And Jonathan White, in a little sidebar conversation, and he mentioned this
during the formal testimony as well, said that if we’re looking to have this
national data stewardship entity, it doesn’t exist, and it was not the intent
of the RFI to create it, but just to get information. It sounded like, once
again, we have lots of people for and against, but whatever happens, it’s going
to have to be a pristine group of people, and it could be really tough to find
that.

And then the last one was that Cindy Brach, I think, made the comment that no
matter how literate we may be, and we may be at the top of the literate heap,
so to speak, when we’re faced with pain or a child’s illness or whatever,
literacy plummets. And I think we have to remember that it’s not just for half
the population or 75 percent of the population. It’s for all of us.

MR. REYNOLDS: Marc?

DR. OVERHAGE: I want to add one thing. I’m still befuddled. But there was one
thing that some of these comments triggered that I don’t think I had in my mind
at the beginning of this journey, but I increasingly think it’s true. And that
is that I’m increasingly thinking that trying to drive to individual patient
awareness and consent and control is insane, that we will lose. And the
comments made over the last few really reinforce that.

There’s a very complicated risk benefit trade off with a lot of potential
benefits, a lot of potential risks. State legislators who don’t have the
benefit of this kind of depth and time and energy are making silly decisions,
it’s a really – and patients’ literacy, the inability to understand, interpret
and put into context these decisions at a given moment. And as you said,
obviously it’s a dynamic thing for patients. I mean, your utilities change when
you’re sitting having coffee at Starbucks versus when you’re in an emergency
room not able to breathe. You know, they’re all different.

I’m beginning to feel like the only pathway out of this morass is going to
be a societal decision about the levels and appropriateness of use, that trade
off, and there are going to be some people who think it’s really crazy and
they’re going to go live in a compound in Utah, testifiers, no. But they’re
going to go, and they’re going to make the choice that way.

But as a society, I think this is the kind of thing that we – and it’s
not being paternalistic and trying to protect people so much as saying, you
know, this is a really tough, difficult, complicated thing that if we let
everybody make their individual decision, they’re going to make them in
inconsistent ways and so on. And I know it’s a tough place to go, but –

DR. COHN: Marc, you’ve woken everybody up, and we have the other Mark, and
Harry will have comments, and then we’ll go back into order.

MR. ROTHSTEIN: I just want to comment briefly on what Marc O. just said. Even
assuming that you’re right, I don’t think it necessarily follows that what we
should produce is something that is really prescriptive, in terms of in this
situation this goes, and in that situation that goes, because what we’re doing
is creating a political document basically, and I think politically it may well
be a non-starter to come up with a document that is seemingly paternalistic and
removes the choice from individuals, even assuming all of what you said is
right, which we might question.

So I just wanted to raise that point.

MR. REYNOLDS: One thing I had forgotten to say earlier was that, after
listening to Latanya and others today, I really think more and more that we
need to drill down on some clean definitions, because the rhetoric where people
start using scrubbed, de-identified and some of this other stuff just
absolutely blows the whole discussion clear out of the water every time. So I
think we need to really narrow that down.

Second, in comment on one of Margaret’s statements, I think the reason a lot
of us are still using entity as kind of one of the prime drivers is that
following the data, and I say this being an IT person, is a long and arduous
journey; even when you find it all and you make the whole journey, you still
have to go back and figure out how to get somebody to protect it or deal with
it.

So I agree, and that’s why I was using that term filter earlier. I think that
once we come up with whatever we would come up with, making sure that whether
it’s the flow of the data, whether it’s the types of uses, whether it’s this or
that, it is put over that to make sure we haven’t missed something would
probably be an approach. So that’s a different sense of that one.

DR. COHN: Before I give it to Debbie, I would just sort of follow on to
comment that if you just think about it, one of the HIPAA pieces obviously is
that there are exceptions for state reporting. And I think that’s what she was
talking about. So once again, it’s covered by HIPAA, but that’s how it’s
covered. And I think, just as we fill in all these circles, we need to remember
that there’s all of this stuff even without extensively, arduously following
the data.

MR. REYNOLDS: Right.

DR. COHN: Debbie.

MS. JACKSON: Just the important role of communication – this panel today
really put a context for me, from the U.K. perspective to almost the
sociological standpoint where Latanya was coming from. It really helped me get
my head out of my usual box and realize that health data and de-identification
have some relevance in comparison to what’s going on with de-identifying in a
mob. I mean, the things that she put in her description really helped put
things into perspective.

And this panel – as someone mentioned, risk versus benefit – just helped
make a whole comprehensive picture for me.

DR. COHN: Mary Jo.

DR. DEERING: I’ll start with one very minor thing, just sort of – it was a
question to the Workgroup that I was going to ask. But it’s very small, and we
don’t need to dwell on it.

In the CA tissue functional requirements that I shared with you, they treat
the consent as PHI. And so I just thought that raises an interesting concept,
because it’s written, it’s got the name and the signature on it and the date
and what they’re consenting to. And so I just wanted to –

DR. COHN: Say that again.

DR. DEERING: The consent itself becomes PHI.

MR. REYNOLDS: What does that mean?

DR. DEERING: I don’t know what it means.

MR. REYNOLDS: Okay, so that makes two of us now, so help me.

DR. DEERING: Okay, I can set that aside. One of the things that I think
Monica Jones didn’t go into in as much detail as I expected her to – because it
came out in a couple of conversations that we had, and I hope I’m going to
represent it carefully – is this. First, she starts with this idea of data
quality. But in some conversations she built on that, and I think it came
through in some of her comments, where it seemed that she was talking almost
about streamlining the actual fields that are collected, so that she’s moving
away from the sense that it’s your whole health record that we’re interested
in. As opposed to – I didn’t realize that there were so many commissioning
data sets, and that you can nominate a new one and, with enough justification,
you get a new data set.

So, again, from the point of view of what the data is – what is the "it"
that you’re talking about when you’re talking about data – it sounds like they
are in fact taking a much more organized approach. And it might be interesting
to probe that. In other words, do they think they need just a dump, and is that
in fact what they’re aiming for? So I’ll just leave that open.

Another thing that I was interested in – and it gets a little bit to this
issue of primary versus secondary – is that we’ve heard that consent at a given
point in time, like at the outset, for multiple uses is possible. So if you
define primary and secondary by the intentionality of the person who’s giving
the information, and if at the time they say yes they’re consenting to multiple
uses, then what’s secondary?

So I’m just saying that – just as you thought it was safe to go back into
the water of primary versus secondary, it seems that – well, in that case, all
those uses are primary. But anyway, it is possible to consent in advance. It’s
technically possible, and I think we’ve heard that from a communications point
of view it is possible. So I think that’s interesting.

I wanted to pick up on – I was, as you could tell, very concerned about
this AMIS database and the fact that it’s got everything, and that in fact it
is already being used for exactly what is probably one of the two greatest
fears people have: you know, my employer’s going to have it, my insurer’s going
to have it.

Yes, your insurer has everything about you already. So we don’t –
either we solve that question, or we’re not in the loop to solve that question,
and it shouldn’t be a burden on the NHIN to solve it.

DR. DICK: It’s with explicit permission. Yes, that’s with explicit patient
permission.

DR. DEERING: Right, but it’s coerced permission, right? Compelled. And my
question is, they have a business, so they have it under coerced permission.
They clearly have a business deal to sell PHI, don’t they? Okay, they have the
data, with permission or not. They have a business arrangement to give it to
you. Are they free to enter into business arrangements to give that personally
identifiable information to anybody else?

DR. COHN: Mary Jo, I’m sorry. You’re making these as statements. Just to
calm things down a little bit, it sounds to me like what you’re saying is that
we ought to investigate what the legal relationships are, what this is all
about. I think that’s what you’re saying. Am I right?

DR. DEERING: Well, I think it is because if indeed such a database is being
used for other purposes, and, again, I’m not saying that they may not be good
purposes.

DR. DICK: Those AMIS systems – that’s an acronym for Archived Medications
Information System, okay. There are actually two sources of that data today.
One is from Milliman, and the other is from Engenics, okay. Both companies
compete in that space. Just to give you a parameter, kind of a data point:
probably most of you do not know how much a medical record is worth in the
underwriting process. A typical underwriter at New York Life or any of them you
want to name will pay $55 to $75 per copy of your medical record to get it to
do the underwriting. It has huge value, and with all the risks that are out
there, they go after medical records – 1.2 per applicant – and there are many,
many millions of them, okay.

I can just tell you, one company has a dedicated 727 that, every single
night, arrives in Dallas-Fort Worth, Texas, loaded to the gills with copies of
medical records, okay. Those records are going to various underwriters for
the purpose of underwriting, and 1.2 records on average are retrieved per
applicant per insurance application. That’s life insurance.

Long-term care insurance – they’ll go after every record. It’s something I
learned a lot more about than I ever wanted to learn. But the value
proposition this AMIS system presents – and in fact I shared with Kevin, I
believe, that Milliman did a study with the data – they showed the
underwriters that 32 percent of the time this pharmacy data told the
underwriter about physicians you had been seeing that were not divulged in the
application for insurance. It has some pretty interesting implications in terms
of the value proposition to them.

As I said, I was in the highway construction business, building the highway
to the med studies, but building it on the backs of the life insurance
companies. Now these AMIS systems are sitting in the PBMs, and, as I said I
think offline, what does it matter if your data is in ten systems in that
PBM’s data center or in 11 – it still enjoys the same electronic and other
privacy capabilities that that PBM has in their data center. That data will not
be accessed and cannot be accessed without actual presentation of a signed
consent, okay, or, in the case of the insurance companies, quite frankly
they’re doing it on case-by-case demand. They hold the authorization and will
produce it whenever they need to, because they all have them – you sign one
whenever you apply for insurance.

So I’m just saying the data is sitting there. It is only accessed with
consent, and what You Take Control is trying to do is open that up, with
consent, to save lives and use it for what I would call more appropriate
purposes than just underwriting.

I said it was tantamount to literally building the interstate highway system
and then saying only taxicabs could ride on it. How stupid is that? And so the
idea is that the data is sitting there. It is being used for purposes other
than what I certainly intended. But it is legitimate in that, yes, it’s in a
system that’s owned by Engenics in this case, and it is producing substantial
revenues to both the PBMs and to Engenics. These underwriters are paying very
handsomely for access to that data, and we simply believe that it’s got to be
used for a whole lot more than that. So I hope that helps, okay.

MR. REYNOLDS: I think one of the things is that we’re making a lot of what I
call class-action comments. We’re saying all insurers, all providers, all this,
all that, and I think there are many levels within there. So, you know, some
people are covered entities, and things are allowed to happen; others aren’t.
So a lot of our rhetoric is just – we like a word.

And so as we’re trying to get to where we’re going, I think it’s going to be
important to make sure that we keep that in mind, because otherwise it will
just ratchet up the rhetoric, and class-action comments are hard to debate.
They’re hard to defend, and they’re hard to do anything with. So I think a
little more precision in what we’re talking about would be key as we drive to
the end of this.

DR. DEERING: I have two more small comments that aren’t inflammatory. The
first was just picking up on Justine’s comments about whether a centralized
data bank is good or bad. With all due respect, I might say that’s the wrong
question. I’d say the question is, is it needed? And I think too often we’re
not thinking of what we need the data for and working backward from that.

We’re saying we’re going to collect all the data, and then we’ll try to
devise the systems. So I think that from a policy point of view, we should
write policies to serve the purposes and think about what the need is that we
have, as opposed to good and bad in the abstract, and whether you can get the
data elsewhere. I mean, as in the British system – okay, we don’t need to get
it from you there because we’re going to get it from other places. So I think
that might simplify some things.

And then just the final point – I really appreciated Harry’s point where
you were talking about people who really mattered. I do sometimes still hear
almost an assumption that the consumer patient is "them," and a presumption
that they’re adversaries, that they’re always going to say no, that they want
to hold on to every piece of data, that we’re never going to be able to reach
them, and that their starting point is that they refuse to share.

I believe that those are flawed assumptions. I believe they’re not tested.
Moreover, I think Peter Sandman, who has worked with high-stakes industries on
this kind of communication, would say that you lose more by trying to impose
controls or withhold information and choice, because it probably won’t work. It
will unravel. And the loss of trust, the backlash, the anger, the outrage
factor – I mean, he advises his high-stakes clients: don’t go that route unless
you absolutely have to. I mean, at least examine your assumptions and how you
want to get there.

So I would only like to say that I think we’ve heard ample evidence through
this that you can communicate simply. It really isn’t rocket science. There are
a lot of basics. You don’t have to give them all that detail. You have to give
them the essentials. They can understand. And usually they’ll say yes. They’re
happy to. They really want to share. They can see the value.

So let’s not start with the assumption that they’re bad, they’re greedy and
proprietary.

DR. DICK: Could I ask could we get copies of all the presentations, and will
those be made available?

MS. JACKSON: Yes, all the slides and material go up on our website on our
home page within about three weeks.

DR. DICK: Thank you.

DR. COHN: Well, Mary Jo, thank you very much. I think this has been very
useful. May I just add one other thing from my own notes – just to throw one
final piece in, which is sort of reminding ourselves once again about another
tool, which is pseudonymization. And I have no idea what use it has beyond
public health, but it might have some. And so we should throw that into our
toolbox just to see if there are ways that it might help.

Now I think the debriefing has been really interesting, and I think it will
bode well for us as we begin to move into looking at the documents tomorrow
morning. Obviously, much of tomorrow is really talking through sort of where we
are – our conceptualizations, moving from themes into observations and
recommendations, sort of seeing where we are. We do have, I think, two
testifiers tomorrow, and that, at least unless we come up with some other
testifiers, really sort of closes the testimony for this topic. And from there,
we move into, as I said, sessions where we’re trying to put this together. I’m
glad that many of you feel that you’ve heard enough. And certainly a number of
you are sort of saying, well, these are reconfirming my findings. Mark, I do
share your comment about befuddlement, which I think is sort of – all this will
begin to come to earth as we begin to get a little more concrete, hopefully,
about what we’re thinking about, and we need to filter it through our own
realities.

MR. ROTHSTEIN: Simon, there’s a difference between feeling that you’ve heard
enough and feeling that you can’t take any more.

DR. COHN: Well, with that and after eight and a half hours of meetings
today, what we will do is to adjourn until 8:30 tomorrow morning.

[Whereupon, at 5:37 p.m., the subcommittee meeting was concluded.]