Lack of Standardized Data Hurting Review


This week’s blog post is from Laura Wright, who attended the 23rd Annual DIA EDM conference in National Harbor, MD, February 16 through 19. In her role as Account Manager at GlobalSubmit, Laura talks with many people in regulatory, clinical, and IT roles across the biopharmaceutical industry.

Hearing their interests and concerns led Laura to attend a session entitled “Regulatory Update: Future Directions.” Here, she offers her impressions.

Data standards – whew! Nothing quite as exciting as data standards. Have you ever seen a group of people arguing about SDTM (Study Data Tabulation Model)? It can get pretty heated. If you think that is a joke, you haven’t seen it.

I am not interested in joining that debate today. I just got back from DIA’s Annual US EDM conference and I would rather share some insights from the new Computational Science Center and the Office of Planning and Analysis at CDER. Turns out that some people at the FDA see the lack of truly standardized data streaming into their servers as a real problem and they are proposing real solutions.

Chuck Cooper and Marni Hall spoke at a regulatory session on Thursday that highlighted new directions at CDER and CBER. The data problem was summed up for the audience, and it sounded something like this:

Reviewers are spending too much of their review time manipulating incoming data just so they can analyze it; in fact, they spend more time manipulating than analyzing. Because submissions follow different data standards, the overall quality, consistency, and transparency of the review suffer. And your NDA approval takes longer. Certainly not a new issue, but a problem nonetheless.

In many cases the data is consistent within the sponsor organization. SDTM has been around for more than six years, and most sponsors really do want to make the reviewer’s life as easy as possible. But what does “standardized” mean for a therapeutic area that has no universal data sets available to it? What about all of the well-meaning statisticians who are submitting “SDTM-like” data? The FDA is accepting 5,000 eCTD submissions a month. If everyone is doing their own thing, that is a LOT of data to manipulate before actually reviewing it.

Not only is your drug approval taking longer (I know you are still stuck on that), but the agency’s ability to retrieve important information about trials conducted in the past is completely arrested. Picturing those handcuffs? Reviewers cannot easily retrieve legacy study information today to compare it with new data. There are no audit trails showing how a reviewer arrived at the analysis used last year or five years ago. That means more time is spent hunting for answers the reviewers should already have at hand to make sense of the data in front of them today. This cycle seemed kind of hopeless.

Until Chuck and Marni told us what their offices are doing to support reviewers. From converting legacy trial data from previously submitted Phase II studies to diving into that age-old debate to create new data standardization specs, the new Computational Science Center will be a busy place. You can help, too – your input is valuable whether you are a physician, a statistician, a regulatory ops manager, or a software vendor. There are forums and specialized conferences where you can weigh in. After all, who doesn’t want transparency, consistency, and quality?
