FDA, Industry Collaborate On Genomics Guide
Last month, the FDA issued its long-awaited draft guidelines for pharmacogenomic data submission. Companies large and small have been anxious to get their hands on these guidelines, for they are a critical starting point in the agency’s efforts to come up with the most appropriate requirements for pharmacogenomic data as part of a new drug application (NDA), a biologics license application (BLA) or even an investigational new drug application (IND).
The FDA and biotech companies, in particular, have worked together in the past to hammer out the myriad rules and regulations necessary for the approval of recombinant protein- and monoclonal antibody-based therapeutics as well as genetically modified crops. Then, the various parties were faced with the considerable challenge of devising a regulatory framework for products that simply had not existed before. Now, they are tackling an even trickier area – and one in which the scientific learning curve may be even steeper.
Though significant progress has been made in pharmacogenomic research, the field is still in its earliest stages. Some biotech firms have pushed ahead with clinical studies designed to link individual gene variations with health factors or drug responses. Others are seeking markers present in biological fluids to aid in the identification of the biological pathways involved in disease and therapeutic response (for details, see the Signals article “Pharmacogenomics Gets Clinical”). Yet many companies (and virtually all big pharmas) have hesitated to embark on large-scale pharmacogenomic studies, because it hasn’t been at all clear which types of data the FDA will require – or how it will use them in reviewing and judging NDAs and BLAs.
But the agency wants to provide clear guidance, and it intends to play an active role in the evaluation of pharmacogenomic tests when they are used in conjunction with drug therapies. Its recently issued draft guidelines are a major step in that direction. They are also the result of an unprecedented collaboration between the FDA and industry: The agency held its first workshop, cosponsored by industry groups, in May 2002 to identify the key issues that needed addressing. The most recent workshop, cosponsored by the Drug Information Association (DIA), was held in mid-November 2003, shortly after the draft guidelines were published in the Federal Register.
According to Geoff Ginsburg, VP of molecular medicine at Millennium Pharmaceuticals Inc., who participated in the DIA workshop, “It was a very successful meeting, with over 500 people there. It was not just symbolic [on the part of the FDA]. The FDA is taking a serious stand on pharmacogenomics. It’s trying to do what’s right,” by interacting with industry and “soliciting frank and appropriate feedback.”
“The draft guidelines are really balanced and thoughtful,” Ginsburg continued. They represent “the first attempt to use pharmacogenomic data as part of the submissions process.” They also describe how the FDA might learn from the data and use them in the future, even post-approval, he added. And, the agency is also addressing issues surrounding intellectual property (IP), and how it can “begin to share its learning with industry while protecting IP.”
A good start, certainly. But, Ginsburg cautioned, this most recent, “post-publication” conference made it obvious that there will be an “extensive revision of the guidelines” as drug sponsors and the agency continue to fine-tune the particulars.
So, what exactly do the draft guidelines say about pharmacogenomic data submission?
For one, they are quick to point out that very few established pharmacogenomic tests actually exist today. Only a handful – primarily related to drug-metabolizing enzymes such as thiopurine methyltransferase – “have well accepted mechanistic and clinical significance and are currently being integrated into drug development decision making and clinical practice,” according to the draft guidelines. These are known valid biomarkers: A biomarker is valid “if it is measured in an analytical test system with well established performance characteristics and there is an established scientific framework or body of evidence that elucidates the physiologic, pharmacologic, toxicologic or clinical significance of the test results,” the guidelines state.
If a drug sponsor has data on known valid biomarkers that are “appropriate for regulatory decision making,” then the FDA wants those data to be submitted with the IND. (The draft guidelines also outline the requirements for submission with new NDAs, BLAs and supplements, as well as with approved NDAs and BLAs; the requirements differ by application type.)
All other pharmacogenomic tests and biomarkers are exploratory. Some, the agency has classified as probable valid biomarkers, “those that appear to have a predictive value for clinical outcomes, but may not yet be widely accepted or have been independently replicated.” If a company has data on this class of markers, then the guidelines recommend that it submit the information along with unapproved NDAs or BLAs. The data don’t have to be submitted with an IND if they weren’t used in making the decision whether to submit that IND.
There is a third class, as well, which encompasses exploratory data. Pharmacogenomic testing programs intended to develop the knowledge base necessary to establish the validity of new markers are not useful in making regulatory judgments and therefore are not required to be submitted, according to the draft guidelines. However, since the information “also advances the understanding of relationships between genotype and gene expression and responses to drugs,” the agency is encouraging voluntary submission of exploratory data. The FDA will not use this information for regulatory decision making on INDs, NDAs or BLAs.
However, having the data in hand will help the agency to develop its own expertise, which will be critical for informing its regulatory decisions in the future. It will also allow FDA scientists (who review submissions) to keep on top of new developments in a fast-changing field.
Companies that are actively engaged in pharmacogenomics – whether they are developing specific, drug-related tests for experimental therapies or working on improved, reliable, reproducible assay technologies per se – now have a starting point from which they can mold their programs to conform to regulatory requirements going forward.
And for Millennium Pharmaceuticals, the draft guidelines are like music to its ears. “We are very delighted,” Ginsburg said. “Millennium was built on a genomics strategy; we’ve always had a pharmacogenomics theme.” In fact, the company spun out Millennium Predictive Medicine early on to aid the development of disease-associated diagnostics and pharmacogenomic tests. In June 2000, it brought the subsidiary back into the main company to facilitate an integrated approach to its drug discovery and development efforts.
One of Millennium’s earliest forays into pharmacogenomics centered on Velcade, a proteasome inhibitor that was approved in May 2003 (based on Phase II trial data) for treating multiple myeloma. “We conducted pharmacogenomic studies in Phase II trials to determine [which patients] might be more likely to respond to the drug or have resistance to it,” Ginsburg explained. The company presented data relating to several genomic markers that may have a predictive value for drug response at the 2003 meeting of the American Society of Clinical Oncology (ASCO). “We have a candidate set of predictive markers,” he said. “Now we have to try to prove and validate them in Phase III trials, which are ongoing right now.” And if the markers are validated in a larger patient population? “We will be excited to have a discussion with the FDA,” Ginsburg said.
Like Millennium, many firms are tracking down biomarkers that can serve as the basis of a pharmacogenomic test that predicts clinical outcomes. And, like Millennium, many companies searching for markers use Affymetrix Inc.’s GeneChip microarrays for expression profiling of thousands of genes per cell or tissue sample. These results are often not reproducible from one lab to another, however, making it difficult to compare them. And, as the draft guidelines point out, standardized assays will be essential when it comes to regulatory decision making.
Indeed, a move to standardized assays is underway, but it’s not a given that a high-density microarray platform will become the accepted method for generating genomic data suitable for submission with INDs, NDAs or BLAs.
“The guidelines haven’t defined any specific assay; they just say it has to be a widely accepted assay,” explained Bruce Seligmann, president and CEO of High Throughput Genomics Inc. (HTG). “The gold standard is PCR, but it has a number of problems, including reproducibility lab-to-lab.”
“There’s a drive for higher quality assays,” he continued, especially in toxicology, where they are used to study the effects of drugs on gene expression. “The high density arrays (from Affymetrix and Agilent) may be only 10 percent concordant. These are not validated data; they must be validated with real-time PCR, meaning that the quality of the data is lower and the number of samples is less.”
Unsurprisingly, HTG thinks it’s got the answer to this problem. The firm’s high throughput multiplexed array measures mRNA instead of DNA (although it can do that, too). And, because the method includes a nuclease protection step, the RNA doesn’t have to be isolated from cells or tissue samples before it’s analyzed. Thus, small samples can be processed directly in the microplate wells, greatly simplifying the procedure.
Seligmann said that HTG’s technology is also highly sensitive, reproducible and fast. “The multiplexed array is programmable. A user can program the microplate to measure the same 16 genes in each well or 128 genes across 8 wells.” This means that one person can analyze about 2,000 samples – or 32,000 data points – in four hours, he explained. “Basically, we can do a tox experiment in two weeks. PCR would take months.”
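The throughput figures Seligmann quotes hang together arithmetically: 2,000 samples, each assayed for 16 genes per well, yields the 32,000 data points he cites. A minimal sketch, using only the numbers stated above (the variable names are illustrative, not any real HTG software):

```python
# Sanity check of the quoted multiplexed-array throughput figures.
# All numbers come from the quote above; nothing here is a real HTG API.
samples_per_run = 2000   # samples one person can analyze in about four hours
genes_per_well = 16      # one configuration: the same 16 genes in each well

data_points = samples_per_run * genes_per_well
print(data_points)  # 32000 -- matches the "32,000 data points" figure
```

The alternative configuration he mentions – 128 genes spread across 8 wells – works out to the same 16 genes per well, so the per-run data-point count is unchanged.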
The assay also allows researchers to “run a lot of replicates. There are layers of repeatability, which reduces the variability,” he said. And reproducibility is key for any assay that hopes to win the FDA’s approval as the method of choice.
Well, it will probably be a number of years before the agency settles on a standardized assay. And it may take some time for it to produce the final, binding regulations on pharmacogenomic data submission.
For the draft guidelines are, indeed, a draft: Extensive revision is almost a given, as companies dissect the document line by line to see how it applies to their own internal product development efforts and subsequently submit their comments to the FDA. But when the guidelines finally become regulations, the cooperative effort between the agency and drug sponsors should help ensure that they are transparent to all.
originally published 12/03/2003