The Health Sciences Blog covers the latest trends and advances in life sciences and healthcare.

Recent Posts

Health Sciences

Highlighting Customer Success in Clinical One™

Imagine a research world in which you have worked tirelessly to finalize a clinical trial protocol and now need to build a randomization and trial supply management (RTSM) system for that protocol to achieve a critical first-patient-in milestone. Typically, the build of such a system is a two- to three-month process, potentially putting important study milestone dates at risk. Now imagine that same world in which the RTSM system is available just 29 calendar days later. While this may seem like a dream, it is reality for customers using Oracle Health Sciences Randomization and Supplies Management Cloud Service (ORS). As part of the Oracle Health Sciences Clinical One™ platform, customers are able to realize meaningful time savings in trial build and design.

Looking in detail at how an ORS study build was completed in 29 calendar days, the work was a joint effort between the customer and the Oracle Services Build Team. The initial study design was built and shared with the customer in two days, directly in the ORS application. The customer reviewed the design directly in ORS, eliminating the back and forth of requirements specification updates. The changes the customer requested were also made in two days. After the design was finalized, the customer was able to test the study build in three days, and was in full control of uploading site, user, randomization, and kit lists into ORS at any time.

The success of this study build was highlighted by a quote from our customer, Xiaoping Zhang, VP of Biostatistics and Data Management, Clinical Research Hub, in a recent press release: “We needed an established, standards-based cloud system to simplify the drug supply management process, and we have already determined we made a great choice by selecting Oracle’s Clinical One Randomization and Supplies Management. Within 29 days, we were fully implemented and are now in phase III of our oncology trial.”

The re-imagined Clinical One platform is providing our customers with capabilities never seen before, such as study build configuration driven through a user interface, eliminating the need for code development, requirements specifications, or system validation. These tasks are replaced with real-time, online design capabilities that allow study teams to focus on the key aspects of the study, seeing the trial design in the application and testing the configuration, rather than undertaking a more in-depth validation effort. Coupled with an Integration Hub Service, a streamlined user interface, and re-imagined training (short videos), the Clinical One platform provides the unified environment that our industry desperately needs to drive innovation and drug development. Stay tuned as Oracle delivers even more capabilities through the Clinical One platform!

To read more about a recent customer experience with Oracle Health Sciences Clinical One Randomization and Supplies Management Cloud Service, please read the full press release: Sichuan Kelun Pharmaceutical Automates Oncology Clinical Trial Set Up and Management with Clinical One Randomization and Supplies Management Cloud Service.


Health Sciences

Artificial Intelligence, the Next Frontier in Life Sciences and Healthcare

Artificial Intelligence (AI) can detect conditions faster and with more accuracy than ever before. AI knows if your loved one has Alzheimer’s before your family does. AI can find heart disease earlier in patients and allow them to be treated before the disease has progressed beyond viable intervention. AI can detect melanomas more accurately than trained dermatologists, resulting in an earlier and easier diagnosis, at a time when the cancer can be addressed before it spreads.

The number of adverse event reports grows by as much as 50% annually, taxing an already constrained safety process. However, this vast amount of safety data lends itself perfectly to AI. Advances in optical character recognition (OCR) technology, coupled with machine learning through open-source frameworks, are now cost effective and easily accessible in the cloud. Scalable, elastic, and sufficiently powerful compute allows machines to perform tasks previously done by highly qualified people. This means that the intake and understanding of documents that previously needed human intervention can now be automated: cases can be captured and processed faster, more cost effectively, and with more consistency. Additionally, for cases such as newer products or newer diagnoses that result in lower confidence scores, medical professionals can be alerted to intervene, and the system can learn from their decisions. The goal is to augment scarce human resources by allowing AI to handle routine activities.

AI is actively being integrated into Oracle life sciences products, with Argus Safety at the forefront of this endeavor. Leveraging natural language processing (NLP), machine learning (ML), deep learning, and other AI approaches, artificial intelligence features are being applied across the entire drug safety process.
This includes the intake and understanding of a report, product and event coding, assessment of the confirmed case, and reporting to the various required destinations within regulatory reporting timelines.

Oracle Corporation is a core technology company with over $6 billion in annual R&D spending. Oracle Labs, a division of Oracle, focuses on identifying, exploring, and transferring new technologies that have the potential to improve Oracle’s business in a substantial way. Oracle Labs develops upgradeable AI capabilities that can be seamlessly integrated, audited, and supported in products such as Argus Safety. Through competitive challenges and head-to-head proof-of-concept meetings, Oracle Health Sciences has demonstrated its deep pharmacovigilance domain knowledge (more than 200 combined years). This, coupled with our data science and machine learning expertise, produces AI models that vastly improve operational productivity and quality while maintaining regulatory compliance. Altogether, these capabilities result in greater patient safety.

Some of the immediate areas where AI features may be incorporated include: source document processing, case report narrative processing, call center log adverse event flagging, product label processing, literature screening for adverse events, and signal detection. View our webcast on How Artificial Intelligence Will Revolutionize Safety here.
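The confidence-score routing described above (automate high-confidence cases, alert a medical professional for low-confidence ones, and learn from the reviewer's decision) can be sketched in a few lines. This is a hypothetical illustration, not Oracle's implementation; the threshold, function names, and data shapes are all invented for the example.

```python
# Hypothetical sketch of confidence-based adverse event (AE) intake
# routing. The threshold and all names below are invented examples.

REVIEW_THRESHOLD = 0.85  # assumed cutoff; would be tuned in practice

def route_case(case_id, extracted_fields, confidence):
    """Decide how an incoming AE case should be handled based on the
    model's confidence in its automated extraction."""
    if confidence >= REVIEW_THRESHOLD:
        return {"case": case_id, "route": "auto_process", "fields": extracted_fields}
    # Newer products or diagnoses tend to score lower: escalate to a human.
    return {"case": case_id, "route": "human_review", "fields": extracted_fields}

def learn_from_review(training_set, case, reviewer_decision):
    """Append the reviewer's correction so the model can later be retrained."""
    training_set.append({"case": case["case"], "label": reviewer_decision})
    return training_set

# A low-confidence extraction is flagged for a medical professional.
decision = route_case("AE-1001", {"event": "nausea"}, confidence=0.62)
print(decision["route"])  # low confidence, so a human is alerted
```

The key design point is the feedback loop: every human intervention becomes labeled training data, so the share of cases requiring review should shrink over time.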


Health Sciences

Six Strategies for More Accurate Clinical Trial Forecasting & Budgeting

Currently, there are over 131,000 registered drug or biologic clinical trials in process around the world. With an average of $4 billion spent over the last 10 years on R&D for each new therapeutic molecule developed, worldwide pharmaceutical R&D investment is expected to grow by 2.4% to $181 billion by 2022. With dollars of this magnitude at stake, it’s no wonder that there are huge and intensifying pressures on life sciences industry researchers to plan and forecast clinical trials accurately, while keeping variances between projected needs and actual performance low.

Can Spreadsheets Be Trusted?

Against this background, effective clinical trial planning is growing more challenging for trial sponsors, given the greater complexity of trial design and the increasingly global nature of trials. As a result, relying on manual processes and spreadsheets for trial forecasting and budgeting is no longer feasible. Spreadsheets just cannot keep up, for a number of reasons:

Questionable data quality – Manually managing data in spreadsheets increases the risk of errors.

Spotty resolution of errors – Spot-checking to find and replace mistakes in spreadsheet data is unpredictable, allowing additional data errors to slip through.

Difficulty in capturing data complexity – Spreadsheets don’t manage text fields, time-dependent data, or complex data well. They also struggle with data groupings when the amount of data is not known in advance.

Inferior versioning and collaboration – Saving and re-saving spreadsheet versions, emailing files, and tagging file names with dates and initials leads to chain-of-custody issues, with different stakeholders making changes to different versions of the file.

No support for advanced analytics – Data investigation capabilities within spreadsheets are limited to logical operations, arithmetic, and summary statistics.
Smarter Methods

Study teams can avoid these issues with smarter, more strategic thinking. Here are six strategies for more accurate clinical trial forecasting and budgeting.

1. Utilize standardized costing methodologies.
• Rely on industry standards that have been collected over time.
Benefit: Create accurate budgets in minutes instead of days.

2. Leverage industry intelligence to drive decisions.
• Utilize aggregated information collected across the industry.
Benefit: Streamline negotiations by leveraging independent, third-party industry metrics.

3. Implement a process for rapid scenario planning.
• Avoid using multiple spreadsheets for different scenarios, which is time consuming and error prone.
Benefit: Quickly create and compare multiple study scenarios.

4. Automate bid comparisons.
• Stop comparing inconsistent vendor bids across multiple spreadsheets to find the best trial outsourcing options.
Benefit: Negotiate CRO bids and shorten contract closure timelines.

5. Replace manual spreadsheets.
• Find a solution that eliminates spreadsheets and manual planning. Look for a technology partner who can provide a faster, more accurate alternative.
Benefit: Reduce the risk of manual errors and expedite the entire process.

6. Simplify your portfolio view of costs and resources.
• Leverage a solution that aggregates all of your planning details across studies in one centralized location.
Benefit: Understand critical-path activities and resource demands across your portfolio.

By leveraging these six strategies, research teams can accelerate the delivery of accurate, defensible, and achievable study budgets. These strategies can also considerably shorten the time it takes to get a study underway. Are you using smart strategies to simplify your trial budgeting and forecasting? Learn more about how Oracle Health Sciences ClearTrial Plan and Source Cloud Service realizes these strategies.
The solution saves users time and money by equipping study teams with tools for quick and accurate planning, forecasting, and outsourcing of clinical projects. Contact us today to learn more about how it can meet your forecasting and budgeting needs.


Health Sciences

GDPR and the Right to Be Forgotten

As a pre-sales consultant who has worked in data management for many years, I’m often asked how Oracle products adhere to the General Data Protection Regulation (GDPR). If, like me, you were swamped with GDPR requests leading up to the GDPR start date of 25 May 2018, you may have wondered why you received these emails and how they impacted the use of your personal data. Perhaps, working in life sciences research, you also questioned the impact of GDPR on clinical trials and subject data.

GDPR Ground Rules

Designed to help individuals control their personal online information, GDPR reduces the abuse of personal data and the unwanted sharing of data between organizations. It ensures a common framework across the European Union (EU), which makes it easier and more cost effective for companies to do business. The new regulatory framework applies to all EU citizens, regardless of where they currently live. This means that personal data has to be collected, managed, and used much more carefully. The GDPR (Article 4) defines personal data as any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. The consequences of failing to meet the obligations of this new regulation can be significant, with fines of up to €20 million or four percent (4%) of annual turnover.

This article explores one aspect of GDPR, specifically the right to be forgotten, and how it is implemented for the collection and use of clinical trial data.

Clinical Data and Informed Consent

Clinical trials require consent from participating subjects.
Informed consent, which explicitly allows sponsors to collect data from subjects and details the risks associated with trial participation, has been part of good clinical practice for decades.

Data Privacy in Clinical Trials

Prior to GDPR, local, country-specific data privacy laws across the EU limited the information that could be collected, requiring data to be anonymized. Subjects were identified by a number, and only limited identifying personal data was collected. However, as this process was implemented on a country-by-country basis, it led to inconsistencies across the EU.

GDPR as Part of Clinical Data Collection: Practical Issues

One of the provisions of the GDPR is the right for one’s data to be forgotten, even if it was provided previously. For example, it is currently assumed under GDPR that a subject can withdraw from a clinical trial. This could now also mean the removal of subject-specific clinical data from the trial dataset. For regulated clinical trials, this could pose a problem. Trials are powered to demonstrate statistical significance while minimizing unnecessary subject exposure. If a study is nearly complete and a subject withdraws consent, including for the use of their personal data, this has the potential either to extend the duration of the trial or, even worse, to under-power a locked study, rendering the study data useless.

Implementing GDPR for Clinical Studies

Good clinical practice (GCP), section 4.9.0, requires that ‘source data should be attributable, legible, contemporaneous, original, accurate, and complete. Changes to source data should be traceable, should not obscure the original entry, and should be explained if necessary (e.g., via an audit trail).’ This means that clinical data should not be permanently deleted. GDPR recognizes this important regulatory requirement.
Recital 156 of GDPR ends with the statement, ‘The processing of personal data for scientific purposes should also comply with other relevant legislation such as on clinical trials.’ In layman’s terms, this means that the requirement to follow GCP takes precedence over the right to be forgotten. Article 89 of GDPR also considers collecting data for purposes of public interest and scientific research. It allows data to be collected and re-used, as long as appropriate controls are in place, such as anonymization and the full removal of personally identifying information (for example, in secondary processing and analysis). Thus, GDPR provides a standard framework for personal data collection for all EU citizens.

Summary

Companies still need to assess the technology, processes, and procedures around the collection, processing, and storage of clinical trial data. They also need to ensure that the technology has the correct security and that data is not shared or processed without the appropriate controls. GDPR provides strengthened controls for all EU citizens on a global basis and thus has far-reaching consequences. It also provides a pragmatic approach for managing clinical and scientific data retention and security. By obtaining informed consent, sponsors have the right to collect and use data for scientific research in a planned and controlled way. Oracle Health Sciences has reviewed the requirements of GDPR and provided a summary of how GDPR impacts each of our products. These can be viewed by our customers here. In my next article, I’ll look further at this topic and the impact on informed consent.


Health Sciences

Business Transformation, Value, and Success

The success of any business depends in large part on the success of its customers. The customer defines both their success and, largely, the means by which it is realized. As a software vendor, Oracle Health Sciences is key to catalyzing value realization, not only through the provision of software and supporting services, but also through the process of business transformation, which in our domain is both commonplace and essential. This is further complicated by competing expectations from multiple internal and external stakeholders, which may lead to misaligned objectives. Managing expectations successfully across multiple lines of business, and across both the customer and the vendor organization, is a guiding principle for delivering a win-win scenario for all stakeholders.

Given the exhilarating pace at which clinical science is moving forward, promising to dramatically alter future healthcare outcomes, sponsors must make deeper and broader use of their data, their people, and their processes. Hence the increased focus on re-engineering current business processes using innovative technology and re-imagined, data-science-driven roles. Helping realize this clinical R&D ambition using technology has proven to be complex, expensive, and risky. This article looks at why IT projects don’t always deliver the business benefits initially envisioned with complex, strategic business transformation initiatives, and how Oracle is addressing this delivery gap with a mechanism to measure customer success and satisfaction with our products.

Business Transformation and IT Project Failure

Recently, I came across an interview with the new Novartis CEO, who discussed what is becoming a mantra across the life sciences industry, namely, the transformation of clinical R&D to support new digital, patient-centric research paradigms.

Figure 1. Clinical R&D is becoming data centric and digitally enabled.
It is one thing to articulate a vision; it is quite another to develop a strategy to realize and deliver it across the enterprise. Digital transformation in any industry is costly, risky, and time-consuming. Where IT is required and plays an essential role in the transformation of the business, it is often the case that such strategic initiatives do not deliver the promised results. The Standish Group periodically publishes an insightful report that describes the reasons IT projects fail to deliver prespecified business benefits. It is quite astounding that only a third of IT projects are deemed successful, while the remainder fail or are challenged.

Figure 2. How successful are IT projects?*

Looking more closely into the underlying reasons for this abysmal rate of success, the report suggests that involving end users and working with them is a principal driver in mitigating project failure. Furthermore, optimizing project delivery milestones with sub-project tasks and clearly identified, aligned business objectives also significantly influences project success, as does using a consistent group of integrated practices, services, and products, which, in many cases, is included as part of a project implementation and delivery methodology.

Figure 3. List of project success factors*

It is worthwhile to understand some of the definitions listed in Figure 3. They are described in Figure 4.

Figure 4. Definition of project success factors*

For any IT project to succeed, there must be strong user involvement, and an integral part of the project delivery methodology must be designed to deliver a consistent, repeatable customer experience that is aligned with evolving customer expectations.
Realizing Customer Success with Oracle Data Management Workbench

In the clinical data management area of many life science companies, there is an inevitable desire to undertake some element of business process transformation to achieve new operational efficiencies. Typically, organizations are challenged with process inefficiencies relating to:

Managing clinical data flow by removing bottlenecks, process latencies, and redundant processes.

Providing clinical data to stakeholders so they can make the best decisions as quickly as possible.

Dealing with change over the lifecycle of the clinical study so that change does not lead to delays and costly operations.

Oracle Data Management Workbench (DMW) is a solution that offers the potential to deliver significant business benefits by addressing these process inefficiencies. Using this solution, customers can re-engineer their business processes by configuring the solution with automation workflows, libraries, templates, and standards. However, implementing DMW can be accomplished in a number of ways, driven primarily by the customer’s specific business processes and priorities. Given the historical reasons for challenged projects, it is therefore important to involve the customer in defining project success and establishing a baseline against which success can be measured, accommodating implementation variability. Drawing on its extensive experience, Oracle Health Sciences has developed a process and success measurement scorecard to help deliver Oracle DMW projects.

Customer Success Scorecard

To achieve a level of consistency across customer implementations and to help customers identify and proactively manage critical project success factors, Oracle has developed a customer scorecard that can be used to align implementation objectives with the customer’s business priorities.
Using the scorecard as part of a customer engagement process to help deliver customer success is now an integral part of the implementation delivery methodology and continues throughout the customer implementation life cycle. The primary objective of the customer success scorecard is to document known risks and to identify misaligned objectives between the customer and the IT vendor on an ongoing, periodic basis.

Figure 5. Customer success scorecard.

The scorecard describes the categories across which customer objectives can be aligned with the implementation objectives. It is completed for each customer prior to the implementation of a DMW business release, to baseline current performance against the existing business process, and it is updated periodically post-implementation for subsequent business releases, to track operational efficiency and identify any gaps in the delivery methodology. By working with the customer, engaging in a structured dialogue that systematically considers different implementation tracks, and asking detailed questions that are documented, it is possible to identify misaligned expectations and implement remedial measures. The scorecard provides an early warning system, highlighting areas of weakness between organizations, and helps to structure the different considerations required to make customers successful. By following this process, the level of customer engagement, together with a strong focus on business objectives, can ultimately lead to more successful customers and more effective DMW implementations.

* Images 2 through 4 can be found at this URL: https://www.infoq.com/articles/standish-chaos-2015


Health Sciences

Evolving AI for Safety and Multivigilance

Today, as changes in safety regulations worldwide generate a significant increase in the number of adverse event (AE) cases that life science companies must process, the variety of big data sources that can be mined for safety signals is also rising. These trends translate into huge new pressures on safety organizations as they carry out their mission of multivigilance (the umbrella term for pharmacovigilance or drug safety, vaccine safety, and medical device safety). The multivigilance process covers the entire lifecycle of a medical product, from clinical trials through post-marketing surveillance. Currently, Oracle Health Sciences is working with artificial intelligence (AI), machine learning (ML), natural language processing (NLP), and deep learning technologies to address the trends of increased case volume and additional signal sources.

Have you ever heard the phrase, “everything old is new again”? Most likely, it reminds you of the Peter Allen song of the same name from the movie All That Jazz. The sentiment of that phrase is truly universal, because Oracle Health Sciences, working hand in hand with Oracle Labs, has translated it into new approaches for next-generation multivigilance. Oracle is accelerating cognitive computing concepts and applying them to both safety case management and safety signal management. Today, these technology innovations can already play a major role in creating a cheaper, faster, and more efficient case management process, as well as making medicines safer for patients by detecting risks earlier.

Artificial Intelligence through a Safety Lens

Oracle is applying AI methodologies such as NLP, ML, and deep learning to the areas of safety and multivigilance. Two such examples are the extraction of AE information from unstructured data to automate manual processes, and the identification of AE signals in diverse data sources.
Oracle’s subject matter experts have many years of experience in both safety and AI. With the ultimate goal of embedding AI across the end-to-end multivigilance workflow, for both case management and signal management, subject matter expertise is key.

Modern work on AI began in earnest in the 1950s, when computers were first trained to learn patterns and relationships in data, instead of humans pre-programming the rules for their algorithms. Initially, NLP systems offered a very basic interpretation of word sequences and contexts, so results were not highly accurate. Neural networks are part of a wider class of machine learning techniques inspired by the function of neurons in the brain. Today, with advanced computing power, Oracle uses deep learning, which is based on large architectures of neural networks designed to learn exceedingly more complex data representations. This, in turn, enables improved accuracy in machine learning prediction and classification. Deep learning is based on several different architectural approaches that are typically organized as layers of neurons, as shown in Image 1, and which are used to simulate very simple versions of brain functionality.

Image 1. A neural network of layered nodes. Image 2. A single neuron.

In Image 1, each small circle is a node representing a neuron (as in Image 2) that essentially acts as a filter or gate controlling the passing of signals from one neuron to another. It takes in a numeric value, processes that value, and outputs another numeric value. When the output value is high, the neuron is said to fire, sending information on to the next-level neurons. Generally, including more neurons and more layers in a neural network allows for higher-order learning. In mathematical theory, deep learning can achieve 100% accuracy; in practice, for certain tasks and with a suitable neural architecture, deep learning can attain close to perfect accuracy.
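The neuron-as-gate behavior described above can be illustrated with a minimal sketch. This is a toy example with arbitrary, hand-picked weights, not any production architecture; a sigmoid is assumed as the "gate" function, which is one common choice.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs passed
    through a sigmoid 'gate'. A high output means the neuron 'fires'."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # squashes the output into (0, 1)

def layer(inputs, weight_rows, biases):
    """One layer of neurons: each neuron sees the same inputs and
    produces one output, which feeds the next layer."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two stacked layers: 3 inputs -> 2 hidden neurons -> 1 output neuron.
# The weights here are arbitrary; training would learn them from data.
hidden = layer([0.5, -1.0, 2.0],
               [[0.4, 0.3, 0.9], [-0.6, 0.1, 0.2]],
               [0.0, 0.1])
output = layer(hidden, [[1.5, -0.8]], [-0.2])
print(output)
```

Stacking more such layers, each feeding the next, is what makes the network "deep" and lets it learn progressively more complex representations.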
This near-perfect accuracy is one of the main reasons for deep learning’s growing application. Before creating an automated system that can recognize AEs in unstructured free text, researchers have to teach the algorithms to recognize what an AE is, using large sets of training data. This is where the critical element of domain expertise comes in, refining the program’s understanding of language and context.

The Role of Domain Expertise in Safety AI

There are three areas where safety domain expertise, in addition to AI expertise, is essential to building a solution that works well: training data preparation, model design, and lexical resource creation.

The quality of the solution depends on the quality of the training data. In the example of AE case intake, information can come from healthcare professional (HCP) reports, consumer reports, literature articles, health authorities, clinical trials, patient support programs, social media posts, and more. Until now, humans have had to read these narratives manually, pull out the AE information, and enter it into database fields. Today, domain experts can curate the training data that AI applications need to extract AE information automatically.

Domain expertise is also an important element in the design of classification or prediction algorithms (models): for example, determining which data elements the algorithm should use, which data transformations are necessary, and, in NLP, what context of nearby words the algorithm needs to examine in order to distinguish between true AEs and other medical terms that are not AEs (such as indications and historical conditions). Domain experts are also key to stitching together a complete model from a hierarchy of individual algorithms that each do one particular task well, such as detecting which check boxes on a form are ticked or identifying a patient’s birth date.
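To see why nearby context matters, here is a deliberately simplified, rule-based sketch (not the machine learning approach described above) that labels a dictionary term as an adverse event only when no indication or history cues appear nearby. All terms, cue lists, and names are invented for illustration.

```python
# Hypothetical sketch: why context distinguishes a true adverse event
# (AE) mention from an indication or medical history. Everything here
# is a toy stand-in for curated lexical resources and learned models.

AE_LEXICON = {"nausea", "headache", "rash"}          # toy medical-term dictionary
HISTORY_CUES = {"history", "pre-existing", "prior"}  # context suggesting history
INDICATION_CUES = {"for", "treat", "indicated"}      # context suggesting indication

def classify_mentions(text, window=3):
    """Label each lexicon term found in `text` as an adverse event,
    unless the preceding context window suggests otherwise."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    labels = []
    for i, tok in enumerate(tokens):
        if tok in AE_LEXICON:
            context = set(tokens[max(0, i - window):i])
            if context & HISTORY_CUES:
                labels.append((tok, "history"))
            elif context & INDICATION_CUES:
                labels.append((tok, "indication"))
            else:
                labels.append((tok, "adverse_event"))
    return labels

print(classify_mentions(
    "Patient has a history of headache but developed nausea after dosing."))
```

A real NLP model learns these contextual distinctions from curated training data rather than from hand-written cue lists, but the underlying question it must answer for each mention is the same.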
Lexical resources (such as dictionaries of medical products, medical conditions, HCP occupations, English names, and consumer health vocabulary) are used to enhance the performance of learning algorithms: they are a mechanism for incorporating existing domain knowledge into the learning process. Domain expertise is needed both to create such lexical resources and to integrate them with the learning algorithm. By including safety domain expertise, researchers can achieve much higher quality and much greater accuracy for AI applications in the multivigilance space.

Oracle and AI in Multivigilance

Oracle is currently combining multi-layered, deep learning cloud architecture with its extensive safety domain expertise to explore how these AI methodologies can automate AE case intake and detect signals earlier. As the only software developer with decades of experience in both multivigilance and AI, Oracle provides its customers with a very competitive solution. The exploratory project, Oracle Safety One Intake, aims to address issues including: the time-consuming, inefficient, and error-prone manual processing of incoming AE reports; the problem of multiple intake methods creating multiple work queues; the disparate kinds of data privacy protection offered via different methods; duplicate detection issues; and the manual workarounds for staging follow-up information. In addition to case processing, other safety AI projects under consideration include call center log AE flagging and multi-modal signal detection.

The Past as Prologue

By building on existing AI technologies, advancing them, and applying them to the safety domain, Oracle enables the vision of the past to become a prologue to the future for multivigilance. Through advanced computing, Oracle is evolving the original approaches to machine learning and neural networks into the next generation of safety technology.
And, with Oracle's domain expertise in both safety and AI, real value can be achieved today for customers. For immediate AI benefits, Oracle Health Sciences Consulting can deliver safety solutions today that use the core Oracle AI engine. Contact Oracle if you would like to schedule a demonstration of the Oracle Safety One Intake prototype, participate in the beta test of Oracle Safety One Intake, or find out how Oracle Health Sciences Consulting can implement AI in your organization right now. View a webcast recording about AI for safety, and learn more about the Oracle Health Sciences Safety Cloud.

Safe Harbor Statement

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


Health Sciences

The DNA of Genomic Data Protection

When people upload their genetic information to a genealogy site, they often hope it will lead them to a relative who's famous. They probably don't expect that relative to be infamous, or the leading suspect in a 40-year-old "cold case" involving mass murder. Yet that's just what happened earlier this year, when police in California turned to publicly available DNA data in a long-shot effort to solve the "Golden State Killer" murders. Detectives plugged in DNA profiles from crime scenes in the case and got back several partial matches of likely relatives. Those became starting points for a "family tree" of the killer, eventually leading to a man police observed until they could obtain a known DNA sample, which they said matched the profile from the crime scene.

While the police and the families of the victims celebrated, the news also made many consumers aware of just how easily the health data they produced for one purpose could be used for another, without their knowledge. That underscores the ever-growing problem of data security for personal medical information. The patient information in modern medical databases holds enormous potential for diagnostic and treatment decision support in healthcare institutions. It is crucial to developing innovations in risk assessment, outcomes improvement, and clinical efficiencies. The information is also highly sensitive because it reveals so much about an individual. As the volume and sophistication of the data grow, the industry faces heightened scrutiny from consumers and increased regulation to protect privacy. That, in turn, has created an enormous need for the industry to address the complex security challenges of this data, protecting patients without losing the value that the information offers them and the healthcare system itself. Healthcare institutions and their partners need to make smart planning and technology decisions to stay ahead of the problem.
This is an area where technology companies can partner with the healthcare industry to combine security and innovation, driving these initiatives forward while minimizing risk. While the cloud offers the greatest potential for security and sharing, the industry is moving there cautiously, given the high stakes. Many hospitals and other providers are seeking interim solutions that provide an array of security, identity, and access management tools that work with their on-premise IT today, as well as in an eventual transition to the cloud. Often, these institutions will first attempt to create their own security and identity system using their in-house developers. Others will try to address the problem by adding a series of single-point security tools from a wide variety of vendors to each subsystem or individual database. There are some fundamental problems with this approach, and they often cause institutions to abandon the attempt. For one thing, building your own security and identity system across all of a healthcare institution's functions is a difficult undertaking, one that requires expertise that many institutions lack. That complexity, and the time required to complete the job, make it an expensive undertaking as well. At the same time, building a system in piecemeal fashion is likely to create inefficiencies, because it requires duplicating functions, like auditing, at each subsystem or database. Most importantly, "build your own" is an approach that actually can increase security risks. For example, imagine a hospital that wants to ensure it is maximizing value-based care. That means its analysts are routinely federating data from a number of different systems, checking patient records for proper identity and potential duplication, pulling provider notes, and cross-referencing with cost data.
Every one of those data requests presents the potential for a security problem, and a weakness in any subsystem could expose a significant amount of patient information. Oracle takes a different approach by putting all the various data sets into a single, aggregated data warehouse with healthcare-specific security, authentication, and auditing tools embedded. Now that same analysis is not only more efficient but also more secure. For example, sophisticated access control protocols can ensure that this sensitive health data is available only to certain users under specific circumstances. This can be combined with data masking that obscures portions of the information the user isn't authorized to see, as well as encryption that protects the information in the case of a breach by hackers. It also reduces the auditing burden: it's much easier to monitor access to a single environment than to several. Finally, this approach positions institutions for an eventual transition to the cloud. Both the data and the analysis tools used on-premise can be relocated and used from the cloud, along with increased levels of security made possible by the enormous resources involved in cloud management. Instead of institutions having to create and manage their own data security and privacy technology, they can turn the task over to experts who are managing those issues across the industry and in alignment with changing global regulation. Whether the concern is EHRs, genetic profiles, or any other health-related information, security and privacy concerns will only keep getting more serious. Making the right technology choices now will help institutions avoid serious issues today and into the future, while taking advantage of the power of technology and data to do better for patients and the bottom line.
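As an illustration of the access-control and data-masking ideas described above, here is a minimal sketch in Python. The roles, field names, and masking rules are hypothetical, and this is not how Oracle's tooling is implemented; in a real system these controls live inside the database engine, not in application code.

```python
# Illustrative role-based masking rules (assumed, not Oracle's):
# each role maps to the set of fields it is NOT allowed to see.
MASKING_RULES = {
    "analyst":   {"patient_name", "date_of_birth"},  # analysts see de-identified data
    "clinician": set(),                              # clinicians see everything
}

def mask_record(record: dict, role: str) -> dict:
    """Return a copy of a patient record with fields the role
    may not see replaced by a masked placeholder."""
    hidden = MASKING_RULES.get(role)
    if hidden is None:
        # Unknown roles are denied outright (access control).
        raise PermissionError(f"unknown role: {role}")
    return {k: ("***" if k in hidden else v) for k, v in record.items()}
```

The design point the post is making holds even in this toy: because masking is applied in one place for every query, there is a single policy to audit, rather than one per subsystem.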


Life Sciences

Clinical Trial Automation, Standardization, and Collaboration - Commentary on a Recent Snap Poll

A common set of goals exists for all who touch a compound, drug, or clinical study: bring safe, effective treatments to patients quickly, and keep costs down while doing so. A recent poll addressed just one component of managing and executing a clinical trial: the data. With a goal of better understanding how efficiently and cost-effectively massive amounts of data are shared between trial sponsors and CROs, we asked three questions:

1. From request to receipt, how long does it take to get clinical trial data from your CRO partners?
2. How much does it cost you each time you ask for a data pull from your CRO partner?
3. Do you feel like you have a single source of truth for your trial data?

Understanding that data moves between CROs and trial sponsors, as well as around a company to various stakeholders, in a variety of formats, we wanted to look at the time, cost, and risk associated with the movement and sharing of clinical data.

Real-Time Access to Clinical Data: Truth or Legend?

In responding to our first question, over 65% of poll respondents indicated that it took more than five weeks to get trial data from their CRO partners. With that lengthy turnaround time, how was it possible to respond quickly to any problems with the data? And if you're working on a set of data, it's highly unlikely that you have the most recently cleaned, validated data. In those situations, how can you respond quickly to challenges when you can't be confident that you have the data you need, when you need it? One respondent indicated that he had reached the Holy Grail: real-time access to data for both the CRO and the trial sponsor. If all trial sponsors had that kind of access to their data, imagine the efficiencies they would gain and the reduction in costs.

Requesting Clinical Data: What's the Cost?

Our second question asked about the cost of sharing data.
With increasing budget pressures (and decreasing budgets!), we wondered if sharing and acquiring data was as cost-prohibitive as we thought. A handful of our respondents weren't sure of the cost of sharing data with their CRO partners, but more than 50% indicated that it cost between $10k and $30k every time they requested data from their CRO, and 13% shared that it cost more than $30k for each data request. The caveat: a certain number of data requests are written into the trial sponsor/CRO contract. Ad hoc requests are a different story, and those can add up, especially as data moves around the organization and additional data points are needed to help support the clinical trial. If you have to request data multiple times, waiting five weeks each time, the trial faces delays, and the associated cost of both the data and the trial delay grows and grows. What if you could access data in real time, at no additional cost? Simplifying end-to-end data flow, because your CRO partners aren't the only stakeholders with an interest in the clinical data, will lead to lower costs, higher quality data, and less risk.

Clinical Data Volume and Variety: What's the Truth?

Finally, we asked about a single source of truth. Fifteen percent of respondents indicated that they feel they have a single source of truth for their clinical data, while 82% shared, sometimes accompanied by laughter, that they do not. One respondent noted having a single source per data point, but not one source for all data. That brings up an important point. In addition to traditional data sources, there are mHealth, electronic health record (EHR), and real world evidence (RWE) data that provide invaluable information to a data scientist. This data comes in a multitude of formats and needs to be aggregated and then standardized to actually be useful. If this could be done automatically, imagine the potential ease in making important clinical decisions quickly.
As the variety and volume of data increase with the addition of data sources like mHealth, EHR, and RWE, and as the number of people touching the data in a clinical trial grows, technology needs to adapt. To aggregate, clean, standardize, and trace the lifecycle of a data point, and then to deliver and share data as a single source of truth, advanced clinical data management techniques and tools must be leveraged. Luckily, this technology exists today, so researchers can address these challenges now for an immediate, positive impact on cost, timelines, and risk mitigation.


Health Sciences

Clinical Trial Speak & Acronyms for Today-Part 1

Do acronyms distract you? When I'm reading and come across one I'm unsure of, it often forces me either to pause and wonder what it means, or to stop reading completely to look it up. Where do acronyms come from, anyway? After a little digging, I discovered that the first inklings of acronyms date back to before the Common Era (BCE!). To me, that's ironic, because I always think of these combos as typically socially oriented and very current: terms representing an evolution of the very American need for shorter, faster ways of talking about older concepts (LOL, TMI, BTW, etc.), but also current ways of promoting new trends (GDPR, IM).

Acronyms in Life Sciences

In Life Sciences, let's consider, for example, EDC, or electronic data capture. First introduced in the early 1990s as a remote data entry solution for laptops, EDC software allowed study coordinators to enter patient data into a system in real time during a site visit, instead of having to capture information on a paper form and maintain paper files. With the rise of the Internet in the mid-1990s, web-based applications came next, eliminating the hefty expenses associated with deploying, supporting, and maintaining all those laptops at each site location. The EDC landscape continued to evolve with more advances in web-based solutions and features. In 2013, the FDA introduced eSource guidance for capturing trial data electronically at the start of a trial and moving it to the cloud. Currently, the emergence of mHealth has pushed the EDC process still further. With the ubiquitous availability of mHealth devices, apps, and wearables allowing remote, wireless monitoring and upload of data to cloud-based eClinical environments, the EDC process has morphed again. At this stage, though, is the acronym EDC even relevant? Has the technology evolved beyond the trend that the EDC acronym once represented?
The term "electronic data capture" effectively contrasted with the old method it was replacing, paper data capture, 20 years ago. But what is the software doing for us now? Today, EDC systems provide simplified setup and trial design, fast mid-study updates, real-time data access and visibility, continuous data management, and faster handoff to biostatistics. Is EDC still an adequate term for all this? In addition, with new, unified, cloud-based eClinical platforms, is EDC now just a feature? Or is it an element of something broader, like one of many capabilities that support the processes required to get a drug to market? At Oracle Health Sciences, we are moving in this very direction. We are replacing siloed point solutions for various clinical trial processes with a single, unified clinical cloud platform. We call this platform Oracle Health Sciences Clinical One™. But what will the Life Sciences industry call these unified trial platforms? And more importantly, because we are all very busy, what will the acronym be?! What do you think?


Health Sciences

Top Four mHealth Advantages for Clinical Trials

Note: The following situation is imagined. The response to this kind of situation is real.

The Situation

The patient had an aggressive form of lung cancer, and traditional treatments weren't working. His doctor recommended him for a clinical trial that was testing a new therapy. Initially, he turned down participation. The problem was that he had trouble breathing when he went out to buy groceries. The idea of traveling to a distant trial site on a regular basis filled him with fears of an ongoing, exhausting situation, possibly fraught with additional obstacles he couldn't even imagine.

The Response: Patient Centricity & Site-less Trials

The research team had a solution to the lung cancer patient's problem. They gave him a wearable wireless sensor that allowed him to participate in the trial from his living room. This made study participation easy and seamless, and because he didn't face obstacles, the patient stayed with the study until it was completed. The sensor, connected to his phone by an mHealth application, sent his vital signs via Bluetooth to the cloud, which passed the data to a clinical trial application that organized, managed, and analyzed it for the study. mHealth, the use of wireless technology and wearable sensors, makes prioritizing patient needs in clinical trials, or "patient centricity," possible. Wireless apps like Oracle Health Sciences mHealth Connector Cloud Service can monitor patients remotely on an ongoing basis and upload their data to the cloud. Rather than a once-a-month "site visit snapshot," the continuous tracking of vital signs (such as blood oxygen, blood pressure, temperature, etc.) can provide a truer picture of response to a therapy, with real-time data on adverse events. Because it is effortless for the patient, mHealth methodology may also lower the trial dropout rate.
Remotely monitoring a patient's vital signs multiple times a day, combined with collecting pain scores via a mobile app, could also lead to identifying new digital biomarkers that could accelerate disease understanding.

Fast Access to eSources and Trial Scalability

The point of a clinical trial is to evaluate the safety and efficacy of a new drug for the market by monitoring patients' reactions to the new therapy. However, the trial's population group may have limited participant criteria that don't represent all potential users. The relatively new availability of big data eSources (such as drug safety report databases, EMRs, and other medical literature) can expand and amplify the information on patient reactions and adverse events developed during the trial. But incorporating and analyzing these additional terabytes of eSource data can pose major technology challenges for clinical teams. Oracle's mHealth Connector Cloud Service makes it easy for researchers to scale the trial, as well as acquire, monitor, and distribute those terabytes of real world data via a single platform in near real time. Using a federated metadata approach, the service delivers the data to other cloud-based eClinical capabilities for data capture and management, which gather, store, clean, organize, and prepare it in comparable formats for a variety of advanced analyses, visualizations, and use cases, offering enhanced trial results. In addition, as new digital workflows mature within an organization, they can be quickly deployed at scale across many trials using the same mHealth Connector platform, without the need for complex integration projects. mHealth has the potential to change the face of clinical trials. It is ushering in a new and exciting era of site-less trials that enable remote patient participation and continuous monitoring for more accurate insights.
With its seamless ability to include third-party big data eSources at the click of a mouse, mHealth also offers the promise to enhance trial results, accelerate the regulatory process, and speed life-saving drugs to market faster. Learn how the Oracle Health Sciences mHealth Connector Cloud Service can empower your trials with big data support and make your studies more patient centric. See our video, read our white paper, and contact Oracle Health Sciences today.


Health Sciences

Trials in the Cloud - Keeping the Focus on the Science, Not the Technology

Randomized clinical trials that assess the benefits and safety of a new drug or therapy are the gold standard in medical research. If well planned, they can offer assurances about a treatment that cannot be achieved in any other way. Currently, though, a clinical trial can cost up to $2.5 billion and take 10 to 15 years to complete. Add to these long durations and steep costs the traditionally siloed nature of clinical data capture and management applications and the new availability of huge (terabyte-scale) third-party real world evidence eSources for trial support, and you have an industry ripe for evolution. To lower costs, shorten cycle times, and get life-saving medicines to patients sooner, the technology supporting clinical trials needs to be reimagined. Today, the power and efficiency of a cloud-based service environment offers the best hope for faster, less expensive, more scalable trials.

The Best Hope

Have you ever heard the phrase, "the whole is greater than the sum of its parts"? That's what the cloud can do for clinical trial processes: make them better by creating a fundamentally new way for them to work together, rather than remaining siloed and stand-alone. The cloud can provide a unified, secure, and scalable environment offering real-time, universal access to centrally stored data that only needs to be entered once, common tools shared across all processes, and enhanced collaboration among all stakeholders, leading to higher data quality and greater productivity. Let's take a closer look at how a cloud environment can help clinical trials of any size.

Portfolio Planning

Traditionally, portfolio planning, building trial budgets, and tracking resources have taken months to research and set up. Researchers need to answer questions including: Do we have the resources to forecast reliable spending? Is our portfolio optimized for time and cost? Are we paying fair market value for our outsourced studies?
Using industry-specific algorithms for benchmarking and ballparking, the cloud environment can quickly address these questions to create a unified and flexible clinical trial budget, track access to resources, and develop schedules for payments and regulatory submissions. Drawing on integrated, purpose-built financial planning and sourcing tools for accurate projections, a cloud-based study build can add flexibility, efficiency, and transparency to study planning, forecasting, budgeting, and outsourcing. And it can perform these tasks in days, instead of weeks.

Randomization and Trial Supply Management

Randomization enables researchers to remove selection bias from a trial. Historically, the process of finding eligible patients and assigning them to groups can take months to complete, sometimes even longer. In a cloud environment, parameters for randomization processes can be set up quickly. Research into large electronic medical record (EMR) databases of potential patient candidates can be accessed and conducted in near real time. The cloud environment can offer self-service, on-demand tools that define and configure a randomization strategy, adjust trial limits, and set notifications. The strategy can also be applied at the study, site, or region level, as well as assigned to specific events in a study. Trial supply management ensures the right medical supplies are delivered to the right patients at the right time, while minimizing waste. Traditionally this has been a time-consuming process fraught with delays, missed details, and human error. In the cloud, trial supply management can be easy, immediate, and self-service. Using guided tools, researchers can restrict who accesses supply files, who manages the drug inventory, and who is authorized to ship drugs to patients, making these functions more controlled and less a victim of "too many cooks".
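To make the randomization strategies mentioned above a little more concrete, here is a minimal sketch of permuted-block randomization, one common strategy for keeping treatment arms balanced as subjects enroll. The arm names and block size are illustrative assumptions; a real RTSM system layers stratification, site-level rules, blinding, and audit trails on top of this basic mechanism.

```python
import random

def block_randomize(n_subjects: int, arms: list, block_size: int,
                    seed: int = 42) -> list:
    """Permuted-block randomization: within each block, every arm
    appears equally often, so group sizes stay balanced over time."""
    assert block_size % len(arms) == 0, \
        "block size must be a multiple of the number of arms"
    rng = random.Random(seed)  # fixed seed for a reproducible schedule
    assignments = []
    while len(assignments) < n_subjects:
        # Build one balanced block, then shuffle it.
        block = arms * (block_size // len(arms))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_subjects]
```

With two arms and a block size of four, every four consecutive subjects contain exactly two of each arm, which is why the groups never drift far out of balance even if the trial stops early.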
Authorized study team members using centralized overviews can forecast the supplies needed for subjects at the various clinical sites, create orders to the supplying depots, update inventory systems on the receipt of consignments, and dispense kits to subjects during their site visits with much greater accuracy, proficiency, and timeliness.

Data Capture

In the past, clinical data capture meant collecting data from patients during trials and having clinical assistants enter the data into several different files. Initially done on paper by hand, data capture has evolved to electronic data collection. But this stand-alone, siloed process is also full of redundancies, time delays, and errors. Today, in addition to study data, there is a multitude of other third-party eSources (such as medical records, biomarkers, labs, and many more) available to researchers to strengthen trial results. The biggest challenges to including portions of these eSources are the time delays and issues related to loading and integrating the selected structured and unstructured data into trial systems. In a cloud environment, drawing on several capabilities to aggregate, organize, clean, manage, and analyze data, these issues evaporate, making it possible to create a single source of truth and deliver the right data to the right stakeholder in near real time.

Data Management

Due to the isolated nature of traditional, siloed data systems within a trial, a lot can go wrong. Data sharing is difficult, and the need for enterprise-wide responses to issues such as information security and regulatory compliance often imposes an inefficient doubling or tripling of the work required of different stakeholders. Cloud computing offers real-time, transparent visibility into the trial data, making it possible for all stakeholders to share data, address security issues, and respond to regulatory needs through shared screens as they arise.
With the capability to organize, clean, and manage data from a single interface on a common platform, the cloud streamlines tasks connected to lab management, data transformation, coding discrepancy management, data review, batch validation, and more. It also allows researchers to check the performance of sites across regional and global geographies for data quality or trial risk issues, and to compare anonymized data across similar trials for results correlations and analyses.

Trial Operations

Traditionally, these systems focused on maintaining and managing up-to-date trial performance and reporting. Milestones for tasks such as the electronic trial master file (eTMF), site scoring, and vendor management were siloed and time consuming. With a few clicks on an intuitive interface, the cloud can provide fast access to information from a wide variety of trial areas. Information from systems including the eTMF, milestone documentation, source document reviews, site and trial monitoring, study training, and more can be shared and accessed by all stakeholders through a common set of functions.

Risk Management

Risk management, different for every trial, involves the monitoring of the people, processes, and technology associated with a given clinical trial. Tracking safety-related issues in a clinical trial involves many trial capabilities. It needs to be focused on key risk indicators (KRIs) and trigger management, subject-level data reviews, study-specific analytics, and targeted site support, all derived from the site level. Additionally, at its core, risk management is also about tracking and pinpointing the risk factors of patient-related adverse events (AEs) and their causes as part of a larger regulatory record to meet compliance standards. Traditionally, risk management KRIs and AEs were kept in separate data silos, which made final reporting in a complete record time consuming, inefficient, and often not as accurate as needed.
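As a simplified illustration of what a KRI-based statistical check might look like, the sketch below flags sites whose adverse-event rate is an outlier relative to the study mean. The threshold and the example rates are illustrative assumptions, not a production risk model, which would use more robust statistics and many KRIs at once.

```python
from statistics import mean, stdev

def flag_outlier_sites(site_rates: dict, threshold: float = 2.0) -> list:
    """Flag sites whose adverse-event rate deviates from the study mean
    by more than `threshold` standard deviations (a basic KRI check)."""
    rates = list(site_rates.values())
    mu, sigma = mean(rates), stdev(rates)  # sample standard deviation
    if sigma == 0:
        return []  # all sites identical: nothing to flag
    return [site for site, rate in site_rates.items()
            if abs(rate - mu) / sigma > threshold]
```

Running a check like this continuously over centrally stored data, rather than during periodic site visits, is the essence of the centralized, risk-based monitoring the post describes.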
Risk management in the cloud resolves many of these problems. The cloud environment continually monitors site data to identify risks based on statistical models. From a centralized monitoring screen, adverse events can be addressed in near real time, and protocol adjustments, per patient or per study, can be made at any time from anywhere. In addition, due to its ability to draw on new, relevant data sources with ease, the cloud allows trial teams to stay aware of regulatory guidelines as they are issued. Risk data visualizations and analytics from a current trial can also be shared with a larger enterprise repository for use in future trials.

Cloud-Based Trials

Taken altogether, the cloud not only re-conceptualizes the trial as a service, but also makes the trial better by reinventing how the technology runs the entire study. And within this secure environment, it allows the researcher to focus on the trial science, not the technology. The global eClinical solutions market is expected to grow over 13% annually to $12 billion by 2025, reinforcing the industry's confidence in cloud-based clinical trials. Supporting this vision, Oracle Health Sciences offers its Oracle Health Sciences Clinical One™ cloud-based eClinical environment, the only platform in the industry purpose-built from the ground up to support a trial from start to finish. Providing a series of capabilities, starting with Oracle Health Sciences Clinical One Randomization and Supplies Management Cloud Service, the platform unifies, accelerates, and improves clinical trials, delivering better, safer, life-saving drugs to the market sooner. Want to advance your trials? Let Oracle's cloud-based Clinical One help. Contact us today.


Life Sciences

A Short Call with the FDA

Did you know that over 15,000 clinical trials are currently active around the world, with another 45,000 trials actively recruiting? This was the statistic provided to me by a representative from the U.S. Food and Drug Administration (FDA). I will share some of the interesting aspects of this conversation, which directly influence how we view clinical data today and the technological direction we need to move toward for the future.

Clinical Data Volume

45,000 clinical trials generate an enormous amount of data. Not all of the data is useful. Much of it is discarded or archived, as neither the clinical value of the information can be established nor patient consent provided for secondary use. A quick back-of-the-envelope calculation suggests that over $1 trillion worth of data is generated and never used again. The fundamental constraint of obtaining patient consent for secondary use means there is a huge opportunity to collect data differently and to maximize its utility. In this regard, the FDA is piloting a program to create a "data market" designed to empower patients, healthcare providers, and insurance companies with the right balance of incentives to optimize healthcare outcomes. A key consideration is the balance of incentives within a well-defined ecosystem that empowers each of the stakeholders to contribute and collaborate. Historical approaches that have tried to aggregate and share patient data using personalized portals and information vaults have not gained widespread traction. Part of the reason for this failure has been the lack of incentives for the patient to contribute data, for insurance providers to use the data to make meaningful and effective healthcare decisions for the patient, and for technology providers to create a secure and robust data management ecosystem.
Ultimately, all of the stakeholders need to collaborate by sharing data, interpreting and predicting outcomes, and identifying optimal treatment pathways that maximize patient safety and make economic sense. On a side note, it turns out that the vast majority of healthcare costs associated with an individual are incurred within the last few months of life. This point is dramatically illustrated by the cost of oncology care.

The Rising Cost of Healthcare

Even when the very best care is provided (and oncology care spending is expected to increase to over $150 billion per year by 2020), much of this expenditure is off target or ineffective. Healthcare providers are unable to determine the efficacy of treatment due to a lack of data that correlates the indication, the genomic profile of the individual, and the treatment. It is also apparent that this is leading clinical research development organizations to look at more personalized treatments that clearly demonstrate value for money. So how can we make headway against these strong economic and scientific headwinds?

Ready, Steady, Sequence

One suggestion is to help everyone have their genome sequenced. The current cost of sequencing a human genome is a few hundred dollars. If everyone was sequenced, irrespective of any medical condition, and the data was made available, this would be a step toward a more collaborative healthcare ecosystem. The total cost of sequencing all of the individuals in the US may reach up to $70 billion. However, this is a small cost when you consider that between 50 and 70% of oncology treatment is off target. Increasingly, the use of biomarker data is gaining momentum, with a significant number of research organizations routinely biobanking clinical samples.

Figure 1. May 2017 - Global Oncology Trends 2017.
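The "up to $70 billion" figure above is easy to sanity-check with back-of-envelope arithmetic, assuming a US population of roughly 330 million and a per-genome sequencing cost of about $200 (both illustrative round numbers, consistent with the "few hundred dollars" quoted above):

```python
# Back-of-envelope check of the sequencing-cost figure.
# Both inputs are illustrative assumptions, not official statistics.
us_population = 330_000_000   # roughly the US population
cost_per_genome = 200         # dollars, "a few hundred dollars" per genome

total_cost = us_population * cost_per_genome
print(f"${total_cost / 1e9:.0f} billion")  # prints "$66 billion"
```

At $200 per genome the total comes to about $66 billion, which lines up with the article's "up to $70 billion" estimate; at $300 per genome it would be roughly $99 billion, so the quoted ceiling implies a per-genome cost near the low end of "a few hundred dollars."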
Advances, Complexity and Cost

By maintaining a centralized repository of sequencing data and associating it with the clinical healthcare record of each individual, it will be possible to make better clinical decisions across the entire patient population more effectively than ever before. Not only will an individual be able to understand whether a suggested course of treatment is predicted to be effective, using empirical data, but the healthcare provider and the insurance payer will also be able to optimize constrained financial and scientific resources.

Figure 2. Biomarkers are increasingly being used to segment cancer in selected tumors.

Furthermore, an attractive side effect of a centralized repository of clinical and scientific data is the possibility of fewer medical negligence and malpractice suits, thanks to the new levels of data transparency on offer. By introducing new capabilities to broker and share data in a secure way, the potential to use this data for secondary purposes can also be explored. In this regard, the FDA is looking at Blockchain to support patient consent use cases throughout the clinical data generation, analysis, and submission lifecycle. This, of course, also has extraordinary implications, as clinical trials would routinely generate volumes of genomic data orders of magnitude larger than what is collected and analyzed today.

Next Steps

The FDA is preparing for changes. It is anticipated that genomic data will become a routine part of the clinical dataset. Sequencing data will be generated for each individual participating in a trial and associated with the resulting clinical dataset, which will ultimately belong to the patient. The patient will have control over all of his/her data and will be able to provide consent for secondary use across an ecosystem of clinical data consumers.
The FDA wants to build a marketplace in which patients will be able to share data using a common data platform that incorporates balanced incentives across the participating stakeholders. The platform would allow patients complete mobility to move their healthcare records from one healthcare provider to another. It is envisaged that a patient should be able to have complete geographic mobility, yet maintain a single clinical and healthcare data profile on one technology platform. Using new technology, such as smart contracts and Blockchain, clinical data could be brokered to improve clinical and healthcare decision-making for the wider population.


Life Sciences

The Clinical Data Management Hurt Locker

I have been in the pharmaceutical industry for nearly 19 years. Most of my career was spent leading data management and database programming functions at a small oncology CRO and then, through acquisition, a larger biotechnology organization. Since I joined Oracle, I have been asked a number of times what the pain points are for a data manager, so in this article I will address this more broadly.

Clinical Data Management Pain Points

Data managers have a number of responsibilities. But at a high level, they are generally responsible for defining how clinical study data will be collected and managed, and for the intake and quality of that data throughout the trial. Their goal is to deliver high-quality data in a usable format to the appropriate downstream processes -- such as statistical analyses -- on time and within budget. While this sounds simple enough, it has become an increasingly challenging task for a number of reasons, such as changes in technologies, processes, and regulations, and the growing complexity and diversity of the data. To manage all of this, data managers must have excellent project management skills to coordinate timelines and tasks across various functional areas, while working on multiple studies with competing priorities and deadlines. They must also be technically adept in order to spec out -- sometimes even build and validate -- the data collection systems, as well as edit checks, data review programs, and other data outputs. Finally, to be successful, they must understand the therapeutic areas and trials that they work on, in order to define, understand, and clean the data in an appropriate way. Now that you have a broad view of the responsibilities and difficulties a data manager faces, let’s focus on one of the areas in which Oracle solutions can help -- data quality. Can you identify the object in the picture below? Yes, this is a ruler. But more specifically, this is my data cleaning tool.
It is a clear ruler with several sticky notes. I have very fond memories of sitting with my colleagues, sequestered in a conference room for days -- sometimes even weeks -- at a time, performing listings review. Edit checks are great for finding data-point-specific issues. But some data quality issues can only be seen by looking across the data, such as finding potential adverse events that were not reported by looking at lab values over time. Sometimes this even requires aggregation of data across multiple data sources, such as reconciliation between data management and safety systems, or combining local lab, central lab, and lab normal range data. Whatever the case, this process generally starts with the data manager defining a data or listings review plan. In this kind of plan, multiple types of reviews may be described, such as medical monitor review, SAE reconciliation, safety review, etc. Each type of review generally has a number of review items associated with it that describe what the reviewer should look for in the listings. Once the plan is defined, programmers develop the listings. While there may be some standardization, programming the listings can be very time-consuming and resource-intensive. As a result, this level of data cleaning typically doesn’t happen until quite some time after patient enrollment, which can mean systematic issues are not found in a timely manner. Once the listings are available, that’s when sequestration and the use of the handy listings review tool depicted above begin. While some organizations try to streamline the review by providing the listings in Excel, many still review on paper listings. The font is small, and one patient’s listings can be hundreds of pages. Each page may have dozens of records, and reviewers typically need to compare data across multiple pages or sections to find the discrepancies (e.g., comparing concomitant medications to adverse events). This is where my listings review tool was very handy.
I could move the sticky notes to “filter” out columns not needed for whatever I was looking for, and slide the ruler up and down to focus on specific records. While Excel may make it easier to sort and filter down to specific records and columns, it is still quite tedious to do cross-listing reviews. Once discrepancies are identified, they then need to be documented with clear provenance, and queries need to be generated. This means logging into the source system(s), finding the source record(s), and entering each query individually. As one can imagine, this is an extremely time-consuming, manual process, and it often requires a lot of follow-up with the original reviewer just to understand the issue. Because the timeframe for this process is so lengthy, what the reviewer saw when he or she found the issue in the listings may no longer match what’s actually in the source. Overall, this process is extremely tedious and fraught with problems, compounded by the fact that it is not a once-and-done activity over the course of the study.

How Oracle Can Help

The Oracle Health Sciences Data Management Workbench (DMW) product can significantly reduce these pain points for data managers, as well as provide benefits to downstream stakeholders. Within DMW, data can be consumed from multiple source systems via direct and/or file-based integration. It not only supports the creation of custom listings to address study-specific needs, but also provides templates for standardizing different types of data reviews and output types (e.g., SDTM). Data can be aggregated across multiple sources, and validation checks can be written and reused to further reduce manual effort. Data review can be performed directly through the web interface, allowing for sorting and filtering, as well as for generating discrepancies directly on the data point(s) as they are identified.
All provenance is automatically captured by the system, as is traceability of the data back to its source, even through complex transformations. With direct integration to an EDC system such as Oracle Health Sciences InForm, existing queries are visible within DMW, and new discrepancies can automatically be written back to the source as well. With these capabilities, data managers have near real-time access to the data and are able to identify and address issues more quickly and consistently, resulting in higher-quality data and a more streamlined data flow. Further, DMW is a flexible platform that can grow with its users as they encounter new data sources and challenges in the future.
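To illustrate the kind of cross-source check described in this post -- flagging lab values that exceed their normal range with no corresponding adverse event reported -- here is a minimal sketch in plain Python. The record layout, field names, and values are hypothetical illustrations, not DMW’s actual data model:

```python
# Flag out-of-range lab results for subjects who have no adverse event
# (AE) on file. All records and field names below are hypothetical.
labs = [
    {"subject": "1001", "test": "ALT", "value": 210, "high": 55},
    {"subject": "1001", "test": "ALT", "value": 40,  "high": 55},
    {"subject": "1002", "test": "ALT", "value": 300, "high": 55},
]
adverse_events = [
    {"subject": "1001", "term": "Hepatotoxicity"},
]

subjects_with_ae = {ae["subject"] for ae in adverse_events}

# An out-of-range lab value with no AE on file is exactly the kind of
# discrepancy a reviewer would otherwise find by paging through listings.
discrepancies = [
    lab for lab in labs
    if lab["value"] > lab["high"] and lab["subject"] not in subjects_with_ae
]
for d in discrepancies:
    print(f'Subject {d["subject"]}: {d["test"]}={d["value"]} exceeds {d["high"]}, no AE reported')
```

Running this flags subject 1002 only: subject 1001 also has an elevated value, but a matching adverse event is already on file, so no query is needed.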


Health Sciences

Innovative Technologies for Clinical Trials Showcased at Oracle Industry Connect 2018

Conference by and for professionals emphasizes patient-centered approach for industry

Digital transformation is sweeping the health sciences industry, enabling companies to accelerate the pace of innovation. At the fifth annual Oracle Industry Connect conference, held in April 2018 in New York, attendees from all over the globe had an unprecedented opportunity to learn from experts -- and one another -- about the latest innovations driving that transformation in key areas, including clinical trials. As a customer-driven conference, OIC delivered insights from industry peers facing the same challenges, through presentations, panel discussions, and networking with hundreds of fellow professionals. Input from customers led to the creation of a streamlined, two-track format for 2018. That, in turn, attracted 20 percent more attendees than last year to a more focused program, from which three strong themes emerged. Presenters discussed how data access via the cloud, combined with artificial intelligence and machine learning, is delivering faster, better, and more actionable insights for researchers and investigators developing and managing clinical trials. A second strong focus was on how mobile technologies, and the integration of their data, are becoming a vital component of clinical trial modernization. Finally, speakers examined how technologies of today and those on the horizon will enable companies to fulfill their promise to take a truly patient-centered approach -- one that extends from research into new treatments, through more efficient clinical trials, all the way to care delivery. The patient-centric approach and experience is vital to this healthcare future, as noted by keynote speaker Jessica Melore.
As someone who has experienced more serious medical challenges than most people face in a lifetime -- and survived in part thanks to becoming involved in a key clinical trial -- Melore communicated her fierce advocacy for patients as the focal point of the industry. To her, it is vital that the industry significantly increase clinical trial awareness and participation among patients. As attendees heard during the program, the cloud is vital to realizing that patient-focused approach to clinical trials and to the process of creating innovative treatments. As precision medicine and gene-based therapies for serious diseases and chronic conditions come to dominate R&D, they create a much more challenging clinical trial environment. The ability to collect and analyze information from the broadest range of data sources enables companies to gain insights and make increasingly intelligent decisions about the design and operation of clinical trials. The same cloud-based tools are key to finding and recruiting participants for increasingly specialized patient populations -- as well as to making it easier for patients to participate, regardless of their location. That, in turn, can dramatically decrease time to market, helping patients sooner and reducing business risk. Mobile technology is a major component of this change. Wearable devices, from specialized sensors to smart watches, are becoming commonplace, providing a wealth of information. Patients can use that data to help manage their care, while providers and payers can analyze individual data to make better diagnoses, prescribe more effective treatments, and help ensure compliance and better outcomes. In the clinical trial setting, mobile devices can provide a crucial link between participants and researchers and offer new and more flexible ways to design trials.
Mobile devices also represent another enormous source of data to be added to the cloud’s intelligence and analyzed in aggregate for insights before, during, and after approval. Technologies on the horizon could also play a huge role in health sciences in the near future. Blockchain, for example, offers an entirely different approach to the security and validation of data. It could make it possible for clinical trial data to be shared more efficiently and much sooner, with partners and regulators alike, offering the potential to reduce the time needed to process and evaluate trial results. That could shave months or even years off the time it takes a new treatment to reach patients. The same technology has the potential to change the business model of the industry as well, by making it possible for patients to maintain ownership and control of their data. Forward-thinking companies -- those that understand the importance of being patient-centric -- are already starting to think about how all these technologies will drive their digital transformations and deliver business value as well. Future OIC conferences will continue to be important meeting places for sharing innovative ideas and networking among global peers who are advancing the industry for the benefit of patients.


Health Sciences

Social Media Data Mining for Clinical Trial Safety Insight - Part 1 in a Series

Today, we have come to define Big Data as encompassing social media: the deluge of tweets, Facebook messages, forum posts, videos/vlogs, and blogs, along with ever more finely tuned modalities such as Pinterest, Snapchat, and Instagram posts. However, while we now know where to get the information, we still do not have a clear, defined set of rules on what to do with it in a pharmacovigilance -- and broader life sciences -- context.

Early Social Media for Life Sciences

In 2007, I was eager to present at a newly christened “Social Media Summit” and several similar conferences. We were all at the infancy of this new media. I had begun to champion this “new media,” as it was labeled, as another repository for life sciences outreach and for understanding trends, if not a source of meaningful data. The digital life sciences community was still skeptical of this information, citing a lack of guidance on how to deal with it and a general disappointment in the statistical strength of the insights to be gleaned. Some of the presentations I led at the time carried shiny, attention-grabbing titles such as “Energizing the Drug Safety Practice with the Use of New Media,” “Mitigating the Current Drug Safety Spotlight via New Media/Social Media Initiatives,” or even the later pharmacovigilance-branded “Casting the Net on Social Media: Pharmacovigilance Mining of the Web.”

Social Media's Role in Today's Clinical Research

Much has changed in the past decade of dealing with social media. First, we know now that we cannot ignore it. Doing so just postpones the need for a good internal policy to incorporate its strengths into our common lexical resources. Second, we now realize that it is not simply a matter of putting resources at the forefront. No, it is much more involved.
It requires assistance from bots, Natural Language Processing (NLP) toolkits, and general machine learning approaches to gather the relevant articles of importance from the growing amount of social media data. Mining social media is the topic du jour, and there are plenty of approaches for conducting this activity. Cast a search across the Internet on social media mining in life sciences, or the narrower pharmacovigilance use of social media, and you will obtain plenty of results. All of these results point to something that should be part of a pharmaceutical organization's game plan -- a way to interact with its constituents, the general consumers. The past decade has provided many examples in which “good customer engagement” advice was ignored, or inadvertently marginalized, and a company's reputation took a serious media hit. So, short of ignoring this medium, what actions should a growing pharmacovigilance department take with regard to social media mining, and how much importance (i.e., how many resources) should be placed on this activity? The answer is not absolute. It depends heavily on which of the following areas means the most to the organization: regulatory obligations/compliance, adverse event/misinformation engagement, and market awareness/brand strength.
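At its simplest, the kind of triage described above -- filtering a stream of posts down to those worth a pharmacovigilance analyst’s attention -- can begin with keyword matching before any heavier NLP or machine learning is applied. A minimal sketch, where the keyword list, drug name, and sample posts are illustrative assumptions:

```python
# Naive first-pass triage of social posts for possible adverse-event
# mentions. Real pipelines layer on NLP: negation handling, entity
# linking, deduplication, and classifier-based relevance scoring.
AE_KEYWORDS = {"rash", "nausea", "headache", "dizzy", "side effect"}

posts = [
    "Loving the new phone I bought today!",
    "Started DrugX last week and now I have a terrible rash",
    "Anyone else feel dizzy after taking DrugX?",
]

def flag_possible_ae(text: str) -> bool:
    """Return True if the post mentions any adverse-event keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in AE_KEYWORDS)

flagged = [p for p in posts if flag_possible_ae(p)]
```

Here the second and third posts would be routed to a reviewer while the first is discarded; the value of the more sophisticated approaches discussed in this series lies in cutting the false positives such a naive filter inevitably produces.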


Life Sciences

SCOPE 2018 ReCap - Tackling Some of the Most Pressing Issues in Clinical Development

From February 12th to 15th, more than 1,500 industry professionals from all over the country and the world gathered in Orlando, Florida, to attend this year’s Summit for Clinical Ops Executives (SCOPE). Leaders from pharma, biotech, clinical research organizations (CROs), and technology partners joined together to discuss the most pressing topics impacting the current state of clinical development. A few common themes kept bubbling to the top throughout the various sessions: innovation, patient centricity, real world data/real world evidence, wearable devices/mHealth, Blockchain, and AI/machine learning. The common denominator across all of these increasingly relevant topics was the challenge of how to incorporate each piece into the context of the clinical development lifecycle. In fact, the SCOPE conference structure was organized around tracks that reflected the lifecycle, and these topics were “sprinkled” in that context among those tracks.

Wearable Devices, the Ever-Growing Technology Category

It was interesting to hear the pros and cons of incorporating wearable devices -- a prevalent theme across many tracks -- into trials, and their impact on the patient, investigator sites, sponsors/CROs, and the trial itself. On the positive side, wearable devices provide a wealth of patient data and a convenient way for patients to participate in a trial. But on the other hand, they introduce a new set of challenges, including: How many mobile devices (an iPad, an iPhone, their own mobile device) does one patient need to maintain to make a wearable device work? How costly and resource-intensive is it for the sponsor company to supply such devices and collect the data? How do sites manage all of these devices and mobile apps? How can the data collected be disseminated and used effectively? The discussions showed that this is the direction clinical trials are moving.
However, it was also noted that devices alone won’t be the future of clinical development. Devices will be used in addition to traditional methods of patient data collection and will provide additional, valuable insight.

Real World Data and Real World Evidence

In addition to the increasing volume of data from mHealth devices, there was significant discussion around the incorporation of real world evidence and real world data into the clinical development process, and how these additional data streams can help facilitate the advanced use of AI and machine learning in the clinical trial lifecycle. It’s an exciting time, because there is now enough data out there to leverage this type of technology. But there are still questions around how to incorporate real world evidence/data into the trial process. As the convergence of clinical and real world data becomes more of a reality, AI and machine learning will take on a bigger role.

Patient Centricity

At the core of most discussions was the vital importance of patient centricity. As technology advances, data sources grow, and processes change, the human element of the clinical trial process must be preserved. This perspective rang loud and clear. During one session, there was a discussion about not using technology for its own sake (i.e., mobile devices and mHealth devices); the patient experience must be considered before adopting new technologies too quickly. Admittedly, some sponsor company representatives said they were quick to use mobile devices before thinking about the patient/user experience, and that adoption of the devices was low or that patients opted out of their use. The bottom line was clear: the patient must be at the center of the plan, and technology is only great if it is used.

Blockchain

The Blockchain topic was also related directly to the patient centricity theme. What if patients could collect their own EMR and EHR data from their healthcare providers?
In essence, the patient would be curating his/her own healthcare data (see the February 2018 Life Sciences newsletter on the Apple Health app). Patients would then opt in to a clinical research Blockchain, to which their data would be pushed and de-identified. The research community managing that Blockchain would perform many types of analysis across all the patients in it. Analyses such as protocol feasibility, patient recruitment, outcomes for payer reimbursement, needs assessments, and many other real world data use cases could be driven successfully from such a treasure trove. A key question is: why would patients opt in to the Blockchain? While there are many motivational approaches, at the core it will likely involve some type of payment. What if a patient could submit his/her healthcare data, and the data could be analyzed for its “value” to the clinical research consortium before the patient commits? What if the amount a patient is paid could be a function of his/her agreement to provide the data over various periods of time? There are many interesting approaches that can be taken here. Something to remember is that the average cost of recruiting a patient for a Phase 2 or 3 clinical trial is approximately $40,000. So, there is money out there that can be used to motivate participation.

SCOPE 2018, a Great Platform for Learning

There are very exciting things happening across the clinical development lifecycle, and the future holds the possibility of bringing life-changing drugs and devices to market faster and more effectively. But with these exciting developments come some very difficult challenges to overcome. The good news is that the industry leverages events like SCOPE to share ideas, best practices, key learnings, and thoughts on the future of clinical development. As a result, everyone benefits from these valuable discussions. Overall, SCOPE 2018 enabled some very dynamic and interesting discussions.
The entire Oracle Life Sciences team looks forward to seeing what the future holds at next year’s conference. Greg Jones is Enterprise Strategy Architect, Life Sciences, for Oracle Health Sciences.


Health Sciences

NYC Biotech - On the Leading Edge of the Health Sciences Transformation

We’re living at a truly remarkable time in the health sciences. Our technology has made it possible for us to see individual molecules before and after reactions; to gather, manage, and analyze health-related data at lightning speed; and to share it all instantly with worldwide teams via the cloud. And these tech advantages, in turn, fuel greater insight into disease, new research data, and actionable diagnoses to help save more lives. From my position in the health sciences, I sometimes feel like I’m on the leading edge of a data wave traveling at light speed. Biotech incubator programs in and around New York City are helping to ignite and drive this transformational wave. These include New York University’s Stern Creative Destruction Lab (CDL), Johnson & Johnson’s JLabs@NYC, and the Center for Biotechnology at Stony Brook University. The ideas and technologies generated within these scientific birthplaces may change the way new drugs are found, how healthcare systems operate, and even what is considered the pace of finding cures.

NYU’s Stern Creative Destruction Lab

In partnership with the University of Toronto’s Rotman School of Management, the CDL is helping early-stage science startups push the current boundaries of medical science. Some of these startups include Atomwise, using artificial intelligence for drug discovery; Deep Genomics, creating technology that predicts cell activity; and Bridge Genomics, providing insight on genetic interactions. Atomwise’s AtomNet uses artificial intelligence to help discover drugs. AtomNet is the first deep convolutional neural network for structure-based, “rational” drug design. It helps to design a new drug molecule by using an algorithm to analyze and mimic structures within a targeted disease molecule. This type of drug discovery has the potential to dramatically reduce drug discovery timelines and costs (currently averaging $2.5 billion and up to 15 years).
For the past two years, AtomNet has been exploring issues in cancer, neurological diseases, antivirals, and antiparasitics. Antibiotic molecules predicted by AtomNet have generated positive results in animal studies. Deep Genomics, with expertise in machine learning, genome biology, and precision medicine, is inventing new computational technologies that can predict a cell’s activity when its DNA is altered by genetic variation, whether natural or therapeutic. Bridge Genomics is pioneering a new standard for medical diagnoses with its new technology, BridGE. BridGE searches for and identifies disease-specific genetic interactions that link large sets of human genes. This technology could help to identify how combinations of different mutations in genes make people more susceptible to certain diseases and other inherited traits. The goal of Bridge Genomics is to improve diagnostics and therapeutics via genetic data analysis.

JLabs@NYC

Johnson & Johnson Innovation LLC, in collaboration with New York State and the New York Genome Center, is currently launching JLabs@NYC for biotech, pharmaceutical, medical device, and consumer health companies. One of the first companies to join the incubator is Certadose, maker of a syringe designed for children to ensure correct drug dosages in emergency situations. Given that in the US approximately 7,000 children die and 140,000 children are injured each year due to dosing errors, Certadose uses the Broselow® color system, basing dosage on a child’s height and weight, to calculate the effective but not harmful amount of medication to give a child in crisis. Certadose’s FDA-approved syringe has been proven to reduce critical errors to zero.
Center for Biotechnology at Stony Brook

At the Center for Biotechnology at Stony Brook, Celmatix, a next-generation women’s health company, has developed Polaris, which uses big data -- including age, DNA information, body mass index, sperm parameters, and hormone metrics of individual couples -- to help predict whether they will be able to conceive. Described by Forbes as a tool that helps more women get pregnant, Polaris gives individual couples new hope about having children. In the US, one in eight couples has difficulty conceiving a child. Celmatix has also created Fertilome, which tests for specific genes that might work against a woman’s ability to conceive. It is now also working with 23andMe to identify related genetic reasons why a woman may be having trouble conceiving. All in all, New York is proving its relevance and importance in this transformational age of gene therapy, biotech advances, and precision medicine. It’s an exciting time to be in biotech! This is the third installment in our series celebrating New York life science innovation. The series is in anticipation of our exclusive, invitation-only executive conference, Oracle Industry Connect, bringing together industry leaders to share deep domain expertise, insights, and industry-specific best practices. Oracle Industry Connect takes place at the New York Hilton Midtown, April 10-11, 2018.


Life Sciences

Bitcoin’s Blockchain Technology and Trusted Clinical Trials

Bitcoin, a cryptocurrency and worldwide payment system, is currently the darling of the investment community and is trading at extremely high levels. One driver of this trend is the promise of Bitcoin’s underlying technology, Blockchain, which is beginning to be cited in use cases well beyond Bitcoin currency. Blockchain addresses the problem of shared trust in a cryptographic manner. It allows users who don’t necessarily trust each other to interact securely and with full trust. Today, in the world of financial transactions, this trust management issue is handled by organizations like VISA, MasterCard, SWIFT, etc. Blockchain removes the need for these types of intermediary organizations through a distributed ledger concept.

How it Works

With Blockchain, multiple entities each hold a copy of the distributed ledger, and the technology keeps the ledger in sync across those entities. Specific contracts are set up in the distributed ledger that define the transaction rules. These rules must be followed consistently by each entity using the distributed ledger whenever a new transaction is conducted. This adherence to specific contract rules within the distributed ledger is what enables Blockchain to provide full trust without an intermediary entity. However, like every other technology out there, Blockchain is not perfect. One of its biggest issues is settlement delay. As mentioned earlier, the ledger is distributed, which means that before a transaction can clear, each node on the network must process it. As more nodes are added to the network, more latency is introduced across the nodes. This can cause the time it takes to clear transactions to grow substantially!

Blockchain and Clinical Trials

Blockchain technology has real potential to support clinical research by preventing fraud in scientific literature and regulatory submissions.
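The tamper-evidence underlying the distributed ledger described above comes from hash chaining: each block’s recorded hash covers the previous block, so altering any earlier entry breaks every later link. A minimal single-node sketch in Python (a real blockchain adds consensus across nodes, digital signatures, and smart contracts; the supply transactions shown are illustrative):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: str) -> None:
    """Append a block that records the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transaction": transaction})

def verify(chain: list) -> bool:
    """Tampering with any earlier block breaks every later link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, "Ship 40 kits to Site 117")
append_block(ledger, "Dispense kit K-0042 to Subject 1001")
assert verify(ledger)

ledger[0]["transaction"] = "Ship 400 kits to Site 117"  # tamper with history
assert not verify(ledger)
```

Because every party holds a copy and can run the same verification, no single intermediary has to be trusted to vouch for the history.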
In addition, Oracle Health Sciences is carefully watching another potential key use case: leveraging the technology to manage supplies in clinical trials across sponsor, CRO, and trial site organizations. This use case specifically complements Oracle’s new, industry-ground-breaking randomization and trial supply management (RTSM) solution, Clinical One Randomization and Supplies Management Cloud Service, part of our Clinical One Platform. Today, supplies management for clinical trials is handled by a sponsor-controlled team using a single, centralized database maintained by the sponsor throughout the clinical trial, per regulatory requirements. The database tracks supplies provided to trial sites and restocks the sites based on patient visit schedules during the trial. At trial conclusion, the database also tracks the recovery of any unused supplies and, ultimately, their destruction or other planned end. It’s possible that Blockchain technology could substantially reduce the amount of people power needed to do all this work. Fortunately, there is a relatively new Blockchain cloud service offering in the Oracle Platform as a Service (PaaS) suite. Its core value proposition, based on the Hyperledger Fabric implementation, is the ease of spinning up a full environment with just a credit card swipe, in a pay-as-you-go arrangement with Oracle managing the Blockchain environment on an ongoing basis. Oracle PaaS provides full elasticity support, plus some investment by Oracle around the core Hyperledger Fabric implementation. This allows overall ease of administration and manageability of the environment, as well as optimizations of the implementation. Oracle PaaS also includes a RESTful service layer wrapper for ease of integration with enterprise applications, and addresses the critical area of scaling consensus to reduce backlog.
Imagine a large Sponsor organization spinning up a Blockchain-based system for its CTMS business processes at its thousands of clinical research sites.  Now imagine having Oracle provide the technology infrastructure to build the system and the ongoing maintenance, upgrades, and security patching for the lifetime of the system. Now, finally imagine all this integrated with Oracle Clinical One Randomization and Supplies Management Cloud Service for full CTMS business process support. We’ll continue to watch this exciting Blockchain movement and provide updates as the trend advances.  


Life Sciences

An Update on Programmatic Access to CDISC Standards

During the past five years, I've been intimately involved with the Shared Health And Research Electronic Library (SHARE), a set of technologies the Clinical Data Interchange Standards Consortium (CDISC) uses for developing and managing industry clinical data standards. Part of SHARE is a metadata repository that has the standards broken down into bits of metadata. Access to the SHARE repository is limited, but the CDISC publishes the standards freely on its download site in a number of formats, including XML and RDF. In late November 2017, while attending the annual CDISC International Interchange in Austin, I was invited to participate in a SHARE API design workshop. Over the last couple of years, the CDISC has been working on providing programmatic access to the standards to help industry teams implement them more efficiently and to support business process automation. Thus far, the CDISC has only provided access to its API v1.0 through a paid Early Adopter Program. However, in the CDISC's recently released fourth-quarter newsletter, President and CEO David Bobbitt announced that, starting this month, the CDISC will provide access to the SHARE API v1.0 at no additional cost to its Platinum Members to increase adoption and obtain more feedback for its next iteration, SHARE API v2.0. This is an exciting step for the CDISC and for the industry. But what does this mean for Oracle? As with most of the industry, Oracle has typically downloaded CDISC standards, then interpreted and implemented them in a number of ways. Oracle uses Clinical Data Acquisition Standards Harmonization (CDASH) standards in Central Designer libraries, as well as CDASH and Study Data Tabulation Model (SDTM) standards as models in Oracle Health Sciences Data Management Workbench (DMW). Downloading and implementing the standards can lead to inconsistent interpretation and inefficient processing, due to manual processes.
Each time a new version of the standards is released, the manual processes are not only repeated, but become even more complex.  This is due to the downstream impact (e.g., usage across many studies, downstream mappings from CDASH-based metadata to SDTM, etc.). Having programmatic access to the standards can increase consistency and reduce the effort it takes to maintain these libraries and models across all Oracle Health Sciences customers. I hope to have updates on our progress in future articles. In the meantime, if you have questions regarding our involvement with CDISC or usage of the standards, please feel free to reach out to me at  julie.s.smiley@oracle.com.
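To make the idea of programmatic access concrete, here is a sketch that parses a standards payload into a simple lookup table a tool could consume. The JSON shape below is invented purely for illustration; real SHARE API responses are defined by CDISC and may differ:

```python
import json

# Hypothetical response shape, for illustration only.
SAMPLE_RESPONSE = """
{
  "standard": "SDTM-IG",
  "version": "3.2",
  "domains": [
    {"name": "DM", "label": "Demographics",
     "variables": [{"name": "USUBJID", "label": "Unique Subject Identifier"},
                   {"name": "BRTHDTC", "label": "Date/Time of Birth"}]},
    {"name": "AE", "label": "Adverse Events",
     "variables": [{"name": "AETERM", "label": "Reported Term"}]}
  ]
}
"""

def index_variables(response_text: str) -> dict:
    """Build a {domain: [variable names]} index from a standards payload."""
    payload = json.loads(response_text)
    return {d["name"]: [v["name"] for v in d["variables"]]
            for d in payload["domains"]}
```

With an index like this, a library or model definition can be generated from the standard itself instead of being re-typed from a downloaded document each release, which is where the consistency gains come from.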


Life Sciences

Electronic Regulatory and Study Binders

Historically, clinical trial sites have maintained very large paper study binders containing all the regulatory documents they must maintain for a trial. These paper binders contain documents that must be submitted to sponsors/CROs during study start-up and periodically reviewed by clinical research associates (CRAs). Having paper copies of these binders has meant that sites are often burdened with scanning documents or emailing document copies to a sponsor or CRA. Maintaining paper binders also means that CRAs can review the binder documents only manually and only when they are at the site locations. This lack of flexibility makes the situation difficult for CRAs. The move toward eRegulatory binders holds the potential for significant improvements in the study start-up process, both for investigator sites and for sponsors/CRAs. For sites, use of an electronic regulatory binder allows clinical teams to create their own templates and to provide remote access to sponsors and CRAs. For CRAs and study start-up teams, this access reduces document exchange (via mail, email, or scan) and provides real-time access to required documents for study start-up and ongoing review. The ultimate resting place for investigator site regulatory binder documents is an Electronic Trial Master File (eTMF) maintained by sponsors, or by CROs on their behalf. Surprisingly, of all the vendors surveyed, none had considered any integration capabilities to move documents to another system such as an eTMF. The eRegulatory binder tools did not include APIs or other mechanisms to extract the data. This is a key missing area that would provide even more efficiency and business automation for sponsors and CROs.
Imagine a site team able to create its own document templates, make those templates accessible to CRAs, and give CRAs the ability to send those documents to an eTMF system automatically, with no scanning, emails, or exchange of documents. This could significantly decrease the time it takes to get sites approved for study start. While the eRegulatory vendors do not seem to be thinking this way yet, this is the type of innovation the industry should strive toward with our new Oracle Health Sciences Clinical One eClinical platform. This effort would enhance both the user experience and the business process, providing greater efficiency gains to our customers.
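The missing integration could start with something as simple as a document-transfer payload. The sketch below shows one possible shape; the API, class, and field names are hypothetical, invented to illustrate the idea rather than describe any vendor's interface:

```python
from dataclasses import dataclass
import hashlib

@dataclass
class BinderDocument:
    """A regulatory binder document held at the site (illustrative)."""
    site_id: str
    study_id: str
    doc_type: str  # e.g., "CV", "IRB approval letter"
    content: bytes

def build_etmf_transfer(doc: BinderDocument) -> dict:
    """Package a binder document for a hypothetical eTMF ingestion API.

    The checksum lets the receiving eTMF verify the file arrived intact,
    replacing the manual scanning and emailing described above."""
    return {
        "site_id": doc.site_id,
        "study_id": doc.study_id,
        "doc_type": doc.doc_type,
        "checksum_sha256": hashlib.sha256(doc.content).hexdigest(),
        "size_bytes": len(doc.content),
    }
```

Even a minimal contract like this (identifiers plus an integrity checksum) would let an eRegulatory binder push documents into an eTMF without any human in the loop.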


Health Sciences

eSource from the Site Team Perspective

Recently, I had the opportunity to host an eSource session at the Society for Clinical Research Sites (SCRS) Global Summit. Unlike most industry events, the Summit focuses solely on investigator sites. The majority of attendees are from investigator sites, with pharmaceutical companies, contract research organizations, and other industry vendor groups also represented. There are no formal presentations during breakout sessions. Instead, SCRS promotes open dialogue among all attendees. There were close to 30 site representatives participating in the eSource discussion. From the discussion, it was clear that though the topic is not new, and though many vendors are venturing into the space, eSource is still new to many investigator sites. Of the 30 sites represented in the session, only five had any type of eSource system. Some key takeaways included: Site teams are struggling to understand what eSource is and whether it requires investment. Many questioned how eSource differs from an electronic medical record (EMR) system. They did not understand why they couldn't simply update or modify their EMR systems to capture clinical study data (questionnaires, etc.) that is not part of their EMR. They also wondered how, if eSource systems were used, EMR data would be viewable in the eSource system without re-entry. They also worried about the integration and workflow processes that would be required between EMR and eSource, and between eSource and EDC systems. Notably, some eSource vendors in the session indicated that EMR systems might not meet key requirements, such as Part 11 compliance, and should be investigated. Site teams strongly prefer to select and implement their own eSource system. They do not want sponsors to dictate which eSource applications to use. They are concerned that if sponsors were to select eSource systems much like EDC systems, each site could end up with 10 to 15 eSource systems to learn, manage, and maintain.
Site teams want to provide input on requirements for eSource application providers. Many expressed concern over requirements that may not have been considered in current eSource systems. These requirements include: mobile enablement, edit checks, the ability to note/flag items for Principal Investigator review, the ability to look at data over time, and, importantly, the ability to enter data offline in the event of an internet failure during a subject visit. Do you use an eSource system? What has been your experience? Let us know.


Health Sciences

Semantics in a Syntactic World

In the world of technology, syntax has long had a clear purpose and wide understanding. Semantics, on the other hand, is not as commonly understood. In this article, I will illustrate the differences between syntax and semantics, and how they apply in terms of technology. Syntax represents the structure of how statements are formed. This applies to both human and programming languages. In human language, syntax is the grammatical structure of a sentence, whereas in a programming language, syntax is the set of rules and structure for forming statements, such as queries. Semantics is the meaning of a statement, versus the structure of it. But semantics also takes context into account, because the meaning of a statement can change or be inferred from the context. From a human language perspective, let's look at this quote from Groucho Marx: "Time flies like an arrow; fruit flies like a banana." In both statements, the syntax is fine. These are grammatically correct sentences. However, if we take each word to mean the exact same thing, regardless of context, then bananas have wings or there are time-telling flies that have an affinity for arrows. This example illustrates how we interpret meaning based on context. The same can be said for how we use technology. But often, we focus only on syntax, resulting in hard-coded questions and limited results. Let's take a look at an example of how these concepts apply to technology. If we want to find all patients in a study who had a lower extremity fracture, we would run a query against the data to find any adverse events coded to "lower extremity fracture". (The coding process has already taken into account differences in spelling and how the condition was reported.) However, this approach can still limit results because it may leave out knee fracture, ankle fracture, hip fracture, etc. If we don't explicitly query for these, then we will only find a subset of the data that meets the intended need.
But, if we add a semantic layer (e.g., an ontology) and run a semantic-based query (e.g., SPARQL), the technology could infer that knee, ankle, and hip fractures are all types of lower extremity fractures and, therefore, return all results that meet the criteria. This is just one small example of how semantics can be used to extend our ability to get to the correct insights. While syntax is a well-understood concept within the technology world, semantics will have an emerging and significant role moving forward. Have you used semantic-based querying to clarify your clinical data searches? Let us know.
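The inference step can be mimicked in a few lines of Python. This is a toy stand-in for what an RDFS/OWL ontology queried via SPARQL would do automatically; the term hierarchy below is invented for illustration, not taken from any medical dictionary:

```python
# Tiny "ontology": child term -> parent term (subClassOf relationships).
SUBCLASS_OF = {
    "knee fracture": "lower extremity fracture",
    "ankle fracture": "lower extremity fracture",
    "hip fracture": "lower extremity fracture",
    "lower extremity fracture": "fracture",
    "wrist fracture": "upper extremity fracture",
    "upper extremity fracture": "fracture",
}

def is_a(term: str, ancestor: str) -> bool:
    """Walk the subclass chain, mimicking rdfs:subClassOf inference."""
    while term in SUBCLASS_OF:
        term = SUBCLASS_OF[term]
        if term == ancestor:
            return True
    return term == ancestor

def semantic_query(events: list, concept: str) -> list:
    """Return events coded to the concept OR any of its subtypes."""
    return [e for e in events if e == concept or is_a(e, concept)]
```

A purely syntactic query for "lower extremity fracture" would return nothing from a list of knee, wrist, and hip fractures; the semantic version returns the knee and hip fractures because the ontology says they are subtypes.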


Health Sciences

New Tools for Health Sciences That Can Transform the Industry – Part 2

A variety of digitized data tools are currently enabling health professionals to utilize technology to assist in the management of routine activities. When leveraged, these tools can elevate a healthcare organization from one operating at an industry-best level to one that performs at a transformational pace. Part 1 of this series began to describe data automation as one of several new tools available to healthcare professionals to raise the standards of efficiency within their organizations. Here, Part 2 explores data mining tools and intelligent bots.

Data Mining and the FDA Data Mining Council

For healthcare and life sciences domains, the need for real-time data mining is crucial, though presently it is not a fully automated task. If technology could be upgraded along with the near elimination of labor resources, the health sciences would have an automated data mining system. Such a system is in its early stages at the Food and Drug Administration (FDA). The FDA Data Mining Council (DMC) was formed in 2007. A collaborative group, the DMC explores methods and best practices recommended by experts from other federal agencies, industry, and academia, all of whom have analogous experience in knowledge discovery through various data mining approaches. The Council serves as a forum for FDA scientists to share their experiences and challenges in analyzing data contained in the vast databases the FDA maintains, as well as a setting to discuss new methods for such analyses. The FDA currently receives approximately two million adverse event, use error, and product complaint reports each year from consumers, health care professionals, manufacturers, and others. Since the early 1990s, the FDA has advocated data mining to the industry in an effort to gain a better understanding of the signals within the safety data.
Now, FDA data mining experts have expanded their attention to more sophisticated data mining methods and to applying data mining to other types of product-safety-related FDA and non-FDA databases.

The Proportional Reporting Ratio

The Proportional Reporting Ratio (PRR) is the foundational concept for many disproportionality methods. However, because this method does not adjust for small observed or expected numbers of product-event pair reports, other, more advanced statistical methods are employed. These methods include the Multi-Item Gamma Poisson Shrinker (MGPS), which produces Empirical Bayesian Geometric Mean (EBGM) scores. Several FDA centers, including CDER, CBER, and CFSAN, use the MGPS algorithm for their routine surveillance activities. Various commercially available software programs generate PRR and/or EBGM scores. CDER has applied Empirica Study™ to analyze drug clinical trial data in either new drug applications or supplemental applications. Oracle Health Sciences Empirica Study™ interfaces with data that conforms to the standardized Study Data Tabulation Model (SDTM) of the Clinical Data Interchange Standards Consortium (CDISC) data standards to create a wide set of automatically generated analytical outputs and tailor-made, reusable tables and graphs. These outputs have helped reviewers provide more efficient analyses of potential safety issues in the clinical trial data of drugs approved by the FDA. The FDA notes the benefits of data mining in the areas of standard processes (because data mining is automated, the outputs are statistically objective and devoid of manual analyses), simultaneous analysis (across an entire database at once), efficiency (analyses computed in minutes), and automated signal investigations (transparency with audit trails, drill-down capability, observation of signals over time, and study of a product in populations). These tools are in the early stages of the automation efforts at the FDA.
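The basic PRR computation itself is straightforward; the sketch below computes the raw ratio from the standard 2x2 table of spontaneous reports. Note this shows only the unadjusted ratio; the shrinkage applied by methods such as MGPS/EBGM is not shown:

```python
def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR from the standard 2x2 contingency table of spontaneous reports.

    a = reports of the event of interest for the product of interest
    b = reports of all other events for the product of interest
    c = reports of the event of interest for all other products
    d = reports of all other events for all other products

    PRR = [a / (a + b)] / [c / (c + d)]
    """
    return (a / (a + b)) / (c / (c + d))
```

For example, if 10 of 100 reports for a product involve the event of interest, versus 50 of 4,900 reports for all other products, the PRR is 9.8, flagging a disproportionately reported product-event pair worth investigating.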
However, the FDA operates in a regulated environment. Several pharmaceutical companies are now experimenting with automation and artificial intelligence tools, such as bots, which are outside the purview of current regulations. Chatbots, or bots as they are known, are assisting with routine tasks, such as getting weather updates and flight messages and booking hotels. In addition, they are now answering simple health inquiries from physicians.

A First Layer of AI for Healthcare – the MSD Salute Bot

Merck Sharp & Dohme (MSD) Italy launched the MSD Salute Bot in 2016. Its first focus has been immuno-oncology, with many more areas soon to come. The Bot runs on Facebook Messenger. From the MSD perspective, physicians are digital consumers looking for relevant information for their professional activity. Key factors, such as the increase in media availability, the penetration of mobile devices, and the lessening of time available for search and analysis, are reducing the time physicians spend navigating and searching the web. Only available to registered physicians at this time, the Bot interacted with hundreds of them during the 2016 winter holidays. Bots are the first layer of accessible and deployable AI, encompassing natural language processing (NLP) on the general lexicon. Anyone can deploy a Bot via free online tools and building kits. As of now, Bots, though available for outreach activity, do not decipher complicated medical lexicons well. Currently, the Bot, no matter how strong the AI or NLP element, cannot replace human interaction for in-depth conversations on health, disease diagnosis, or exploratory conversations with a health professional. The Bot's aim is different: to relieve the health professional from answering routine questions. Bots are best employed for social media outreach activities. They fare extremely well in instances requiring a customer service orientation.
Yet, technology is advancing, and these tools will advance as well. New processing power in chip technology, coupled with advances in algorithmic analysis, will yield better tools for the healthcare and life sciences industries. AI, NLP, bots, automation, machine learning, and data mining are not new. They have been used in other industries such as manufacturing, customer service, and finance. Now, progress in those industries, together with the elimination of the requirement for extensive resources, has caught on with the health sciences. As new adopters of these technological advances share their experiences, the entire health and life sciences industry can benefit from access to a new set of transformative tools.

References:
[1] https://www.forbes.com/sites/gilpress/2016/11/01/forrester-predicts-investment-in-artificial-intelligence-will-grow-300-in-2017/#3350400d5509  Accessed 29Oct2017
[2] Data Mining at FDA, Hesha J. Duggirala. 2016. Accessed 29Oct2017
[3] Oracle Empirica™ Manual, 2015
[4] http://www.impossibleminds.com/portfolio-item/msd/  Accessed 30Oct2017

Dr. Sameer Thapar is Global Pharmacovigilance Director, Oracle Health Sciences Consulting. Linkedin.com/in/Thapar


Health Sciences

New Tools for Health Sciences That Can Transform the Industry – Part 1

Many previously labor-intensive functions in healthcare and life sciences are now becoming commoditized with the aid of technology. Digitization of data elements is allowing health professionals to utilize technology to assist in the management of activities. Deploying a database is the first step in raising the level of efficiency a healthcare organization experiences from baseline. Most solutions today have the means to raise the standards of efficiency within the organization when workflow best practices and automation configurations are implemented in line with the deployment. The result of these implementations is the transformation of the organization beyond the realm of industry best practices. There are new tools available to healthcare professionals to overcome the increasing demands that resources and budgets place on them in the management of routine activities: automation implementations, intelligent bots, and data mining tools. These are just a few of the more commonly deployed items from the list of known technologies. All these tools can immediately be leveraged to elevate the organization from one operating at an industry-best level to one that performs at a transformational pace.

Data Automation – This Year's Buzzword

Automation is this year's buzzword, but the components that make up automated processes are not well understood. Automation is simply applying rule-based algorithms to tasks. If a task is repetitive and non-changing, then it is a good candidate for automation. Automation is an excellent modality for data mining activities. Big data presents the challenge of discovering pertinent elements. Recently, big data mining has been able to advance using automation; previously, it was a laborious task to search through the records using various algorithms at each pass-through.
Automation is sometimes used interchangeably with artificial intelligence (AI), but this is not accurate. As mentioned, automation is rule-based, whereas AI is machine learning. AI enables the computer, database, or machine to generate, process, and deploy algorithms as it sees fit. The machine is "fed" data and "learns" from it. It then applies this learning to other similar tasks. If it encounters an outlier, a piece of anomalous data, it deploys a best fit for that; therefore, the AI capability exercises an "intelligence". According to Forrester Research [1], investment in AI is expected to triple this year. Tech giants such as Google, Microsoft, and Amazon are already tapping the potential of machine learning with inventions like Home, Cortana, and Alexa. In order for AI to operate well, it needs to mine data at an exponential rate. Large data sets in the trillions of bytes have to pass through the AI machine for it to grasp and formulate its rules and algorithms. For healthcare and life sciences domains, the need for real-time data mining is crucial. It is not advantageous in these domains to sift through data elements as a backlog activity. Actionable insight always rests on elements being generated in the present. However, this kind of activity has been constrained by technology and labor requirements. But if technology could be upgraded, along with the near elimination of labor resources, the health sciences would have an automated data mining system. Such a system is in its early stages at the Food and Drug Administration (FDA). Read Part 2 on data mining, the FDA Data Mining Council, and more.

References:
[1] https://www.forbes.com/sites/gilpress/2016/11/01/forrester-predicts-investment-in-artificial-intelligence-will-grow-300-in-2017/#3350400d5509  Accessed 29Oct2017
[2] Data Mining at FDA, Hesha J. Duggirala. 2016. Accessed 29Oct2017
[3] Oracle Empirica™ Manual, 2015
[4] http://www.impossibleminds.com/portfolio-item/msd/  Accessed 30Oct2017

Dr. Sameer Thapar is Global Pharmacovigilance Director, Oracle Health Sciences Consulting. Linkedin.com/in/Thapar


Health Sciences

Galen to Prix Galien -- from Rome to New York in Two Millennia

Next week, the Galien Foundation hosts the Prix Galien awards at The American Museum of Natural History in Manhattan. The gala event -- including the ceremony, cocktail party, and dinner -- recognizes outstanding achievements in pharmaceuticals, biotechnology, and medical technology that improve the human condition. The award is considered the equivalent of the Nobel Prize in life sciences research. Ever wonder how Prix Galien got its name? The award honors Claudius Galenus, an anatomist, physiologist, clinician, and researcher. He has been called the father of medical science and modern pharmacology, and his work was considered a reference for over two millennia. Born in Pergamos in 131 A.D., Galen studied in Smyrna, Corinth, and Alexandria, the three centers of medical excellence of the ancient world. According to legend, Galen dreamt of Asclepius, the god of medicine in ancient Greek mythology, and this dream inspired the rest of his life. When he turned 17, Galen worked as a physician at a gladiators' training center. When Galen was 37, Emperor Marcus Aurelius summoned him to Rome. There, he grew in reputation and stature as a healer, teacher, researcher, and writer. His ideas on the functioning of the human body were so well received that he became the personal doctor of young Commodus, the Emperor's heir. During his long, eminent life, Galen completed over 500 works on anatomy, physiology, pathology, medical theory and practice, and forms of therapy. His work formed the basis of Galenism, a medical philosophy that dominated medical thinking until the Renaissance. He also travelled throughout the world, studying local plants and remedies. He described 473 original drugs and many mineral- and plant-based substances. He was the first scientist to codify the art of preparing active drugs.
His observations, logic, and deduction made him the true successor to Hippocrates, and his view that the prime aim of medicine is patient care has formed the very cornerstone of modern pharmacy. Galen died in 201 AD. Oracle is honored to sponsor an award dedicated to such a remarkable medical pioneer and very proud to celebrate this year's Prix Galien nominees for their innovation in science and medicine. Join us in New York on October 26 for the Galien Forum, a day-long event where Nobel laureates and scientists discuss pressing health issues and scientific breakthroughs. And, in the evening, honor the life science innovators of today at Prix Galien.


Health Sciences

Data Models & Their Importance for Clinical Data Management

A data model is quite simply a 'description of the structure' of stored data. For example, a clinical trial in Oracle Health Sciences InForm is described by an InForm Clinical Trial Data Model. Data models are created by applications to store data and accessed by end users to manipulate data. Data models are important because they help end users to access data that is easily understandable, meaningful, and usable. For example, an Oracle Health Sciences InForm Clinical Trial Data Model is easily understandable to a study manager. But it is not immediately usable for submission to the FDA, because it is not in a submission-ready structure. The FDA requires clinical data to be provided as a Study Data Tabulation Model (SDTM) dataset. So the business need is for end users to easily create data models that can be used:

- To capture data (electronic data capture data models)
- To review data (review data models)
- To submit data to regulatory authorities (SDTM data models)
- To visualize data (analytics data models)
- For any other purpose

Rather like the alchemists of old who were trying to convert lead into gold, clinical R&D is currently challenged with slow, complex, manual processes that must transform raw clinical data sets into high-value clinical data models and datasets.

What This Means for Sponsors and CROs

Sponsors and CROs need to take raw clinical data and provide it to their internal stakeholders in real time to:

- Fix data collection and quality issues
- Identify clinical safety issues
- Monitor trial progress
- Support interim analysis

However, many data managers work with multiple systems, many CROs, and many internal clinical groups. Often they use processes that are disconnected, fragmented, expensive, and complicated to manage.
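As a deliberately simplified illustration of such a transformation, here is a sketch mapping a raw EDC-style record into an SDTM DM (Demographics)-like row. The SDTM variable names are standard, but the raw field names, coding, and mapping logic are invented for the example and do not represent any product's actual implementation:

```python
# Hypothetical raw record as an EDC system might store it.
RAW_EDC_RECORD = {
    "study": "ABC-123",
    "subject": "1001",
    "site": "12",
    "sex_code": "2",          # site-specific coding: 1 = male, 2 = female
    "birth_date": "1980-05-04",
}

SEX_DECODE = {"1": "M", "2": "F"}

def to_sdtm_dm(raw: dict) -> dict:
    """Map a raw record to an SDTM DM-like structure."""
    return {
        "STUDYID": raw["study"],
        "DOMAIN": "DM",
        "USUBJID": f'{raw["study"]}-{raw["site"]}-{raw["subject"]}',
        "SITEID": raw["site"],
        "SEX": SEX_DECODE[raw["sex_code"]],
        "BRTHDTC": raw["birth_date"],
    }
```

Multiply this by hundreds of variables, dozens of sources, and changing standard versions, and the value of doing the transformation once, declaratively, in a shared platform becomes clear.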
How Oracle Health Sciences Data Management Workbench (DMW) Can Help

Oracle DMW is the only industry solution that provides a unified platform in which data models can easily be created, stored, re-used, and transformed from one model to another. The value in this is that end users can easily connect to a data model to access and use clinical data in real time. This month at the Society for Clinical Data Management (SCDM) Annual Conference, sponsors and CROs will be looking for ways to enable the transparency and analytic capabilities of data within the clinical R&D process. They can meet Oracle there (Booth #310) to discuss:

- The ease of creating clinical data models
- The ease of transforming clinical data models from one to another
- The ease of connecting to a data model to visualize data
- The ease of automating clinical data flow to provide end users access to more accurate, higher-quality data faster, and at lower cost

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to help bring new drugs to market faster. Join us at SCDM, Booth #310, for a demonstration of how Oracle Health Sciences Data Management Workbench can add value to your patient-centric trials.

###

Srinivas Karri is Director of Life Sciences Product Strategy for Oracle Health Sciences.


Health Sciences

Investigator Payments- Where’s My Money?

Did you know that issues over payments are one of the top complaints of investigative research sites working in clinical research? Many investigator sites say that receiving payments and reimbursements are two of the most challenging processes in conducting clinical research. Many sites are not reimbursed in a timely manner, which can negatively impact the cash flow of a site. Additionally, many research sites outside of academic research institutes do not use a system for accounting; rather, they rely on spreadsheets or manual processes for reimbursements. From a technology perspective, Oracle's Siebel Clinical Trial Management System (CTMS) has supported the processing of payments to investigators for many years. But even with the background support of a full CRM system, this is still one of the most heavily configured components of a CTMS. One might ask: if a subject visit occurred, then payment should be sent, so what is so complex? A single investigator payment could contain multiple procedures, each of which would need to be confirmed by a clinical research associate (CRA) prior to payment. The payment also could have a withholding amount per the agreed site contract. For certain countries, value added taxes (VAT) need to be calculated, as well as exchange rates, and possibly split payments to different areas of the investigator site or hospital. Consider trying to process all this -- ensuring accurate payment by the pharma/CRO, reconciling by the investigator site, resolving any errors, and, most importantly for the investigator site, maintaining a positive cash flow -- multiplied by all patients across all studies at a site. One can understand why this is challenging and frustrating on both sides.
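To make the complexity concrete, here is a minimal sketch of a single payment calculation. The parameter names and the order of operations (withholding applied before VAT, then currency conversion) are illustrative assumptions; real site contracts and CTMS configurations vary widely on exactly these points:

```python
from decimal import Decimal

def investigator_payment(procedures, withholding_rate, vat_rate, fx_rate):
    """Compute one investigator payment (illustrative only).

    procedures       -- list of (procedure_name, amount in contract currency)
    withholding_rate -- fraction retained until study close-out, per contract
    vat_rate         -- value-added tax applied where required
    fx_rate          -- contract currency -> payout currency
    """
    gross = sum(Decimal(str(amt)) for _, amt in procedures)
    withheld = gross * Decimal(str(withholding_rate))
    vat = (gross - withheld) * Decimal(str(vat_rate))       # contract-specific
    payable = (gross - withheld + vat) * Decimal(str(fx_rate))
    return {"gross": gross, "withheld": withheld, "vat": vat,
            "payable": payable}
```

Even this toy version shows why configuration balloons: every rate, rounding rule, and ordering choice above is negotiated per contract and per country, then multiplied across every patient visit at every site.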
An Example of a Payment Process Timeline

Oracle sees customers move from generating payments through a CTMS system to generating payments based primarily on events that occur in EDC. These events can include a subject visit or a specific visit procedure, with minimal payments coming from CTMS for items such as site start-up grants or milestone-based payments. Oracle is working with third-party payment providers to ensure integration capabilities through standard Oracle Health Sciences processes (Oracle Health Sciences InForm Publisher, CTMS APIs), enabling the automatic exchange of information to the payment systems. Utilizing such processes not only helps automate payments, but also reduces the heavy configuration previously required in CTMS to support payment processing. Oracle continues to evaluate strategies for payment processing as it moves forward with Oracle Health Sciences Clinical One. For more information on site feedback regarding payments, see the white paper issued by the Society for Clinical Research Sites (SCRS), which may be accessed here.


Health Sciences

What You Need to Know about Clinical Trial Phases

In our modern age, small ailments through to serious diseases are treated with pharmaceuticals. But before a pharmaceutical is made available, it must go through rigorous testing within a clinical trial to assure its safety and efficacy.  Clinical trials are critical because they supply patient-based evidence that a drug can fight a disease. Yet many people have no idea how these studies work. The American Cancer Society (ACS) offers a detailed description of the clinical trials process.  The article below draws on ACS concepts as source material. Understanding how clinical trials work is important. They are usually conducted in phases that build on one another. Each phase is designed to answer certain questions. Knowing the phase of a clinical trial is important because it can provide insight into what is known about the treatment under study. There are pros and cons to taking part in each phase of a clinical trial. Phase I Clinical Trials: Is the treatment safe? The Phase I study of a new drug is usually the first stage that involves people. The main reason for conducting a Phase I study is to find the highest dose of a new treatment that can be given safely without serious side effects. Although the treatment has been tested in lab and animal studies, the side effects in people cannot always be predicted. These studies also help determine the best way to give the new treatment. Key points of Phase I Clinical Trials: The first few people in the study often get a very low dose of the treatment and are watched very closely. If there are only minor side effects, the next few participants may get a higher dose. This process continues until doctors find a dose that is most likely to work, while having an acceptable level of side effects. The focus in Phase I is to look at what the drug does to the body, and what the body does with the drug. Safety is the main concern at this point. Doctors keep a close eye on the people participating and watch for any serious side effects. 
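The escalation process described above -- start low, go higher only if a cohort tolerates the dose, stop when side effects become unacceptable -- resembles a simple loop. The sketch below is purely illustrative (the function names and the toy tolerance rule are invented here) and is not a real trial design such as 3+3:

```python
def escalate_dose(dose_levels, cohort_tolerated):
    """Illustrative Phase I escalation logic.

    `dose_levels` is an ordered list of candidate doses (lowest first);
    `cohort_tolerated(dose)` reports whether a small cohort experienced
    only minor side effects at that dose. Escalation stops at the last
    tolerated dose, which is returned (None if even the lowest fails).
    """
    recommended = None
    for dose in dose_levels:
        if cohort_tolerated(dose):
            recommended = dose      # acceptable side effects: go higher
        else:
            break                   # serious side effects: stop escalating
    return recommended

# Toy example: doses above 40 units are not tolerated.
print(escalate_dose([10, 20, 40, 80], lambda d: d <= 40))  # -> 40
```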
Because of the small number of people in a Phase I study, rare side effects may not be seen until later. Placebos (sham or inactive treatments) are not part of Phase I trials. These studies usually include a small number of people, typically up to a few dozen.  They are typically done in units that run Phase I studies as a business, although in oncology and other special studies, depending on the therapy area, they can be done at major hospitals, community hospitals, and doctors’ offices. Phase II Clinical Trials: Does the treatment work? If a new treatment is found to be reasonably safe in a Phase I study, it can then be tested in a Phase II trial to find out if it works. The type of benefit or response the doctors look for depends on the goal of the treatment. The benefit may be that there is an extended period of time during which the patient does not get any worse.  In some studies, the benefit may be an improved quality of life. Many studies look to see if people getting the new treatment live longer than they would have been expected to without the treatment. Key points of Phase II Clinical Trials: Usually, groups of 25 to 100 patients with the same type of illness get the new treatment in a Phase II study. They are treated using the dose and method found to be the safest and most effective in Phase I studies. In a Phase II clinical trial, all the volunteers usually get the same dose. However, some Phase II studies randomly assign participants to different treatment groups (much like what is done in Phase III trials). These groups may get different doses or get the treatment in different ways to see which provides the best balance of safety and effectiveness. Phase II studies are done at major hospitals, community hospitals, and doctors’ offices. Larger numbers of patients get the treatment in Phase II studies, so there is a better chance that less common side effects may be seen. 
If enough patients benefit from the treatment, and the side effects aren’t too bad, the treatment is allowed to go on to a Phase III clinical trial. Along with watching for responses, the research team keeps looking for any side effects. Phase III Clinical Trials: Is it better than what’s already available? Treatments that have been shown to work in Phase II studies usually must succeed in one more phase of testing before they are approved for general use. Phase III clinical trials compare the safety and effectiveness of the new treatment against the current standard treatment. Because doctors do not yet know which treatment is better, study participants are often picked at random (called randomization) to get either the standard treatment or the new treatment. When possible, neither the doctor nor the patient knows which of the treatments each patient receives. This is called a double-blind study. Key points of Phase III Clinical Trials: Most Phase III clinical trials have a large number of patients, at least several hundred. These studies are often done in many places across the country (or even around the world) at the same time. Phase III studies are often done at major hospitals, community hospitals, and doctors’ offices. These studies tend to last longer than Phase I and II studies (24 to 48 months). Placebos may be used in some Phase III studies, but they are never used alone if there is a treatment available that works. As with other studies, patients in Phase III clinical trials are watched closely for side effects, and treatment is stopped if they are too bad. Submission for FDA Approval: New Drug Application (NDA) In the United States, when Phase III clinical trials (or sometimes Phase II studies) show a new drug is more effective and/or safer than the current standard treatment, a new drug application (NDA) is submitted to the Food and Drug Administration (FDA) for approval. 
The FDA then reviews the results from the clinical trials and other relevant information. Based on the review, the FDA decides whether to approve the treatment for use in patients with the type of illness for which the drug was tested. If approved, the new treatment often becomes a standard of care, and newer drugs must often be tested against it before they are approved.   If the FDA feels that more evidence is needed to show that the new treatment's benefits outweigh its risks, the agency may ask for more information or even require that more studies be done. Phase IV Clinical Trials: What else do we need to know? Drugs approved by the FDA are often watched over a long period of time in Phase IV studies. Even after testing a new medicine on thousands of people, the full effect of the treatment may not be known. Some questions may still need to be answered. For example, a drug may get FDA approval because it was shown to reduce the risk of cancer coming back after treatment. But does this mean that those who get it are more likely to live longer? Are there rare side effects that have not been seen yet? Or are there some side effects that only appear after a person has taken the drug over a long period of time? These types of questions may take many years to answer and are often addressed in Phase IV clinical trials. Key points of Phase IV Clinical Trials: Phase IV studies look at drugs that have already been approved by the FDA. The drugs are available for doctors to prescribe for patients, but Phase IV studies may still be needed to answer important questions. Phase IV studies may involve thousands of people. This is typically the safest type of clinical trial because the treatment has already been studied in detail and may have already been given to many people. Phase IV studies look at the safety of the treatment over time. These studies may also look at other aspects of the treatment, such as quality of life or cost effectiveness. 
Patients can get the drugs used in a Phase IV trial without enrolling in a study, and the care they receive is very much like the care they would receive outside of a study. But unlike patients treated outside of a Phase IV clinical trial, these patients are helping researchers learn more about the treatment and doing a service to future patients. James Streeter is Oracle Health Sciences' Global Vice President, Life Sciences Product Strategy.  


Health Sciences

Realizing Clinical Trial Value and Business Efficiency in the Cloud

One of the most interesting aspects of organizations moving to the cloud has been increased scrutiny around value.  Increasingly, many of our customer interactions feature one or more conversations around business value and operational efficiencies.  Value is important because it allows organizations to focus on what matters in a new IT project. It helps manage and maintain scope, cost, and risk.  Value can be quantified at any stage, using business objectives during and post implementation to ensure continued, focused delivery through organizational alignment. This article discusses value in relation to multiproduct platform solutions in the Oracle cloud. Value: Assuming that a customer is considering deploying one or more Oracle products and associated services, it becomes necessary to quantify the opportunity cost of the change and balance it against the potential ROI benefits. The following describes the value associated with Oracle Health Sciences Data Management Workbench (DMW). Additionally, there are significant operational benefits associated with moving to the cloud that also should be considered and quantified. Figure 1.  One definition of value can be represented as the freeing up of cash flow as a result of decreasing direct and indirect costs, increasing revenue, or decreasing investment in fixed assets.  By increasing incremental free cash flow, value is generated within the enterprise. Business value is at the heart of any business case and is, in essence, the key driver for any new implementation.  The business case requires at least three components: Understanding what product capabilities are provided by the solution and the business benefits.  For example, Oracle DMW helps organizations automatically aggregate, clean, and transform data, which they may currently do manually.  The business benefit is automation and making data more available to a larger number of downstream consumers.  
Understanding how the business benefits translate to operational improvements, which are characterized by operational metrics.  For example, using Oracle DMW, an organization can expect a decrease in the operational costs associated with study setup, data cleaning, and data transformation. Examples of operational metrics that capture these improvements are: cost per change order ($); cost per data load per study ($); cycle time from raw to review model (days); and man-hours spent on validation and testing for a new study build. For example, a study build in Oracle DMW can be performed in less than an hour; some organizations take over eight weeks to perform the same task. Understanding how operational metrics translate into financial metrics.  Operational metrics, such as the cost of operations, translate directly into financial metrics, such as direct and indirect operations costs.  However, the link between operational metrics and financial metrics is not always clear and may even overlap between different operational metrics.  Usually the activity of translating operational metrics into financial metrics is performed by the customer using proprietary data.  Figure 2.  Operational metrics are intimately tied to financial metrics that appear on financial statements. In addition to these elements, the business case should also describe how the benefits would be realized through the implementation and subsequent go-live timeline.  Value may continue to be derived well after go-live and may even increase through greater organizational adoption of the solution. Measuring Value by Implementing Oracle DMW: So, given this approach to quantifying value, how can it be applied to implementing DMW?  Figure 3.  Value categories impacted by implementing Oracle DMW. 
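The translation from an operational metric to a financial metric can be made concrete with back-of-envelope arithmetic. All of the inputs below -- the study count, the hours saved, and the hourly cost -- are hypothetical figures chosen for illustration, not Oracle-published numbers:

```python
def annual_savings(studies_per_year, hours_saved_per_study, hourly_cost):
    """Translate an operational metric (hours saved per study build)
    into a financial metric (annual direct cost reduction).
    All inputs are illustrative assumptions."""
    return studies_per_year * hours_saved_per_study * hourly_cost

# E.g., if a study build drops from roughly eight weeks (~320 hours)
# to about one hour, and an organization builds 25 studies per year
# at a blended labor cost of $85/hour:
print(annual_savings(studies_per_year=25,
                     hours_saved_per_study=319,
                     hourly_cost=85.0))
```

The customer would replace each assumption with its own proprietary data, which is exactly the step the article notes is usually performed in-house.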
To help our customers navigate the value realization roadmap, Oracle Health Sciences (OHS) has developed a comprehensive tool that describes product features, business benefits, operational metrics, and mapping to quantifiable value categories.  With this tool, OHS customers can create a pragmatic, focused approach to a successful DMW implementation that can form the foundation of their clinical data collection and management strategy. If you would like to know more about this approach and would like to use the tool, please don’t hesitate to get in touch. If you have any questions or suggestions based on what you’ve just read or what you would like to read about, I’d love to hear from you. Contact me at: srinivas.karri@oracle.com


Life Sciences

The FHIR RESTful Services Standard

Real world data generates real world evidence.  Many use cases drive the biopharma organization’s need to do this.  These use cases can aid many areas within the biopharma organization, including the commercial, health economics and outcomes research (HE&OR), early development, translational research, corporate strategy, safety, and clinical R&D groups.  Therefore, real world evidence substantially enhances the effectiveness of the overall biopharma organization.  This post discusses how a biopharma organization’s clinical R&D team can populate case report forms (CRFs) in an electronic data collection (EDC) application from electronic health record (EHR) systems using Oracle Health Sciences InForm and the emerging HL7 standard known as Fast Healthcare Interoperability Resources (FHIR). The FHIR standard is a specification for implementing RESTful Services* (a technology approach to building APIs). It enables access to patient data in EHR systems in support of system-to-system communication (i.e., interoperability).  The FHIR standard is actively under development by the HL7 standards organization and is maturing rapidly from investment over the past 5+ years. Some of its specifications are implemented in several EHR vendors’ systems.  So, code can be written once to the interface standard, and data can be accessed from supporting EHR systems.  While it is not yet a fully mature standard, it certainly has substantial momentum in the patient care technology arena.  Over the last couple of years, there has been a focus on using FHIR RESTful Services to integrate patient care data into the clinical research process.  A key use case has been populating electronic data collection (EDC) system case report forms (CRFs) from EHR systems.  This use case can help the biopharma company and its sites participating in a clinical trial to: 1. Reduce overall data entry volume for each clinical trial 2. 
Improve quality of entered data for each clinical trial 3. Reduce clinical trial costs, as data populating the CRF from the EHR system does not have to be source data verified.  Source data verification is typically done at each site participating in the clinical trial by biopharma-retained employees known as contract research associates (CRAs). Exploring FHIR right now, there is clear evidence that biopharma companies are experimenting with EHR to EDC integration.  Recent discussions with several top biopharma organizations indicate interest as companies look to exploit the benefits of this approach.  HL7 Connectathon Connectathons support quick and easy experimental approaches to developing the various standards the HL7 organization produces.  HL7 continues to use this technique, now over 20 years old, as it develops the FHIR RESTful Services standard.  A connectathon is a weekend “hacking” session. It includes members of healthcare organizations who attend and work to “prove” the standard.  They hack together “quick and dirty” working code (a FHIR RESTful Services working prototype) demonstrating that the standard actually will work in the real world. These events occur two to three times per year as the standard is developed.  Oracle Health Sciences (OHS) is participating in the 2017 FHIR Connectathon this autumn.  Currently OHS is building an integration using the HL7 FHIR RESTful Services with InForm. The integration enables mapping patient data from an EHR system into the appropriate trial visit CRF schedule. It can include information such as demography, vital signs, and more.  The integration initiative will help OHS participate in the upcoming HL7 FHIR Connectathon this fall.    The InForm Portal is a key component enabling a user to configure the basic mapping metadata required for the integration: 1. Which fields in each CRF form map to which fields in the source EHR system 2. Code list mapping for the CRF fields referenced in #1, above 3. 
Which patient care visit date ranges (aka Encounters) map to which visits in the visit schedule 4. The initiation of the data movement from the source EHR to the targeted InForm system. This is all accomplished by using the FHIR RESTful Services in the standard as they interact with InForm’s Clinical APIs. Customer Focus This summer, OHS is running the first meeting of a newly formed, patient EHR data driven Clinical R&D Customer Focus Group and plans to showcase the FHIR RESTful Services to InForm integration.  We’d like to gather customer feedback on this very powerful, high value-add, real world data/evidence use case.  In addition, we will share these advances with our customers as we continue to help them understand what is possible with the HL7 FHIR standard and InForm’s robust clinical API capability.  OHS is at the beginning of its FHIR RESTful Services journey with biopharma customers.  The early steps include working with customer partners running initial pilots with participating clinical trial sites.  In 2016, OHS worked with a large healthcare organization and a big pharma company on developing pilots in this area. Additionally, two other large biopharmas are considering running FHIR RESTful Services pilots on Phase I oncology sites and a COPD observational study.  Stay tuned for progress updates in this exciting, new innovation area. Greg Jones is responsible for Enterprise Architecture Strategy for Oracle's Health Sciences business.    *Representational state transfer (REST) or RESTful Web services are one way of providing interoperability between computer systems on the Internet.
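The field- and code-list mapping step described above can be sketched against a real FHIR resource shape. The Observation resource and LOINC codes below follow the published FHIR blood pressure pattern (85354-9 panel, 8480-6 systolic, 8462-4 diastolic), but the trimmed resource content and the CRF field names (`VS_SYSBP`, `VS_DIABP`) are hypothetical, and this is not the InForm Portal's actual mapping mechanism:

```python
import json

# A trimmed FHIR Observation, as an EHR might return it from
# GET [base]/Observation?patient=123&code=85354-9 (blood pressure panel).
observation = json.loads("""{
  "resourceType": "Observation",
  "status": "final",
  "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9"}]},
  "component": [
    {"code": {"coding": [{"code": "8480-6"}]},
     "valueQuantity": {"value": 120, "unit": "mmHg"}},
    {"code": {"coding": [{"code": "8462-4"}]},
     "valueQuantity": {"value": 80, "unit": "mmHg"}}
  ]
}""")

# Hypothetical mapping from LOINC component codes to EDC CRF field names.
crf_field_map = {"8480-6": "VS_SYSBP", "8462-4": "VS_DIABP"}

def observation_to_crf(obs, field_map):
    """Map FHIR Observation components onto CRF fields -- a sketch of
    the field mapping step, ignoring units and code-list translation."""
    crf = {}
    for comp in obs.get("component", []):
        loinc = comp["code"]["coding"][0]["code"]
        if loinc in field_map:
            crf[field_map[loinc]] = comp["valueQuantity"]["value"]
    return crf

print(observation_to_crf(observation, crf_field_map))
```

A production integration would also handle unit conversion, code-list mapping, and the Encounter-to-visit matching listed as item 3 above.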


Life Sciences

Oh No! Johnny's Getting Hypertension Again

Imagine if we could remotely connect to subjects in clinical trials and measure their vitals. Imagine if we could take those measurements, in real time, and deliver them to an investigator’s Oracle Health Sciences InForm desktop solution for his/her review. Imagine if a subject felt less burdened by having his six-weekly investigator meetings from the comfort of his home, instead of making a two-hour commute to the clinic.  Imagine that when the subject arrives at the clinic, he can’t find a parking space, so by the time he reaches the clinic, he is really frustrated and fuming mad that he has to be on this “damn clinical trial in the first place.”  “Sit down Johnny.  Let me take your blood pressure. It seems a little high.  I think you have hypertension…”  “NO! I DO NOT!!! I tell you what I do have.  It’s A PARKING TICKET!  I am quitting this clinical trial right now. GOODBYE!” Imagine keeping Johnny as a subject on the trial because he used a remote wearable sensor device. Imagine his data flowing seamlessly into Oracle InForm for site review, and simultaneously, into Oracle Health Sciences Data Management Workbench (DMW). Imagine the data instantly transformed into SDTM data formats and made immediately available for downstream, actionable review by data managers, medical reviewers, and clinical monitors. Well, the good news is, you don’t need to imagine these things for much longer. The Oracle Health Sciences (OHS) team has just released the first component of its mHealth stack. This library allows app developers to create iOS/Android mHealth mobile apps that connect to the Oracle Cloud. The OHS team welcomes the opportunity to leverage this exciting new offering to enable connected scenarios. For more info, please watch this video. Jonathan Palmer is Oracle Health Sciences Senior Director, Product Strategy.
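The kind of reading Johnny's wearable would send upstream can be pictured as a small JSON payload. To be clear, the field names and structure below are entirely made up for illustration; they are not the Oracle mHealth library's API or data model:

```python
import json
import datetime

def reading_payload(subject_id, systolic, diastolic):
    """Hypothetical payload a wearable companion app might post to a
    cloud endpoint. Every field name here is illustrative, not the
    actual Oracle mHealth schema."""
    return json.dumps({
        "subjectId": subject_id,
        "type": "blood_pressure",
        "systolic_mmHg": systolic,
        "diastolic_mmHg": diastolic,
        "capturedAt": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

print(reading_payload("SUBJ-001", 120, 80))
```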


Life Sciences

EMR to EDC for RWE

A new real world data/real world evidence (RWD/RWE) industry trend is emerging: electronic medical record (EMR) to electronic data collection (EDC) integration.  Here, instead of trial research staffers entering patient history data, the sponsor implements the “wiring” to take the data directly from the site’s EMR system. Several members of the Oracle HSGBU team, including myself, Paul Boyd, Kim Rejndrup, and Chris Huang, have been working on this with our biopharma and healthcare provider customers.  Here are a few of the things we’ve learned: -In the summer of 2016, the FDA published guidance around considerations for biopharma organizations as they move toward EMR to EDC integration. -Every EMR system out there will have only a subset of the information necessary for a particular clinical trial, and that subset will vary by clinical trial and therapeutic area.  It’s too early to know precisely what percentage of coverage will be available; right now, the expectation is 40-70 percent (40-70%) on average, per trial.  Again, this is an evolving learning experience in relation to our work and our partnerships with our customers. -Here are some powerful numbers Paul Boyd put together. 1) There are 745,000 data points in a production trial. 2) Approximately 300,000 data entry strokes could be saved if 40 percent (40%) of the data from the EMR could be mapped.  There can be substantial savings of time and energy with EMR to EDC integration!  These numbers also produce savings on the monitoring side of the equation.  Data fields in the InForm case report forms (CRFs) sourced from the EMR system don’t have to be source data verified (SDV’d) by sponsor employees monitoring the clinical trial at each site (usually known as contract research associates).  Therefore, there will be substantial savings as a major benefit, as well. 
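The figures above are a simple proportion, and the same back-of-envelope check can be run for any coverage estimate in the 40-70% range mentioned earlier (the function name is just for this sketch):

```python
def entry_strokes_saved(total_data_points, emr_coverage):
    """Back-of-envelope estimate: data points that no longer need
    manual entry when a fraction can be mapped from the EMR."""
    return round(total_data_points * emr_coverage)

# 745,000 data points with 40% EMR coverage -> roughly 300,000 saved.
print(entry_strokes_saved(745_000, 0.40))

# Upper end of the 40-70% coverage expectation:
print(entry_strokes_saved(745_000, 0.70))
```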
-Academic Medical Centers (AMCs) that participate in clinical trials as sponsors have their research staff perform the dual data entry as explained above.  In recent discussions with AMCs, they’ve also indicated that sometimes they’ll set up a “shadow” EDC system that they own and run for a particular trial, and input the data yet again (in triplicate!).  They want to capture the data they are providing to the sponsor in their own internal research database to advance their various research programs. -Our customers are interested in the APIs and various technology interfaces Oracle Health Sciences supports on the InForm platform.  These interfaces are key to allowing them to bring data from sites’ EMR systems into their InForm clinical trial instances to capture the above stated benefits. -Our customers are interested in Oracle Healthcare Foundation (OHF).   Imagine this scenario. A large biopharmaceutical organization is running 100-plus clinical trials.  It has piloted EMR to EDC integration in several trials over a couple of years.  Now the organization is ready to ramp up this capability as a standard element for many of the clinical trials in its portfolio.  Each trial on average has about 65 sites (that number comes from our experience with production trials run in our Health Sciences Cloud over the years).  Plus, a number of the sites the biopharma deals with will likely participate in multiple trials with it – not just the one trial.  Some percentage of the sites in a trial (an educated guess is 50-80 percent (50-80%) of them) have the ability to provide EMR data to the biopharma’s EDC system for that clinical trial.  Here’s the challenge. Does the biopharma “wire” each individual InForm instance for each trial to every site?  That’s definitely doable from a brute force perspective, but probably not ideal, as it will be very expensive.  
Or does the organization use a technology like OHF as a platform to receive data from sites, doing all the wiring and plumbing to those sites once on the platform?  Then, as new trials start up, they “wire” InForm to the OHF platform, which serves as a “hub,” and go-live costs are likely substantially smaller. -An exciting HL7 standard known as FHIR (Fast Healthcare Interoperability Resources) appears to have a decent amount of traction in the industry.  Organizations such as Cerner, Epic, Meditech, and GE Healthcare have implemented this RESTful* Service based API on top of their recent EMR application releases.  Organizations are starting to use these APIs to build and deliver new applications and mobile apps to improve patient and staff healthcare activities.  In addition, the FHIR APIs allow support for interoperability among systems.  This is one of the more exciting use cases for real world data and real world evidence, and EMR to EDC integration will continue to be explored with our customers.  The Oracle Health Sciences team plans to be there, working closely with them as the technology, policy, and business process challenges are resolved.  Ultimately this will become a mainstream approach, increasing clinical research efficiency to deliver new therapies to patients more rapidly.   *Representational state transfer (REST) or RESTful Web services are one way of providing interoperability between computer systems on the Internet.   Greg Jones is responsible for Enterprise Architecture Strategy for Oracle's Health Sciences business. 



Metadata Management in Clinical Trials

Metadata management in clinical R&D is centered on the concept that each piece of data collected for a clinical trial, as defined by that trial’s protocol, can be managed independently.  Each piece of metadata, and logical groupings of many metadata items together, can be governed and managed in the organization. This includes version control, data edit rules for that data item, and transformation rules for that data item as it changes to support the analysis process.  In addition, it also supports the ability to trace that data element through the entire clinical trial lifecycle. This traceability extends from the time the element is initially captured during the clinical trial through to its submission to regulators.  This trace covers all information on how that data element contributes to proving the efficacy and safety of a new therapy for regulatory approval.   As a quick example, blood pressure can be a metadata item for a clinical trial.  One can attach edit rules to that blood pressure item to ensure accuracy, so that when the user inputs the data, it is edit-checked properly. One can also define transformation logic to change the representation of the blood pressure data element from its data entry form to a completely different representation for an analysis dataset, written in SAS code, used to prove the efficacy and safety of the therapy.   I can do all that and maintain version control of that blood pressure data element as I change its representation over time.  Here’s another example. If I use that blood pressure data element consistently across all my clinical trials, then when I change that data element and produce a new version, I can query my metadata system to find out which clinical trials in my portfolio will be impacted by changing that blood pressure data element.   
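The blood pressure example above -- a governed item carrying edit rules, a version number, and an impact-analysis query across trials -- can be sketched in a few lines. This is an illustrative data structure under assumed names, not the schema of any actual MDR product:

```python
from dataclasses import dataclass, field

@dataclass
class MetadataItem:
    """Illustrative version-controlled metadata item (hypothetical
    structure, not a specific metadata registry schema)."""
    name: str
    version: int = 1
    edit_rules: list = field(default_factory=list)     # validation checks
    used_in_trials: set = field(default_factory=set)   # for impact analysis

    def validate(self, value):
        """Apply all attached edit rules to an entered value."""
        return all(rule(value) for rule in self.edit_rules)

    def new_version(self):
        """Bump the version and return the trials impacted by the change."""
        self.version += 1
        return self.used_in_trials

bp = MetadataItem("systolic_bp_mmHg",
                  edit_rules=[lambda v: 60 <= v <= 260],   # plausibility check
                  used_in_trials={"TRIAL-001", "TRIAL-007"})

print(bp.validate(120))          # passes the edit check
print(sorted(bp.new_version()))  # trials to re-review after the change
```

The impact-analysis query ("which trials does this change touch?") is exactly the `used_in_trials` lookup: the value of central metadata management is that this set is maintained in one governed place rather than rediscovered trial by trial.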
If I am running multiple clinical trials, each of which has potentially hundreds of data items to be collected, then metadata management can help me manage those clinical trials operationally.  Metadata management also helps me ensure that I maintain full regulatory compliance and traceability as data goes through its lifecycle from capture to submission.  As mentioned at the beginning, this industry trend has been a long time coming.  The industry's move over the last 10 years away from paper-based clinical trials to electronic data capture based trials set the stage for this type of capability and for the operational savings from a successful metadata management solution deployment.  Unfortunately, expanding from the basic functionality described and scaling it up across a large-scale clinical trial operation has proven to be very elusive to date for many organizations.  The metadata management highway is littered with organizations that have failed to deploy it successfully in their complex clinical trial environments.  The reasons for failure are complex, including the challenge of the activity and the resulting impact on the business processes of these large, complex, early-pioneering organizations.  Recently, there’s been a new wave of momentum!  Oracle Health Sciences’ (OHS) powerful partner ecosystem around its clinical R&D applications is kicking in to drive the next set of attempts at metadata management.  Specifically, OHS partner Accenture is leading the charge in close collaboration with two top OHS customers - GlaxoSmithKline (GSK) and Eli Lilly & Company - to tackle, once again, this very complex problem space.   Accenture is working to build a module called Metadata Registry (MDR) with the above mentioned capabilities.  Progress is very promising to date!  
The Accenture, GSK, and Lilly team has a large number of the above mentioned capabilities implemented successfully and is going through testing with GSK and Lilly.   In addition, the team will ultimately integrate the MDR with the OHS Central Designer and Data Management Workbench applications.  This integration fulfills the promise of the enormous value of the MDR module.  Clinical trial metadata can be managed, version controlled, and more, within the module. Then, that same metadata can be pushed into the Central Designer and Data Management Workbench applications at study startup and for the management of changes to studies in progress.   This will reduce the amount of time it takes to manage clinical trials operationally in each company’s portfolio and will increase the traceability and auditability each company needs for regulatory compliance.     Greg Jones is an Enterprise Strategy Architect with Oracle Health Sciences.



Population Health and Oracle Healthcare Foundation’s Partner Ecosystem

As the Population Health Strategist for Oracle Health Sciences (OHS), I enjoy the ability to continue my educational quest for healthcare knowledge.  Between reading the Federal Register and House bills, catching up with C-SPAN, and keeping up with my friends from industry, I am amazed at the number and variety of population health applications currently available.  When I first started 20 years ago, the concept of "population health" was something closer to evidence-based medicine. Today, population health is synonymous with a range of US healthcare market subjects including patient identification, cost analysis, clinical care gaps, precision medicine and early identification, outcomes measures, and EHR implementations. We on the OHS team look at population health from the ground up: we aggregate a healthcare organization's data once and use it many times for any question posed, today, tomorrow, a year from now, or five years from now.  Our OHS team has invested in Oracle Healthcare Foundation (OHF) as the data aggregation and normalization engine that can fulfill population health discovery and care transformation, both clinically and financially. OHF offers a fit-for-purpose analytics platform that provides a data acquisition, data integration, data warehousing, and data analytics solution.  The solution addresses current market needs for organizations evaluating value-based care, quality measure performance, and internal cost and care team effectiveness, evaluating an organization's information and turning data-driven insight into action.  In addition, OHS actively recruits top-quality population health partners to build out our partner ecosystem. These efforts leverage and extend OHF in the vast population health analytics space. OHS also invites its partners to join the Oracle Validated Integration program.
Prior to the invitation, the OHS strategy team takes several fluid factors into account, including:

· The organization's ranking according to IDC, KLAS, Gartner, Forrester, and other analyst organizations
· A review of how the partner organization addresses the healthcare market's business needs
· A review of the landscape for emerging healthcare trends and initiatives at the federal, state, and local levels

Oracle Validated Integration, available through the Oracle PartnerNetwork (OPN), gives customers confidence that the integration of a complementary partner software product with an Oracle application has been validated, and that the products work together as designed. This can help customers reduce risk, improve system implementation cycles, provide for smoother upgrades, and ensure simpler maintenance. Oracle Validated Integration applies a rigorous technical process to review partner integrations. Partner companies that successfully complete the program are authorized to use the "Oracle Validated Integration" logo.

At this year's HIMSS17 in Orlando, FL, OHS (Booth #3349) is pleased to join the population health conversation with our newest Oracle Validated Integration partners: Enli Health Intelligence, SpectraMedix, and SCIO Health Analytics. These partners have developed and tested pre-built integrations between Oracle Healthcare Foundation and their population health analytics applications.

About Enli Health Intelligence
Enli Health Intelligence™ is the market leader in population health management technology. Enli enables care teams to perform to their full potential by integrating healthcare data with evidence-based guidelines embedded in provider workflows across the population and at the point of care.
@enlihealthintel, HIMSS17 Booth #2723

About SpectraMedix
SpectraMedix empowers health systems, hospitals, and other provider organizations to transition to fee-for-value and shared-risk programs using advanced quality measure, performance reporting, and predictive modeling solutions in support of DSRIP, PRIME, and operational activities. @SpectraMedix, HIMSS17 Booth #1889

About SCIO Health Analytics
SCIO identifies, risk-stratifies, and applies claims-based predictive modeling to patient populations, based on actionable care gaps, to design effective programs and meet value-based care delivery initiatives. @SCIOAnalytics

Lesli Adams, MPA, is Director, Population Health Strategy for Oracle Health Sciences. Visit us in the Oracle Booth #3349 at HIMSS17 in Orlando, Feb 19 - Feb 23, 2017.



Recap: Northern California HIMSS Innovation Conference and Showcase by Rahul Dwivedi

The Northern California HIMSS Innovation Conference and Showcase in Santa Clara this January was very well attended by industry think tank executives and high-profile industry leaders. Sessions sponsored by industry leaders such as Intel, Oracle, Salesforce, Health Catalyst, and others provided leading views on personalized healthcare, innovation, and the impact of artificial intelligence (AI) and machine learning (ML) on precision medicine and on the healthcare industry in general. The event featured discussions on innovation models for real-time analysis of key performance, task, or workflow indicators that could optimize processes and advance healthcare through more predictive, prescriptive information and insights. More specifically, panels addressed a range of issues including:

· Innovative thinking from outside healthcare to transform personalized health and patient engagement through digital, mobile, and the cloud
· Genomics and IoT: behavioral, clinical, and genetic data capture/analysis, Intel's "inside-out" model
· How formal collaboration optimizes "inside-out" commercialization, research and tech transfer, and venture fund approaches to bringing ideas to fruition and fertile markets
· Generating actionable insights and speeding innovation by continually looking at new and better ways to manage data
· Understanding how machine learning can be woven into existing population health and personalized care business intelligence and analytics efforts

In his keynote address, Bob Rogers, Intel's Chief Data Scientist for Big Data Solutions, described the current state of healthcare and the future role innovation would play in access to information, patient engagement, logistics, and hand-off. He explained that these are areas in which innovation could have significant impact in creating value for all healthcare industry players, from patients, to providers, to HIT companies.
Highlights from the Panel Discussions

The panel discussion on AI and ML for precision medicine and population health, led by Oracle Health Sciences Product Director Prashant Natarajan, covered healthcare-specific innovations in the ML field and the importance of going beyond models that worked successfully in other industries. Mr. Natarajan emphasized that healthcare innovations are created to augment the clinician's ability to serve individuals and tackle issues at the population level, not to replace the clinician. Describing how AI and ML can help develop the right patient treatment at the right time, he also offered insight into data quality and how traditional amounts of data (small data) can be made to work more effectively.

From left to right: Oracle's Prashant Natarajan; Zeeshan Syed, MD, PhD, Clinical Assoc. Prof., Stanford School of Medicine; Mitesh Rao, MD, Prof., Stanford School of Medicine; Oscar E. Streeter, MD, FACRO, Center for Thermal Oncology; and Bob Rogers, PhD, Chief Data Scientist for Big Data Solutions, Intel Corporation.

After the discussion, audience members emphasized the importance of big data analytics, privacy, transparency, and data security, and agreed with various points put forward by Mr. Natarajan. There was also additional discussion of how both clinical (population health) and omics (precision medicine) data can be stored together in an integrated platform. Medigram CEO Sherri Douville offered a presentation on data democratization and data governance. The session provided great insight into what makes data governance important and how to lead data governance efforts successfully within an organization. The panel speakers drew on their real-life experiences and their comprehensive understanding of innovation and data governance.
During the panel Fail Fast, Succeed Fast, or Fail Permanently: Emerging Models of Medical Device and Clinical HIT Innovation, entrepreneurs and finance industry speakers presented several successful emerging models of innovation employed across different companies. Companies from medical device production to healthcare information technology have been using these models to propel innovation, both internal and external to their organizations. The session offered a great overview of identifying new ideas to match demand and building solutions that achieve scale and reap benefits. Overall, this was a great gathering of academic think tank executives and industry leaders coming together to discuss the current state of innovation and AI/ML in the healthcare industry and what to expect in the future.

Rahul Dwivedi is Principal Applications Engineer, Oracle Health Sciences.



The Future of Value Based Care

The current healthcare delivery landscape is changing dramatically. Major regulatory reimbursement models are evolving from fee-for-service to fee-for-outcomes and value. This transformation to value-based contracting involves multiple facets of the organization and an even greater demand for data. The shift requires health systems to leverage actionable patient outcome and cost analytics, and to manage several other constraints, in order to address value-based contracting, quality measure performance, internal costs, and care team effectiveness. Each health system must maintain market share in an environment of ever-increasing competition and ensure quality of care at reduced cost. Today, many organizations have an agnostic analytic strategy: they align their actionable intelligence to explore patient information, provider recommendations, resource utilization, and care outcomes. All this information is evidence-based and demonstrates wise stewardship of resources. With this strategy, these organizations believe they'll be prepared for the future. Over the past three decades, there has been a constant reformulation of acronyms for reporting mandates, a flurry of new scoring measures, and an expanding list of organizations that require reporting on utilization, treatment patterns, care outcomes, internal cost analysis, and reimbursement models. To ensure that care at the point of delivery uses the right guidelines and the right utilization of services, with the right access to care without delay, deft organizations with agnostic analytic strategies should aggregate their enterprise data once and employ data governance to control its variability. These processes support better clinical and financial decision making and optimized treatment planning.
In this way, these organizations will be better prepared for the inevitable changes ahead, with nimble, flexible capabilities that can help them meet unpredictable legislation and reporting mandates. The ability to deliver optimal care depends on data availability, actionable insight, and effective prioritization. Join our conversation about the next generation of healthcare analytics that supports your organization's population health, revenue cycle, care transformation, and clinical decision support activities. Register here.

Lesli Adams is Director, Healthcare Strategy for Oracle Health Sciences.


Life Sciences

Interpreting Big, Real World Data – the New Clinical Data Scientist Role

With cloud technologies now commonplace for storing and managing big, real-world volumes of clinical (genetic, BP, temp, etc.) and medical (EMRs, EHRs, outcomes, etc.) data, and with the growing popularity of wearable sensor devices that collect and transmit clinical trial patient data continuously from remote locations, the research world is brimming with terabytes of information*. But what does one do with all this data? How can we sort through it to find the points that provide additional support for what is known about a drug's effect on a disease? Better still, how can it be mined to reveal breakthrough insights and new patterns relating the drug and the disease? These questions pave the way for a new research discipline: data science. In her paper, Michaela Jahn, Global Clinical Data Manager at F. Hoffmann-La Roche, defined it as the application of a team's diverse informatics and analytical capabilities to retrieve and analyze data to support drug project decision making and drug and platform development. Data science capabilities include bioinformatics, imaging informatics, biostatistics, data integration and visualization, text mining, and information science.

The Clinical Data Scientist

This all sets the stage for the role of the clinical data scientist. Driven by the changes in data management, this role can be seen as an evolutionary step forward for the clinical data manager. Where the data manager was task-driven and reactive, organizing and readying data for analysis, the data scientist takes a more active discovery role, looking for new anomalies and patterns in the clinical trial data that may suggest additional paths to insight. Additionally, the skills of the data scientist could prove critical for the clinical trial team.
His/her discerning data discovery capabilities could help the team not only identify new data paths to explore, but also steer away from unproductive data directions that waste cost and time. In her paper, Ms. Jahn defines the clinical data scientist as one who requires comprehensive knowledge of all areas of data management and data delivery, an understanding of protocols, the ability to interpret clinical study data, and knowledge of the technologies needed for clinical studies from start-up to completion. She also recommends that the scientist be viewed as an equal partner on the study team and have the following attributes:

• Have a clear understanding of protocols, their structure, and primary and secondary endpoints (the accurate collection and extraction of data)
• Own oversight of study milestones and what is needed for data delivery
• Conduct data risk assessments (what data should be cleaned for a certain therapeutic area and what data can be left as is)
• Understand the basics of statistics and programming
• Support international standards
• Understand the basics of the disease area
• Oversee external service providers
• Adapt to new technologies
• Help clinical scientists understand the data modeling and explore the data

The paper goes on to identify how the clinical data manager can become a clinical data scientist by evolving his/her skills in these areas. Oracle Health Sciences solutions can help data managers grow into data scientists by eliminating the need to focus on clinical trial software infrastructure, automating daily intake tasks, and allowing budding data scientists to focus on using the solutions as tools for new and advanced data insights. For instance, Oracle Big Data Discovery can turn massive amounts of raw, structured and unstructured data from systems like Hadoop into new insight in minutes.
Oracle Health Sciences Data Management Workbench (DMW) provides an end-to-end clinical data management solution for data cleansing, integration, and analysis. Oracle Health Sciences Clinical Development Analytics offers fast, fact-based insight into clinical programs for business decisions, increased R&D productivity, and optimized drug development efficiency. Oracle Health Sciences Cohort Explorer enables the clinical trial team to analyze and identify clinical cohorts self-sufficiently. Oracle Health Sciences Empirica Study detects potential problems early in the pre-marketing clinical stage, enabling clinical R&D professionals to gain deeper insight into the safety profile of a drug in development. Oracle Healthcare Foundation offers actionable analytics for population health, precision medicine, and value-based care. Glassdoor lists the data scientist as one of the 25 best jobs in America in 2016, though a recent CIO.com article notes that not only is there a lack of qualified talent for this emerging role, but companies hiring data scientists are still grappling with the most effective ways to utilize their skills. The amount of data collected in clinical trials will only grow larger. It will fall to the clinical data scientist to see new patterns and find new relationships in this data that can eventually save more lives and achieve better patient outcomes. Finally, it would be interesting to combine the clinical data manager's deep experience, as he/she evolves into a data scientist, with the fresh, creative, though less experienced, perspectives of millennials coming into the clinical trial industry. Where experienced data professionals have deeper insight into data exploration, millennial scientists might bring unexpected attitudes toward new data sources and combinations. Perhaps together these groups can optimize R&D data discovery even further.
*The human genome typically comprises a few gigabytes of data per person. Also, simply taking blood pressure (BP) three times a day in a two-year, 500-subject clinical trial yields about two million data points: BP is two data points (systolic, diastolic), so 2 x 3 per day x 365 days x 2 years x 500 patients = about 2 million.

James Streeter is Global Vice President, Life Sciences Product Strategy for Oracle Health Sciences.
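The footnote's back-of-the-envelope estimate is easy to verify in a few lines of Python (a toy check, not part of the original post):

```python
# Blood pressure contributes 2 values per reading (systolic, diastolic),
# taken 3 times a day over a two-year, 500-subject trial.
values_per_reading = 2
readings_per_day = 3
days = 365 * 2
patients = 500

total = values_per_reading * readings_per_day * days * patients
print(total)  # 2190000 -- roughly the "2 million data points" cited above
```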


Life Sciences

Real World Data vs. Real World Evidence

Now that advanced cloud technologies enable the collection, storage, and analysis of petabytes of information, pharmaceutical and biotech companies often use this information, in the form of real world data (RWD) and real world evidence (RWE), for a wide variety of purposes including health economics and outcomes research (HEOR), pricing, unmet needs, discovery/pre-clinical processes, and clinical R&D. This post, though, focuses on using RWD/RWE in clinical R&D, and how these data can provide additional sources of proof for the safety, efficacy, and value of new drugs and therapies.

Real World Data

In clinical R&D, though the terms real world data and real world evidence are often used interchangeably, they are not the same. In a recent article, Accenture's Jeff Elton explains that real world data is information gathered "…from myriad of sources -- the current standard of care, gaps and deficiencies in the care model, and patient reported outcomes -- that when linked together provide a view of a patient's health history that can be acted upon using insights from advanced analytics."

Real World Evidence

Conversely, real world evidence, which Dr. Elton describes as "a product of analyzed real-world data" and which can be generally recognized as "key conclusions that could be derived from [among other real world data sources] published studies in peer-reviewed journals," can offer new, actionable insights. Dr. Elton goes on to say that "real world evidence involves using the growing wealth of real-world data, increasingly at the population level, to generate meaningful insights." For instance, groups within the American Society of Clinical Oncologists and the National Comprehensive Cancer Network use published studies to develop their recommendations for new treatment guidelines. Dr. Elton points out that advanced analytics, using machine learning, can uncover new real world evidence (in the form of new data relationships and patterns) from real world data, and this can influence patient treatment planning. Real world data analytics output can also influence changes in clinical trial design: "how medical affairs experts may identify the 'long responders' to specific treatment approaches, and how commercial organizations evaluate the effectiveness of patient services programs." Pharmaceutical companies, too, can use real world evidence to uncover new treatment directions for specific conditions, establish follow-on research planning, develop value dossiers, and inform medical communications. The Patient-Centered Outcomes Research Institute is conducting innovative real world research studies testing therapeutic effectiveness "in a broad routine clinical practice." "Real-world evidence," Elton states, "provides the validation between the results seen in regulatory clinical studies that initially supported approval, and the post-approval validation process, which showed that consistent or improving benefit was being realized." Real world evidence can also give health insurers a means of assessing patient use and reimbursement charges.

Real World Data, Real World Evidence, and Oracle

Oracle supports the evolving areas of real world data and real world evidence, not only for clinical R&D, but also for the variety of health and research areas mentioned in the first paragraph. Keeping the focus on this post's clinical R&D angle, Oracle solutions can gather and transmit real-world data from a patient participating in a research study from the comfort of his or her own home. As a clinical trial participant, the patient wears a connected mHealth sensor with a unique identifier.
The device remotely and continuously collects real-world patient data, such as blood pressure and blood glucose levels, then sends the information, via Bluetooth, to the patient's mobile device. From there, the data is routed through Oracle IoT Cloud Service, an enterprise-class, highly secure, scalable cloud repository that aggregates, summarizes, and disseminates the targeted data into Oracle Health Sciences InForm or Oracle Health Sciences Data Management Workbench (DMW) to ready it for analysis. The data is then combined with other patient data collected for the clinical trial and continues on to biostatistics efficacy and safety analysis. Ultimately, this data can become real world evidence submitted for regulatory approval of a new drug or therapy. Oracle's Real World Data Analytics Platform for Life Sciences, which includes DMW, Oracle Life Sciences Hub, and Oracle Health Foundation, and optionally Oracle Big Data Discovery, Oracle Big Data SQL, and Oracle Big Data Appliance, can analyze the data to provide new insights into clinical trial progress, adverse events, and study outcomes. Together they can add clinical trial results support for the efficacy, safety, and value of new drugs and therapies, ultimately saving more lives. Oracle Health Sciences is always looking ahead to support the important data trends that drive innovation and advancement for today's life sciences industry. Subsequent posts will describe how Oracle supports RWD/RWE in additional areas for biotech and pharmaceutical organizations. Want to learn more about Oracle's RWD/RWE solutions? Contact: healthsciences_ww_grp@oracle.com

Greg Jones is an Enterprise Strategy Architect with Oracle Health Sciences.


Life Sciences

Data Quality: A Critical Factor in Risk Assessment

Risk is an important factor to consider when designing and conducting a clinical trial. But in a clinical trial, what is risk, really? It's all about the data and the quality of that data. What is being measured? Are data sources validated? How many sites are involved? Are they all measuring the same things? How accurate is the data? On what was it measured? How often was it measured? Today, collecting and sharing clinical trial data is easier than ever with the aid of cloud-based technologies. But there is still no guarantee that the resulting data metrics from all systems can identify risk in a given trial or help researchers make decisions about protocol changes or monitoring of a conditional event. To do so, a recent Applied Clinical Trials article contends, there need to be cross-industry, standardized metrics tools that track data quality performance and identify risk factors. Over 15 years ago, the Tufts Center for the Study of Drug Development (CSDD) first emphasized the importance of using standardized performance metrics for clinical data quality (and therefore risk). In 2015, the Metrics Champion Consortium (MCC) defined some of the benefits that can be derived from these kinds of tools:

· Establishing clear, consistent performance expectations for internal and external operations
· Facilitating adoption of best practices across sponsors and service providers
· Ensuring consistent measures, which reduces the garbage-in, garbage-out problem
· Avoiding the cost of custom programming
· Supporting comparison of performance across all studies within an organization, including across multiple vendors
· Decreasing time spent trying to understand what is being measured and focusing on achieving meaningful process improvement

Oracle Health Sciences, Data Quality, and Risk Assessment

Additionally, Oracle uses its CTMS system for centralized, risk-based monitoring issue management and tracking.
As risks are identified within a clinical trial via review of patient or site data in any of the Oracle Health Sciences products (including Oracle Argus Safety, Oracle InForm, and Oracle CDA) or by a partner solution (such as CluePoints CSM), the Oracle CTMS solution provides a centralized location to track and record the risk, the mitigation action for the risk, and the result. This data is then available in CDA to provide a consolidated report of risks, actions, and resolutions for submission with the study to regulatory agencies. The MCC also created standard definitions for common data elements from site activation to database lock. Additionally, specific to its Risk Based Monitoring initiative, TransCelerate has defined standardized tools for assessing and monitoring risks. The Oracle Health Sciences team, understanding the importance of data quality, endorses standardized tools and metrics, and supports standardization by building accepted, industry-standard metric tools into its clinical trial solutions. These tools help clinical researchers identify risks and pinpoint trial events that require further tracking or intervention. Oracle has moved TransCelerate's* Risk Assessment Categorization Tool (RACT) into Oracle's Siebel Clinical Trial Management System (CTMS). Taking a holistic view of how risk can affect the entire trial lifecycle in any functional area of the study, it helps study planners identify potential study data factors that will require risk management from trial planning through analytics. Through the data results, Oracle's CTMS solution can identify risk areas for a drug and adjust for them, based on the results per trial. It can then roll up those results and provide deeper insight on full trial risk at the program level. Oracle has also incorporated TransCelerate's Key Risk Indicators into out-of-the-box dashboards in its Oracle Health Sciences Clinical Data Analytics (CDA) solution.
With this tool in CDA, study teams can set up thresholds to gauge the status of each indicator at the study or site level and determine appropriate actions based on the Key Risk Indicator, as defined in their monitoring or quality plans. Standardization of data and reporting capabilities will become even more critical as more clinical trials adopt a risk-based monitoring approach. Standard processes and data reporting capabilities will allow companies to assess their risk-based monitoring approaches and actions to ensure effectiveness. Read more on Oracle's view of source data validation and risk assessment in the white paper Beyond SDV: Enabling Holistic, Strategic Risk-Based Monitoring. More questions? Contact: healthsciences_ww_grp@oracle.com

*TransCelerate BioPharma Inc. is a nonprofit organization with a mission to collaborate across the biopharmaceutical research and development community to identify, prioritize, design, and facilitate the implementation of solutions to drive efficient, effective, and high-quality delivery.
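To make the threshold idea concrete, here is a minimal sketch in Python of the kind of status check a monitoring plan might define. The indicator names and the amber/red threshold values below are hypothetical illustrations, not taken from CDA or from TransCelerate's actual KRI definitions:

```python
# Hypothetical Key Risk Indicator thresholds: (amber_at, red_at) bands.
# Values below the amber threshold are "green" (no action needed).
KRI_THRESHOLDS = {
    "query_rate_per_subject": (5.0, 10.0),   # illustrative only
    "screen_failure_rate":    (0.30, 0.50),  # illustrative only
}

def kri_status(indicator: str, observed: float) -> str:
    """Classify an observed KRI value against its amber/red thresholds."""
    amber, red = KRI_THRESHOLDS[indicator]
    if observed >= red:
        return "red"      # escalate per the monitoring plan
    if observed >= amber:
        return "amber"    # flag for review
    return "green"

print(kri_status("query_rate_per_subject", 7.2))   # amber
print(kri_status("screen_failure_rate", 0.55))     # red
```

A real dashboard would, of course, evaluate these per site and per study against thresholds agreed in the quality plan; the sketch only shows the banding logic.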


Life Sciences

Application of Artificial Intelligence for Clinical Development

The application of artificial intelligence (AI) is becoming ubiquitous in our daily lives. Throughout the course of a typical day, one uses a variety of applications and devices that automatically understand what is spoken and provide near real-time feedback to support decision-making at an unprecedented scale. The question being asked now is: how can machine learning (ML), one of a number of AI techniques, be used in the clinical development space, and does it hold value for accelerating development timelines and/or reducing development costs?

About Machine Learning and Natural Language Processing

ML encompasses a variety of algorithmic techniques that can be used to identify and infer patterns to support enhanced or automated decision making. Natural Language Processing (NLP) is one such technique in the ML space. Using NLP, an application can 'read' scientific text and infer its semantic context, so that a human can search and find information more easily. See the Supervised ML Figure 1 below. In the clinical context this could include:

· Given a patient's historical health data, predicting the propensity of a disease
· Given historical data about clinical trial locations, predicting their risk profile
· Given a set of documents, building document clusters so that documents about the same topic are in the same cluster
· Finding all the groups of patients that are similar to each other

It is clear from the examples above that machine learning has the potential to enhance decision making significantly throughout clinical development and beyond. The principal benefit of ML and NLP is that they can take over analysis work currently performed by humans and can be scaled up as the volume and variety of data grows.

What's Behind the Magic?
ML uses an algorithmic approach, taking structured and/or unstructured historical data through a mathematically driven process to generate a model that can recognize patterns and contextual meaning. This process takes as its inputs:

· Training data sets used to train the ML algorithm, and much larger input datasets used for subsequent analysis
· Standard medical dictionaries that provide reference word or term definitions
· Ontologies or textual annotations that describe relationships between terms
· A probability-driven mathematical model that can be "trained" for the types of input datasets to be processed (e.g., scientific literature vs. Twitter feeds, each of which has very different contextual, semantic, and linguistic characteristics)

Using these inputs, the ML algorithm is trained to read the training datasets and extract relationships between terms. A human can then verify the accuracy of the relationships and fine-tune the algorithm until it reaches a comparable level of accuracy. At the end of the training process, the larger input datasets are processed by the trained algorithm to extract new relationships, building a bigger picture and better understanding of the area of interest. A significant advantage of ML is the ability to scale: once the algorithm has been trained, it can be applied across 'big data' data sets.

How Is Oracle Health Sciences Using Machine Learning?

Oracle Health Sciences (OHS) has made ML part of its clinical platform technology to provide new and innovative business capabilities to the industry. Within Oracle there are world-class ML experts who, between them, have decades of experience implementing AI-based solutions across a variety of industries. OHS is actively collaborating with Oracle Labs, in particular the Information Retrieval and Machine Learning Group (IRMLG).
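The train-then-apply loop described under "What's Behind the Magic?" can be illustrated with a deliberately tiny, self-contained sketch. The example texts, labels, and the bag-of-words scoring below are invented for illustration; this is not an Oracle or IRMLG implementation:

```python
from collections import Counter

# Labeled training data: a small set used to "train" the model.
TRAINING = [
    ("patient reported headache and nausea after dose", "adverse_event"),
    ("subject experienced rash following second infusion", "adverse_event"),
    ("site initiation visit completed on schedule", "operations"),
    ("monitoring visit report filed for site 12", "operations"),
]

def train(examples):
    """Count word frequencies per label -- a minimal bag-of-words model."""
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.split())
    return model

def classify(model, text):
    """Score each label by overlap with its training vocabulary."""
    words = text.split()
    return max(model, key=lambda lab: sum(model[lab][w] for w in words))

# Apply the trained model to new, unlabeled text.
model = train(TRAINING)
print(classify(model, "patient developed nausea and rash"))  # adverse_event
```

Real NLP systems replace the word counts with statistical language models, dictionaries, and ontologies, and add the human verification step described above, but the shape of the loop (train, verify, then apply at scale) is the same.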
As part of this collaboration, and to illustrate the utility of AI, Oracle has developed a safety solution that uses ML to identify new adverse drug events from a variety of data streams. This solution combines OHS products with the expertise of the IRMLG to provide an intelligent, automated case intake process.

Fast Forward to the Future

So what does the future hold for AI applications supporting drug development? For the first time, there is a nexus among the widespread availability and accessibility of data, the availability of AI tools, and the very low cost of computation, all driving the development of AI techniques to augment and support what are currently highly human-centric activities. Within the last few years, we have moved from typing to dictation on our mobile devices without a second thought, a shift driven primarily by ML techniques applied to big data. Looking forward, the explosive growth of data science, which capitalizes on AI technology, is set to deliver new capabilities that will transform how we use and learn from data. For example, it is quite plausible that clinical development will benefit from AI in identifying new drug candidates from scientific literature, in optimizing clinical study execution by predicting study performance, and in assisting with post-study statistical analysis. Srinivas Karri is Clinical Warehousing Cloud Strategy Director for Oracle Health Sciences. Special thanks to Pallika (pallika.kanani@oracle.com) and John (john.k.mclaughlin@oracle.com) for their contributions to this article.


Health Sciences

Clinical Research as a Care Option

Today the trend in healthcare is to provide the patient with as much value as possible. Though value comes in many forms, an important one that is often given less priority than others is patient engagement: enabling the patient to take a more active role in the treatment of his or her condition. One study by the UK's Centre for Health Policy focusing on patient engagement sought to shift the clinical paradigm from determining "what is the matter?" to "what matters to the patient?" Researchers found that increased patient engagement improved health outcomes and reduced costs, while also aiding insights into the data around the condition in question. One avenue for increased patient engagement is giving patients the information that will enable them to participate in a clinical drug or therapy trial as a care option targeting their condition. A recent survey conducted by Eli Lilly, Wilmington Health, PMG Research, Quintiles, Pfizer, and Harvard asked patients about their participation in an ongoing, four-year clinical trial on diabetes. "The survey results consistently demonstrated a high and increasing level of patient satisfaction and engagement with clinical research, including satisfaction with access to care, efficiencies in care delivery, and the quality of care provided by the research staff," said Katherine Vandebelt, Global Head of Clinical Innovation, Eli Lilly and Company, in a recent bylined article. The survey reported additional patient comments, including: "It [clinical trial participation] made me much more motivated to work on my diabetes." and "Study participation has allowed me to manage my diabetes better than I ever have before." Other survey findings were just as glowing. One hundred percent (100%) of participants thought that participating in clinical research reduced their overall cost of healthcare and improved their interest and involvement in their overall healthcare.
Ninety-five percent (95%) thought that their participation had improved their overall quality of care. (Quintiles: Clinical research participation as a care option. 2015. [Whitepaper]) The irony of these results is that less than one percent (1%) of Americans participates in clinical trials. Yet a whopping 72 percent (72%) say they would if their physician recommended it. However, it is difficult for a patient to find a trial that fits his/her personal condition, let alone have a discussion with his/her physician on what is best for his/her specific case. Vandebelt advocates considering clinical trials as a medical care option because they have the potential to improve overall health outcomes. She goes on to explain that, unfortunately, today clinical trials are treated as something separate from care. She says that needs to change. She continues: "…This paradigm shift [to regard the clinical trial as a commonplace care option] will arise only from collaboration among patients, healthcare providers, health care systems, drug developers, and policy makers to realign clinical trials to center around the needs of the patient, not just around the collection of clinical data. When the model is structured to reward patient-centricity, new opportunities emerge to reinvent the value proposition for clinical trials, and clinical research can be more readily integrated into the overall continuum of care." Oracle endorses patient engagement and centricity, including acknowledgement of the patient as a very important source of real world data in clinical trials and post-trial marketing research, via its mHealth initiative. This effort enables patients to participate in clinical trials and supply their real world data from the comfort of their homes, via wearable sensors sending data to remote devices, which upload the information to the Oracle cloud and then into Oracle Health Sciences solutions.
According to the Quintiles white paper on the aforementioned survey, patients are more engaged when they can participate in managing their personal health, which not only leads to lower total cost of care and better outcomes, but also to improved patient satisfaction. James Streeter is Global Vice President Life Sciences Product Strategy for Oracle Health Sciences.


Life Sciences

From Big Data to Smart Data

Over the last few years we have all been inundated with the concept of Big Data. There are 6,000 tweets on Twitter every second. Facebook stores 30 petabytes of user data. A Boeing Dreamliner generates one terabyte per flight. A Formula One car generates three terabytes in one race. Now we know why it's called "big data". In Health Sciences, big data comes in a number of forms, such as the human genome (typically a few gigabytes per person, sequenced today at a cost of $1K vs. $10M ten years ago), vast amounts of epidemiology data (claims, safety, post-marketing data), and sensor/device data. Simply taking blood pressure three times per day in a two-year, 500-subject clinical trial yields roughly two million data points (three readings a day, over about 730 days, for 500 subjects is more than a million readings, each carrying both a systolic and a diastolic value). This is a lot of data. It could really change the way we perform research, how new therapies are discovered, and how patients are treated. However, it's also a lot of stuff. You know, that 'lot of stuff': what's in your cupboards, garages, and houses that you never, ever look at and typically have no idea what it is. You just know it's taking up space and gathering dust. So it's better to use the term Smart Data. While this is not a new term, it is becoming more widely used. It helps users move past the overhyped label "Big Data" and focus on the opportunity the data presents. The key to using this data is understanding how to store, explore, aggregate, analyze, present, and act on it. To have meaning, and hence to be "smart," data needs to have context. It needs to be correlated with something. There is no point in collecting heart rate every second of a two-year clinical trial if the heart rate data cannot be aligned to the exact date and time the investigational drug was taken. When that analysis is done, it may very quickly show that Caucasian male subjects, aged 35-45, show a 30%-60% increase in heart rate (HR) four to eight minutes after a dose, as compared to females of the same profile.
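The kind of alignment described above can be sketched with pandas: join each heart-rate reading to the subject's dose timestamp, then examine the window of interest. All subject IDs, column names, and values here are invented for illustration.

```python
# A hypothetical sketch of aligning sensor readings with dosing events.
# Subjects, timestamps, and heart rates are invented example data.
import pandas as pd

# Continuous heart-rate readings and discrete dosing events per subject.
hr = pd.DataFrame({
    "subject": ["S1", "S1", "S2", "S2"],
    "time": pd.to_datetime(["2024-01-01 08:05", "2024-01-01 09:00",
                            "2024-01-01 08:06", "2024-01-01 09:00"]),
    "heart_rate": [96, 70, 92, 71],
})
doses = pd.DataFrame({
    "subject": ["S1", "S2"],
    "dose_time": pd.to_datetime(["2024-01-01 08:00", "2024-01-01 08:00"]),
})

# Align each reading to the subject's dose, then keep the 4-8 minute window.
merged = hr.merge(doses, on="subject")
merged["mins_after_dose"] = (
    (merged["time"] - merged["dose_time"]).dt.total_seconds() / 60
)
window = merged[merged["mins_after_dose"].between(4, 8)]
print(window.groupby("subject")["heart_rate"].mean())
```

The same join, extended with demographic or genomic columns, is what lets an analyst compare cohorts (e.g., by sex, age band, or genetic marker) rather than raw, uncontextualized readings.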
By further combining this physiological data with genomic data, it may become apparent that a further subset (cohort) of subjects with a particular genetic marker show a 70%+ HR increase, and hence, a safety issue for one cohort may be found.  We are now sailing, full speed ahead, into a new world of combining clinical, genomic, and healthcare data in ways that we could not have previously imagined. These new combining capabilities allow us to extract meaning from big data and turn it into smart data that can drive new therapies and advance clinical care. This is getting exciting!


Health Sciences

When Good Data Goes Bad by Srinivas Karri

Recently, the FDA published an article describing a notification it had issued to a contract research organization (CRO). The notification required certain bioequivalence studies to be repeated because of improper data collection and analysis processes. Bioequivalence studies, often conducted to bring generic drugs to market, establish that the generic drug has the same 'effect' as the original drug. Obviously, these are critical trials, as they also establish the safety profile of the generic compound. The consequences of the notification have been quite substantial. It has been recommended that a number of generic drugs be removed from the market until accurate data is collected by repeating these trials, incurring significant additional expense and impacting sales. According to an article in PulseToday.com, "European Medicines Agency (EMA) advisors said bioequivalence studies carried out on the drugs at Semler Research Centre in Bangalore were 'flawed' and 'cannot be relied on...The EMA advisors concluded 'the studies conducted at Semler cannot be accepted in marketing authorization applications in the European Union' and, therefore, 'no medicines can be approved on the basis of these studies'." So what went wrong? Over the last decade, organizations have had to become increasingly stringent about how they collect and manage clinical trial data. This has primarily been driven by the 21 CFR Part 11, GCP, and GAMP regulations, which describe methods and processes that must be put in place to ensure that only authorized individuals have access to trial data and that adequate controls exist to prevent modification of data. At this CRO, which used spreadsheets to manage laboratory data, those controls were clearly insufficient. There was clear evidence that, by using spreadsheets to store and manage lab data, the CRO in question had manipulated data to create false results.
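One classic way to make undetected modification impossible in principle is a hash-chained audit trail, where each record's hash covers the previous record's hash. The toy sketch below illustrates that general idea only; it is not a description of how any Oracle product implements its controls, and the lab records are invented.

```python
# A toy, tamper-evident audit trail: each appended record is hashed
# together with the previous record's hash, so any in-place edit
# breaks the chain and is detected on verification.
# Illustrative only; record contents are hypothetical.
import hashlib
import json

def append_record(log: list, data: dict) -> None:
    """Append a record whose hash chains to the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    log.append({"data": data, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute every hash; any silent edit breaks the chain."""
    prev_hash = "0" * 64
    for rec in log:
        payload = json.dumps({"data": rec["data"], "prev": prev_hash},
                             sort_keys=True)
        if (rec["prev"] != prev_hash or
                rec["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = rec["hash"]
    return True

log = []
append_record(log, {"subject": "S1", "lab_value": 4.2})
append_record(log, {"subject": "S2", "lab_value": 3.9})
print(verify(log))          # True: chain is intact
log[0]["data"]["lab_value"] = 9.9
print(verify(log))          # False: the silent edit is detected
```

A plain spreadsheet offers nothing comparable: cells can be edited in place with no trace, which is exactly the gap the regulations cited above are meant to close.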
Fixing the Problem

Looking at the announcements and recommendations, the CRO should have deployed a data collection and management platform that would ensure data could not be improperly manipulated. More specifically, the data should have been collected securely while still allowing for downstream analysis and processing. In this regard, Oracle Health Sciences Data Management Workbench (DMW) would have been ideal to support the CRO, and perhaps should have been considered when building out its data management strategy. If you'd like a further description of our capabilities to support CROs and sponsors in building a data management platform, see the attached set of capabilities and value statements for Oracle DMW. Srinivas Karri is Clinical Warehousing Cloud Strategy Director for Oracle Health Sciences.


Life Sciences

Oh, That Looks Cool! – Riding the mHealth Wave

Innovation comes in many shapes and sizes, from a unique user experience to game-changing, disruptive, enterprise process changes. Innovation is something that's great to experience and deliver. It's often detected by an initial emotional response, typically triggered by some visual stimulus (or claim), which is somehow instantaneously converted into a real or perceived benefit. Conversely, innovation can become quickly blurred by hype, distracting us from the actual, realizable benefit. mHealth is a classic case of disruptive innovation in action. Thousands of vendors are active in this space, from dot-com startups to mega-corps such as Apple, IBM, and Google. This is a clear indicator that 'something is happening'. But such huge market fragmentation is indicative of market confusion and ongoing pilot-itis, and a precursor to mass vendor consolidation in the future. The promise of mHealth offers us all huge potential. The ability to capture real world data directly from patients will be transformative. The ability to measure patient heart activity continuously via a sensor, analyze it in stream, and enable a physician to consult remotely is a significant change from today's periodic, point-in-time model. Applied to clinical trials, capturing this data can lead to reduced site visits through remote consultations, increased compliance with the drug studied, and, most importantly, the collation of real world evidence to prove real outcomes. Geoffrey Moore's marketing classic Crossing the Chasm illustrates the life cycle of new products. Once you have crossed the chasm, you have a stable market that knows what it wants. Right now mHealth is certainly in the Early Adopter phase, and is likely to be there for the next couple of years at least. But its potential "post chasm" is significant. From a strategy perspective, it is clear that the pharmaceutical market is starting to emerge from a few limited-scope mHealth pilots to challenge regulatory barriers.
Typically, pharmas, having tried a few devices and mobile apps, have seen both clear and fuzzy benefits. Now they are looking to scale. Oracle Health Sciences' strategy is to build an enabling platform: Just Choose and Use. Pharmas should not be dependent on a plethora of disconnected vendors, such as managemydiabetes.com or neverfeelwheeszyagain.org, with associated proprietary apps and devices and multi-million dollar price points. For a clinical trial program lead, it is critical simply to identify a desired device, plug it into a platform, and start a clinical trial with the comfort that data can be acquired, explored, and exploited downstream. Our strategy brings together the best of Oracle's latest core-tech PaaS services (such as Oracle Internet of Things Cloud Service), combined with our best apps (Oracle Health Sciences InForm and Oracle Health Sciences Data Management Workbench), as well as our Big Data stack, to support a complete, end-to-end process starting from patient device data acquisition.



Balanced Incentives and Healthcare Reimbursement

Healthcare payment reform is shifting again. Moving at the rate of tectonic plates over the last 60 years, there have been several milestones that utterly transformed the face of reimbursement: Diagnosis Related Group (DRG) reimbursement, managed care capitation, and now the Medicare Access & CHIP Reauthorization Act (MACRA). MACRA will be a tsunami for Medicare reimbursement. With the Alternative Payment Model (APM) or the Merit-based Incentive Payment System (MIPS), reimbursement will be a synthesis of quality, technology, practice improvement, and cost reduction.

What Will MACRA Accomplish?

As with other historical reimbursement transformations, when the Centers for Medicare & Medicaid Services (CMS) changes, commercial payers follow. For MIPS alone, CMS will tie 30 percent (30%) of its payments to performance outcomes by 2016 and 50 percent (50%) by 2018, and commercial payers are targeting 75 percent (75%) of their payments by 2020. "Outcomes" involve a combination of clinical quality, resource use ("cost"), health IT meaningful use, and clinical practice improvement activities (CPIA). MIPS is the balanced approach. Currently, 50 percent (50%) of the performance weighting is based on quality and PQRS reporting. By 2021, the balance will be 30 percent (30%) quality, 30 percent (30%) cost reduction, 25 percent (25%) technology usage, and 15 percent (15%) practice improvement. The goal is to ensure that participating providers work collaboratively to transform the delivery of healthcare. CMS is using payment disincentives for those providers not able to meet these outcomes. At the macro level, CMS plans to have a net neutral impact on incentives across all Medicare providers; some will receive bonuses, some will not. Technology should always serve the healthcare provider at the analytical level. MACRA is an organizational, multi-domain approach to transforming the reimbursement of care provided to those eligible for Medicare.
Through these process changes, the transformation extends to commercial payers and all patients.

How Technology and MACRA Will Transform the Industry

Successful delivery of the MIPS activities above will require a synthesis of people, process, and technology: people to deliver the care, analyze patterns, and establish quality improvement, care coordination, and open access for those in the community, with technology as the critical infrastructure underpinning both people and process. Actionable analytics and precise measurement to support reimbursement and margin analytics can only be attained if the full 360-degree view of the patient is available, consumed, de-duplicated, aggregated, and delivered with the minimum amount of latency to administrators, executives, and care providers. Technology is necessary to support the business, with the right balance of infrastructure and "at the glass" costing and margin analytics, intersected with clinical and quality outcomes. Oracle and its partners are recognized for the right blend of technology and service offerings that deliver world-class data aggregation and top-in-KLAS population health solutions for MACRA, delivery reform, and value-based purchasing.

Oracle Healthcare's Offering

Oracle Health Sciences has a staff of over 2,500 professionals, including physicians, PhDs, nurses, and clinical informaticists, who guide the technical development and product offerings for the healthcare market. Its Oracle Healthcare Foundation (OHF) is a unified healthcare analytics platform for data integration and warehousing, providing clinical, financial, administrative, and omics modules. Building upon OHF, healthcare organizations can deploy pre-built business intelligence, analytics, data mining, and performance management applications from Oracle and its partners.
These organizations can also leverage OHF's out-of-the-box, self-service analytics tools to build customized analytics applications. Currently, OHF is implemented in such prestigious health systems as the Los Angeles County Health Department, Penn Medicine, and Adventist Health System, with 43 locations and hundreds of individual hospitals participating.

Oracle Healthcare Partners

Oracle's HSGBU has an active and expanding partner ecosystem that encourages pre-integration with top-in-KLAS population health vendors and Big 4 healthcare advisory service providers. The partner ecosystem mirrors the population health market to fulfill current and emerging business needs, covering the full range of population health, evidence-based medicine, risk modeling, ACOs, alternative reimbursement models, and value-based contracting models (including MACRA). The following are two of the HSGBU's partners that offer knowledge leadership and proven success in the MACRA and delivery reform space:

Deloitte

Deloitte is a Diamond-level partner and has been awarded Oracle's highest honors for six years in a row. Additionally, Deloitte is the largest healthcare consultancy in the marketplace and a national and global leader in serving complex, multifaceted organizations. Analysts, including IDC, Gartner, and Forrester, have ranked Deloitte as a leader in EPM and finance transformation. Oracle and Deloitte have developed a "Triple Aim in a Box" solution in response to MACRA reform. Triple Aim in a Box combines Oracle's packaged software applications, providing a flexible, scalable platform and leading-practice accelerators, with Deloitte's implementation and integration services. Together, these apps and services enable clients to customize and configure a healthcare solution for their specific needs.
This solution has a "start anywhere" approach that gives healthcare organizations the flexibility to advance their analytic capabilities across all three Triple Aim pillars, while prioritizing the components that are critical to them.

SCIO Health Analytics – Medicare Claim Analytics

SCIO Health Analytics is one of the fastest-growing technology companies in the US, having featured for four consecutive years on the Inc. 5000 list and twice on the Deloitte Technology Fast 500 list. In 2015, SCIO Health Analytics® was ranked as a Major Player in IDC Health Insights' 2015 Payer Analytics MarketScape report. SCIO has been recognized by CMS with several CMS innovation awards focused on Medicaid members. Recently, SCIO was awarded the CMS Centennial Award for its relationship with the State of New Mexico. Learn more via our on-demand webcast, MACRA, Analytics, and the Move from Volume to Value.
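The 2021 MIPS weighting described earlier (30% quality, 30% cost reduction, 25% technology usage, 15% practice improvement) amounts to a simple weighted sum. The sketch below is purely illustrative arithmetic with invented provider scores, not CMS's actual scoring methodology.

```python
# Illustrative composite scoring under the 2021 MIPS category weights
# cited in the article. The per-category provider scores are invented.
WEIGHTS = {
    "quality": 0.30,
    "cost_reduction": 0.30,
    "technology_usage": 0.25,
    "practice_improvement": 0.15,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of per-category performance scores (each 0-100)."""
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

example = {
    "quality": 90,
    "cost_reduction": 70,
    "technology_usage": 80,
    "practice_improvement": 60,
}
# 0.30*90 + 0.30*70 + 0.25*80 + 0.15*60 = 27 + 21 + 20 + 9 = 77
print(round(composite_score(example), 2))
```

Because the weights sum to 1.0, the composite stays on the same 0-100 scale as the inputs, which is what lets CMS compare providers and apply bonuses or disincentives around a threshold.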



MACRA is a “Game Changer”!

The new Medicare Access and CHIP Reauthorization Act (MACRA) Medicare payment law will change the healthcare payment delivery system for clinicians, providers, and plans in a very fundamental way. It will also drive the future of healthcare delivery. Though recent technology advances have generated many new systems and a wide variety of ever-increasing amounts of data, availability of and access to that data have dropped at the same alarming rate. Consolidation within the healthcare industry has only exacerbated the problem by creating organizations that have multiple clinical, financial, and operational systems and as many reporting and analytic tools. This "perfect storm" has driven the use of analytics in the healthcare industry. Improving the U.S. health system requires pursuing three simultaneous aims: improving the experience of care, improving the health of populations, and reducing per capita costs. Analytics are the key to achieving this Triple Aim approach. Oracle and Deloitte have developed a "Triple Aim in a Box" solution in response to MACRA reform. Triple Aim in a Box combines Oracle's packaged software applications, providing a flexible, scalable platform and leading-practice accelerators, with Deloitte's implementation and integration services. Together, these apps and services enable clients to customize and configure a healthcare solution for their specific needs. This solution has a "start anywhere" approach that gives healthcare organizations the flexibility to advance their analytic capabilities across all three Triple Aim pillars, while prioritizing the components that are critical to them. Learn more via our on-demand webcast, MACRA, Analytics, and the Move from Volume to Value.


Life Sciences

Real World Data - A Key Factor for Pharmas and Payers by James Streeter

Today, real world data is becoming a key type of support not only for pharmaceutical companies but also for payer institutions. This kind of real world evidence enables both pharmas and payers to understand how drug products are actually working for patients in the real world. For years, pharma companies have been buying data from many different sources to endorse the value of their drugs in the market. More recently, however, payers have been demanding that drug companies provide real world evidence and real world data to back up their product value claims and differentiate one drug from another (when both are competing in the market as a remedy for the same condition). Initially, to meet this payer demand, pharma companies collected real world data from sources including EMRs, mHealth/IoT devices, pharmacies, and many other health market environments. Many of these real world data types were fragmented, very difficult to link together, and inconsistent in format (containing structured or unstructured data). Many Oracle customers and partners have tried to collect and report on real world data from myriad sources, spending huge amounts of resources and time on the task. The structured data in EMR systems didn't always contain what was sought. Often, the sought-after data was located in an unstructured portion of an EMR system, which meant it had to be parsed out to be found. In addition, data among various patients and different EMR systems was inconsistent, had missing fields, and exhibited poor data quality due to lack of standardization. Although today the technology exists to gather, search, and report on data from many sources, there is an additional issue that greatly impacts our ability to move forward: EMR systems were not built to support pharma company use cases. They were designed to support billing for services and to maintain patient records.
The healthcare industry and the pharmaceutical industry must work together to change processes and develop standards in support of pharma company use cases. This can help provide the real world data that payers, patients, and regulatory agencies alike find so valuable. James Streeter is Global Vice President, Life Sciences Product Strategy, for Oracle Health Sciences.



The Promise of DSRIP – The New Healthcare Delivery Reform Program

Healthcare delivery reform is not new. The 1960s saw the creation of Medicare. The 1970s and 1980s saw an attempt at managed care. Since 2010, we have seen state-by-state adoption of the Delivery System Reform Incentive Payment Program (DSRIP). Texas (TX), California (CA), and New York (NY) are the big states overhauling their Medicaid delivery by infusing primary care, behavioral health, and pediatric services into high-risk communities to reduce avoidable hospitalizations and improve patient outcomes. Oracle and its partners are participating and investing in the NY DSRIP to implement the Medicaid Redesign Team (MRT) Waiver Amendment. According to the NY DSRIP FAQs, "DSRIP's purpose is to fundamentally restructure the health care delivery system by reinvesting in the Medicaid program, with the primary goal of reducing avoidable hospital use by 25% over 5 years. Up to $6.42 billion dollars are allocated to this program with payouts based upon achieving predefined results in system transformation, clinical management and population health". Avoidable hospital use is broad in application: it is not limited to readmissions, but can also include an initial admission or emergency room visit resulting from inadequate preventive and primary care. For New York, these four measures evaluate DSRIP's success in reducing avoidable hospital use by at least 25 percent (25%):

• Potentially Preventable Emergency Room Visits (PPVs)
• Potentially Preventable Readmissions (PPRs)
• Prevention Quality Indicators - Adult (PQIs)
• Prevention Quality Indicators - Pediatric (PDIs)

What DSRIP Will Accomplish

New York, Texas, and California have purposefully looked at the value equation of paying for healthcare. With DSRIP and other delivery reform initiatives, these states are leading the way in transforming a volume-based system into a system of integrated, outcomes-based care that improves quality of life through access to primary care services.
Healthcare is local, and transformation has to start with the care provider for the patient in his/her community. It must deliver high-quality, integrated primary, specialty, and behavioral services and reduce the unnecessary burden on emergent and acute services. There are many projects to engage in this delivery reform. DSRIP's Domains 2, 3, and 4 are typically viewed as the highest-value projects for most Performing Provider Systems to accomplish DSRIP in New York State.

- Domain 2: System Transformation Projects. Performing Provider Systems will meet the 2014 NCQA Level 3 standards for Meaningful Use and Patient-Centered Medical Homes by the end of Year 3 of DSRIP. This domain includes multi-discipline care, such as primary care, behavioral health, long term care, and community provider services.
- Domain 3: Clinical Improvement Projects. These include at least one behavioral health program.
- Domain 4: Population-wide Projects. These cannot duplicate Domain 3 and are based on the New York State Prevention Agenda.

Solving the Problem with Technology

DSRIP is a multidisciplinary approach to transforming care delivery for those at highest risk. Successful delivery of the domain projects above will be a synthesis of people, process, and technology. People will deliver the care, analyze patterns, and set a new process for the critical infrastructure of community-technology engagement. Actionable analytics and precise measurement can only be attained if the full 360-degree view of the citizen is available, consumed, de-duplicated, aggregated, and delivered with a minimum amount of latency to the care team, care coordinators, administrators, and executives. Technology is necessary to support the business with the right balance of infrastructure and population health analytics.
Oracle and its partners are recognized for the right blend of technology and service offerings that deliver world-class data aggregation and top-in-KLAS population health solutions for DSRIP, delivery reform, and value-based purchasing. Oracle Health Sciences has a staff of over 2,500 professionals, including physicians, PhDs, nurses, and clinical informaticists, who guide the technical development and product offerings for the healthcare market. Its Oracle Healthcare Foundation (OHF) is a unified healthcare analytics platform for data integration and warehousing, providing clinical, financial, administrative, and omics modules. Building upon OHF, healthcare organizations can deploy pre-built business intelligence, analytics, data mining, and performance management applications from Oracle and its partners. These organizations can also leverage OHF's out-of-the-box, self-service analytics tools to build customized analytics applications. Oracle's HSGBU has an active and expanding partner ecosystem that encourages pre-integration with top-in-KLAS population health vendors and Big 4 healthcare advisory service providers. The partner ecosystem mirrors the population health market and covers the full range of population health, evidence-based medicine, risk modeling, ACOs, alternative reimbursement models, and value-based contracting models (including the Medicare Access & CHIP Reauthorization Act, or MACRA). Following are some HSGBU partners with proven accomplishments in the DSRIP and delivery reform space: KPMG, a leading Big 4 organization, provides unrivalled experience and deep domain knowledge of the NY DSRIP program. A team of over 125 KPMG professionals has been deployed with the New York State Department of Health (NYSDOH) since June 2014 on DSRIP planning and implementation.
The team provides implementation strategies for value-based performance. It also develops data, analytics, and information management infrastructure, methods, and capacity. Forward Health Group is a leader in population health management. It provides trusted quality improvement technology created by the American Heart Association, the American Diabetes Association, and the American Cancer Society. FHG is also the highest-rated and highest-scoring vendor in KLAS' 2015 Population Health Management Report, winning out over IBM/Phytel, Health Catalyst, Epic, and the 20 other companies in the report. ENLI Health Intelligence's population health solution (Care Coordination & Care Management), used by health systems, ACOs, health plans, and physician practices, leverages data via OHF to improve quality scores and total cost of care. The solution is recognized by industry analysts, including KLAS, IDC Health Insights, Frost & Sullivan, and Chilmark, as a top-rated population health management tool. SpectraMedix offers real-time and predictive data analytics for improved care quality, optimized financial goals, and successful transitions to performance-based/risk-sharing payment models. It offers a platform for inpatient and ambulatory regulatory quality reporting, near-real-time care delivery analytics, a DSRIP program implementation solution, population health management tools, and predictive modeling and at-risk patient surveillance. To learn more about Oracle's data warehouse for healthcare organizations, Oracle Healthcare Foundation, click here.



Oracle’s Wanmei Ou, Ph.D. Named to HHS Health Information Technology Standards Committee

The U.S. Department of Health and Human Services (HHS) Secretary Sylvia M. Burwell has recently appointed Wanmei Ou, Ph.D., Director, Oracle's Product Strategy in Translational & Precision Medicine*, as a member of the Health Information Technology Standards Committee (HITSC). Helping to support and enable President Obama's Precision Medicine Initiative, Dr. Ou, along with other newly announced HITSC committee members, is charged with making recommendations on standards, implementation specifications, and certification criteria for the electronic exchange and use of health information. "At our first face-to-face meeting, I was very impressed with committee members' credentials and passion in relation to building the healthcare information exchange, improving care quality, and enhancing patient engagement. I look forward to contributing my expertise in precision medicine to optimize our population health services," said Dr. Ou. Created along with the HHS' Health Information Technology Policy Committee (HITPC), the HITSC reflects a broad range of stakeholders, including providers, ancillary healthcare workers, consumers, purchasers, health plans, technology vendors, researchers, relevant federal agencies, and individuals with technical expertise on healthcare quality, privacy and security, and the electronic exchange and use of health information. Together, these two committees offer the opportunity for stakeholders and the public to provide direct input to HHS on the implementation and use of health IT. Dr. Ou's background and expertise will surely prove an asset to the HITSC. ### * Wanmei Ou is Director in Translational and Precision Medicine, Oracle Health Sciences. She is a medical informatics leader with over 12 years of experience in business strategy, R&D, product management, and project implementation.
With in-depth technical knowledge of statistics, central nervous system medical imaging, patient stratification for cost-effectiveness research, and genomic analysis across several therapeutic areas, she works with Oracle customers to understand their requirements and transform them into development activities shaping the product functions. She also works closely with interoperability standards committees to ensure the final product can integrate easily with customers' other existing solutions. Wanmei obtained a Ph.D. from the Massachusetts Institute of Technology with a focus on biomedical imaging, during which she worked closely with clinicians at both Massachusetts General Hospital and Brigham and Women's Hospital. She also received a National Science Foundation fellowship during her graduate study. Her previous research experience includes Siemens Corporate Research, where she received a U.S. patent for her work on fast multimodal image registration, and a role as programmer analyst at the New York Blood Center focusing on population genetics.



Oracle’s Dr. Jonathan Sheldon Appointed to DIA Board of Directors

The Drug Information Association (DIA) has just appointed Jonathan Sheldon, Ph.D., Global Vice President, Healthcare, Oracle Health Sciences*, to its Board of Directors for a three-year term. Founded in 1964, DIA is an international, nonprofit, multidisciplinary association that fosters innovation for improved health and well-being worldwide. The association offers knowledge resources and forums on product development, regulatory science, and therapeutic innovation for professionals in the pharmaceutical, biotechnology, and medical device communities. Currently, fostering innovation for improved worldwide health is clearly focused on Precision Medicine initiatives -- both in the US and around the world. Precision Medicine requires sophisticated, shared, scalable, and secure molecular data analysis capabilities. Dr. Sheldon, with his expertise in translational medicine and personalized healthcare, was appointed to the DIA Board to share his insights on precision medicine analytics, healthcare innovation, and data knowledge-sharing and distribution on a global scale. "Today, Precision Medicine is the new watch-phrase in medical research and practice. Playing an ever-larger role in drug development and healthcare outcomes, it can help DIA to advance its vision of improved global health through innovation," said Dr. Sheldon. "Precision Medicine provides the molecular tools to stratify patients in a way that optimizes trial design and, therefore, improves chances for clinical success. DIA, as a global health knowledge exchange, can advance the innovations associated with Precision Medicine through its forums and resources, with the hopeful result of saving more lives across the world." As part of his responsibilities, Dr. Sheldon, along with all board members, will help the DIA to:

- Define the future innovative offerings delivered through DIA's information platform for new stakeholder value.
- Sustain and enhance DIA's globalization model for engagement and escalation, with appropriate financial and non-financial milestones.
- Re-imagine the DIA governance model and create the transition plan for the future.
- Create a strategic risk map, aligned to major potential activities impacting business continuity, and establish mitigation strategies to prepare for these risks.
- Support DIA's vision via strategic relationships with visionary, like-minded organizations to multiply DIA's impact on health.

Dr. Sheldon is sure to add a relevant, timely, and valuable perspective to the DIA Board. ### * Jonathan Sheldon, Ph.D., is Global Vice President, Healthcare. He is responsible for Oracle's healthcare business globally within the Health Sciences Global Business Unit, which includes solutions in the areas of precision medicine, population health, and convergence with life sciences. Previously, Dr. Sheldon held positions as both Chief Scientific Officer and Chief Technology Officer at software companies serving the life science and healthcare sectors, where he led the companies' overall strategic direction. Prior to the software business, he also established the first bioinformatics group at Roche (UK) Pharmaceuticals and was its Head of Bioinformatics for five years. Dr. Sheldon has served on various advisory groups, including the Board of Directors of the Transmart Foundation.


Health Sciences

mHealth, IoT, Clinical Trials, and the Next Four Years

Today, a convergence of factors -- the proliferation of consumer wearables and medical sensors, combined with advances in platform technologies such as the Internet of Things (IoT) -- is delivering new mHealth capabilities that can radically change the way clinical trials are done. The digitization of clinical data and associated processes helps research teams collect real-world data for deeper insight into new drug response, for safer, more accelerated clinical trials, and even for potential new digital therapies. In a recent survey of drug development organizations, 80 percent said that, though mHealth is in its early stages, it has already begun enhancing clinical trials. Respondents cited the following as the biggest benefits of mHealth:

- Improved data quality - 35.2%
- Improved patient engagement - 28.5%
- Improved early safety signal detection - 17.2%
- Improved patient recruitment - 12.3%
- Improved patient trial adherence - 12.3%
- Improved sponsor/CRO-to-site communication - 6.6%

Also, 79 percent of those asked thought that mobile devices monitoring conditions associated with a specific disease-related clinical trial would be the most effective.

Trending: The mHealth Market

By 2020, the mHealth market (today valued at $489 million) is set to explode to over $50 billion. According to Accenture, this explosion, coupled with the proliferation of IoT apps, will drive the emergence of a new digital health solutions market (converging diagnostic, monitoring, and therapeutic solutions). This new market is expected to save the U.S. healthcare system over $100 billion in the next four years.
According to Leslie Kux, FDA Associate Commissioner for Policy, "Use of these [mHealth] technologies allows for more flexibility for the sponsor and clinical investigator in the oversight of clinical investigation conduct, data collection, and monitoring of trial participants and clinical sites."

What's Here and What's Next

Like many emerging markets, this one has a plethora of vendors, each offering its unique view on a specific therapeutic niche. Excellent mHealth solutions exist for therapeutic areas such as chronic obstructive pulmonary disease (COPD), diabetes, and hypertension, each leveraging sensors such as spirometers, blood glucose monitors, and blood pressure monitors. However, as organizations look to scale these approaches across all trials and all therapeutic areas, there is a need for an enterprise platform approach that allows rapid adoption of the right sensor for a specific disease type, without lengthy, complex new vendor negotiations or large IT integration efforts. Looking ahead in mHealth, Oracle is extending its current clinical platform by adding a secure, device-agnostic gateway that can capture data from any sensor for seamless flow into downstream clinical applications -- such as Oracle Health Sciences Data Management Workbench (DMW) -- for review and analysis in real time. Currently, there are signs of mHealth market growth advancing toward 2020 levels. Mobile devices and sensors are enabling continuous monitoring with ever-higher data volumes, Bluetooth is part of everyday life, and interfaces between platforms and devices are becoming more advanced. Some examples of current mHealth devices include: Scanadu Scout - A Star Trek tricorder-like mobile device held to one's forehead to measure heart rate, temperature, blood pressure, and oxygen level and provide a complete ECG reading. It then transmits that data to a mobile device via Bluetooth.
Vivonetics Smart Sensors/Wearables - Continuous wearable physiological monitors, data services, and software for collecting, analyzing, and reporting physiological research data. Novartis/Google Smart Contact Lenses - A contact lens designed to help restore the eye's natural autofocus.

Current Barriers

Though the future for mHealth is promising, there are still some key challenges, such as data privacy, security, and general regulatory concerns. These issues will continue to evolve and be resolved, often leveraging approaches and technologies from other industries, such as banking or defense. Though the explosion of sensor technology will deliver great new opportunities for direct measurement, medical-grade sensors will remain a small proportion of those delivering statistically validated data points. Additionally, the opportunity to gather large volumes of continuous data will need to be balanced against the ROI of implementing and deploying new big data analysis and storage platforms. Finally, organizations must be willing to adapt to change. This means a world in which data can be captured in real time, remote monitoring can reduce or remove patient site visits, and new statistical methods may need to be developed. Clinical program leads must take the brave leap to embrace this new world for their research projects, as the rewards could be great.

2020

With all this said, these obstacles will be overcome. By 2020, the accuracy and validity of off-the-shelf sensors and devices will have increased, and patient use of mobile devices will have become routine. Physicians will use digital data as a standard for patient evaluation. Devices will become increasingly less invasive and ever more 'invisible' to the patient. Usage of mHealth devices in clinical trials will be the expected norm.
As mHealth becomes the dominant data collection and engagement channel with the patient, a new class of digital therapies will emerge that help manage the patient's condition, rather than simply treating it with a pill. With more than 50 billion connected devices around the world, mHealth will be driven by an unprecedented force of consumer acceptance. Real-world data will continue to create real-world evidence and prove real-world outcomes. The digitization of the clinical trial is truly on! Learn more about Oracle and mHealth. Visit Booth #1125 at DIA16.


Health Sciences

Data Mining at the FDA and Oracle

Since the early 1990s, the FDA has used data mining (the practice of examining large databases to generate new information) to gain a better understanding of adverse signals within clinical trial safety data. The agency has also advocated the use of data mining by the pharmaceutical industry when conducting clinical trials. Recently, the FDA has begun adding more sophisticated methods to its data mining activities and has applied these methods to other product-safety-related databases. In an effort to define its data mining activities, the FDA has issued a white paper, Data Mining at FDA (Nov 2015). It provides an overview of the agency's past and present data mining methods, the advantages and challenges of data mining, and future directions for data mining at the FDA. In relation to past and current data mining methods, the paper describes the FDA's Center for Drug Evaluation and Research (CDER) application of software solutions -- including Oracle Health Sciences Empirica Study -- to analyze clinical trial data for new drug applications or supplemental applications. The paper also outlines how the FDA uses Oracle Health Sciences Empirica Signal for the routine mining of data related to drugs, foods, cosmetics, and dietary supplements. In regard to future directions, the FDA, using Oracle Health Sciences Empirica Signal and Study, is in the early stages of developing an automated data mining capability. When completed, this capability would nearly eliminate the need for additional labor resources. Dr. Sameer Thapar, Director, Global Pharmacovigilance (Oracle Health Sciences Consulting), and Assistant Professor, Drug Safety and Pharmacovigilance (Rutgers University), has studied this paper and offered highlights in a new Oracle Brief. Read it here. Read the FDA white paper Data Mining at FDA. Learn more about Oracle's data mining solutions -- at the FDA and for your clinical trials -- at Booth #1125, DIA16.


Health Sciences

Data Management & Mining Approaches for Translational Medicine

Health systems are facing a number of challenges in the cost-effective delivery of healthcare, with aging populations and a number of diseases such as obesity, cancer, and diabetes increasing in prevalence. At the same time, the life sciences industry is faced with historically low productivity and a dearth of new drugs to replace medicines reaching loss of exclusivity. Translational medicine has emerged as a science that can help tackle these challenges. The move towards electronic medical records in health systems has provided a rich source of new data for conducting research into the pathophysiology of disease. Increasingly, it is understood that not all drugs work the same in all patients, and tailoring the right drug, to the right patient, at the right time will help improve medical outcomes, while also reducing the cost associated with mis-treatment or over-treatment. Key to achieving this is the use of new molecular diagnostic techniques, such as next generation sequencing, which can help scientists and clinicians understand the pathophysiology of disease and also identify which drugs will work in which patients. In a chapter entitled "Advancements in Data Management & Data Mining Approaches," published in Translational Medicine: Tools and Techniques by Elsevier, we outline a data management framework which can be used to integrate and analyze clinical data from medical records or clinical trials and molecular data from new sequencing technologies. The use of different data integration platforms is discussed, along with approaches to how these platforms can be used as a backbone for data mining. Best practices in data mining are described, and common techniques used in biomedical research are introduced with some use case examples. Read the chapter here



Clinical Genomic Testing: What’s the Total Cost?

Many panels and publications note the cost of genomic sequencing has been reduced to ≈ $1,000 (down from $100 million 15 years ago) thanks to next generation sequencing (NGS) technology. However, though there's been substantial reduction in sequencing cost, genomic testing has not become standard practice in daily clinical care. After some investigation, I learned that the total cost of clinical genomic testing is much more than the initial $1,000 quoted. The complete process includes a large number of costs often omitted in the initial estimate. Let's start with the familiar components. The cost of the wet lab component (use of a sequencing machine, the library, automation equipment, lab technicians' time, etc.) is a given. This is the set of processes greatly reduced in cost due to NGS technology. The next component is the hardware and software used to analyze the sequencing machine's output to detect DNA variants and perform quality control (QC) testing. This component has been largely simplified thanks to cloud computing and storage technology. (With this technology, a lab doesn't need to make a large up-front informatics investment. Instead, it can scale its investment as the lab grows.) The next few components are those that demand steady, sometimes increasing, costs over time. Sometimes, these components are omitted completely during planning. Variant classification and clinical interpretation. Due to the nature of NGS technology, a genomic test yields many more variants than those related to a patient's medical condition. As a result, one needs to carefully identify, and then clinically interpret, the pathogenic variants. Although public and proprietary variant knowledge databases are emerging to reduce the manual effort, current classification and interpretation processes are likely to remain hands-on due to (1) incomplete knowledge databases, (2) rapidly changing science, and (3) the large number of novel variants identified in genomic tests.
Positive biomarker confirmation. Due to the false positive/negative rate in NGS technology and in analysis pipelines, many clinical laboratories include Sanger sequencing (an FDA-approved method) to confirm positive findings in each patient case. The cost associated with the confirmation test should be added to the overall price of clinical genomic testing. Associated medical intervention. It's important to separate medical intervention resulting from genomic testing in the context of a patient's current diagnosis from medical intervention related to new and incidental findings (resulting from the genomic test itself). Some examples of the former are the decision to switch a patient to a more targeted medication (based on the detected variants in the patient's tumor) and the decision to adjust medication dosage (based on the patient's genotype). On the other hand, if the genomic test uncovers an incidental finding with deleterious variants in a patient's BRCA1 gene, the associated medical intervention -- such as a mastectomy or an increased frequency of mammograms -- should be added to the overall cost. Informed consent and follow-up activities. Due to emerging genomic testing technology and the dynamics of regulatory oversight, many providers and reference laboratories have to spend resources on consent content and patient education. For example, content and education related to handling incidental findings are brand new concepts in medical practice. Based on laboratories' and hospitals' standards of practice, genomic test data may be re-interpreted as new knowledge becomes available. The re-interpretation may trigger additional follow-up medical interventions, based on both the patient's condition and the incidental findings. These are some of the "hidden" costs in clinical genomic testing. I am sure there are more items that have yet to be uncovered.
In order to truly understand the total cost of clinical genomic testing, NIH should give serious consideration to allocating resources to study this important topic.
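The post's core point -- that the widely quoted $1,000 is only one line item in a larger bill -- can be made concrete with a simple tally. Every dollar figure below is a made-up placeholder chosen for illustration, not a real estimate of any component's cost.

```python
# Hypothetical cost components of a clinical genomic test.
# All figures are illustrative placeholders, NOT actual estimates;
# the component names follow the categories discussed in the post.
components = {
    "wet lab (sequencing)": 1000,                 # the commonly quoted figure
    "informatics (variant calling, QC)": 300,
    "variant classification & interpretation": 800,
    "Sanger confirmation of positives": 250,
    "consent & patient education": 150,
}

total = sum(components.values())
print(f"sequencing alone: $1000, full test: ${total}")
# prints: sequencing alone: $1000, full test: $2500
```

Even with invented numbers, the structure of the sum shows why budgeting only for the sequencing line understates the true cost of a clinical genomic test.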



Forward Health Group’s PopulationManager® Integrated with Oracle Enterprise Healthcare Analytics Addresses Healthcare Providers' Population Health Challenges in Managing Risk and Moving to Value-Based Care

With data analysis and identification of opportunities for patient outcome improvement, Forward Health Group (FHG) enables health systems, hospitals, and multi-specialty clinics to transition into the new world of value-based care. Through its solutions, FHG supports the quality of care, the well-being of patients, and the financial health of provider organizations. A Gold Level member of the Oracle PartnerNetwork (OPN), FHG today announced that its PopulationManager® application has achieved Oracle Validated Integration status with Oracle Enterprise Healthcare Analytics, a suite of advanced data warehousing and analytics solutions designed for healthcare organizations. Forward Health Group rapidly cleans, normalizes, and aligns data from many disparate sources via Oracle Enterprise Healthcare Analytics. The information is then presented through FHG's PopulationManager® platform to physicians, care teams, and administrators in an elegant, easy-to-use data visualization format that is actionable from day one. With a fast-forward approach to population health management, FHG works with clients to identify focus areas that will have the most impact on specific improvement goals. Hypertension? Diabetes? Identify the most at-risk patients? Reduce readmissions? Clients get results -- and payback -- fast. Clients then continue to add other areas of focus until all appropriate goals and measures are covered. "The combination of Oracle's enterprise-class data management solution and our population health management toolkit is powerful," said Michael Barbouche, Founder and CEO of Forward Health Group. "Oracle is a global leader for all things data. Achieving Oracle Validated Integration highlights the importance of putting accurate, trustworthy data in the hands of providers.
With the rapid changes in reimbursement, the advent of precision medicine, and the shift to a preventive health/population health focus, better data has never been more important." "The combination of Oracle and Forward Health Group is a compelling value proposition for our mutual healthcare provider customers, delivering better care and lower-cost outcomes. The company is delivering best-of-breed population health applications and analytics on top of our enterprise clinical, financial, and operational data aggregation platform," said Jonathan Sheldon, vice president, Global Healthcare Product Strategy, Oracle. "We have selected Forward Health Group as a result of our rigorous due diligence process, as well as its market-leading position and deep domain expertise in the growing population health solutions segment. Together, we help healthcare organizations address their key challenges, while gaining control of disparate enterprise data sources and implementing best-practices healthcare data governance." Read the press release here: http://bit.ly/1RkvRpf Visiting HIMSS 2016 in Orlando, March 1-3? Stop by Forward Health Group's Booth #2477, Oracle Health Care Booth #7700, and the Population Health Knowledge Center Kiosk #14106.



Healthcare Inefficiency Through the Eyes of a Teenager

I have a teenage son, and he knows what I do. Seriously, we talk enterprise healthcare analytics, healthcare policy, Obamacare, and even payer mix analysis. He's paid attention to my career, which is a good thing. It's provided for his lifestyle. But he has the patience of a two-year-old when the annual school physical arrives. I've enjoyed the Pediatrics & Adolescent Clinic that has looked after him for more than eight years. It's a PCMH Level 3 facility with Saturday hours and a walk-in clinic open Monday-Friday, 7a-7p. My son doesn't enjoy it. He doesn't suffer the wait well. At last year's visit, we sequenced through the gates, checked in, waited, gave vitals, waited, had a conversation with the nurse, waited, then saw the physician. My son looked at me in the exam room -- complete with Nemo and Ariel on the walls -- and said, "You can fix this!" "Whoa!" I explained to him how great this clinic is and how it treats the whole child during the appointment. The clinic's staff really works to maximize the visit and not make the patient and his or her family return a separate day for a vaccine, or come back a third time for a medical check-up. It's really progressive -- and this is Texas; we are still a fee-for-service market. But this clinic is focused on creating wellness and ensuring the patient's needs are met. Far too often, patients leave with follow-up instructions only to disappear because of unforeseen scheduling issues or a patient/parent decision to be non-compliant. Again, that's why this is a great clinic. Its staff really cares for the patient. But that fell on my son's deaf, teenage ears, still trying to figure out why he needs to sit next to Nemo. Again, he said, "You can fix this. You have the systems that know how many things I need to have done. So why only book a 10- or 15-minute visit?" (Yes, we have talked about the CMS reimbursement model for office visits.) And, yes, he is right. We have the systems to "queue" open items and the PCMHs that work to maximize the visit.
But unfortunately, our reimbursement model forces an arbitrary schedule, and all too often providers defer secondary complaints to the "next visit" so they can stay on schedule. As the ACO at-risk model gains momentum coast to coast, I am hopeful that future analytic discussions will move beyond the volumes and counts of descriptive analytics and advance to queuing patient visits and algorithmically organizing appointments. Join me in this discussion #AcceleratingtheEvolutionofHumanCare



Accelerating the Evolution of Human Care

Accelerating the Evolution of Human Care? That is a bold statement -- big and bold. Oracle is Number One in healthcare and bold enough to commit to supporting our healthcare partners -- hospitals, provider groups, payers, pharma, and research organizations. We Connect. We Collaborate. We Care. Earlier this year, I presented at the 15th Population Health Colloquium in Philadelphia. In just a few short years, since 2010, we have crossed the overwhelming challenge of integrating EMRs into the hospital. In 2015, we are discussing Population Health -- and all its components: technology, data, patient engagement, care coordination, evidence-based medicine, digital care gaps, and benchmarking! It's Christmas and we are unwrapping all the presents. And, here's the bonus. Our "gifters" included batteries -- the clinical data! We have everything ready to go! For me, I was raised in Population Health on the west coast in the late 1990s. I learned it through care teams and evidence-based medicine. Do the right thing for the patient, and outcomes will improve. Utilization will move from in-patient admissions to outpatient visits. Today, Population Health is so wide. Some programs even include expansive social safety gaps such as job sourcing and housing. Regardless of your definition, however wide or narrow, Population Health is about the patient and the provider. It is about ensuring that, as industry professionals in large healthcare organizations, we are providing the patient with the right thing, at the right time, in the right place, and at the right level of services. So let's get back to the big and bold statement, "Accelerating the Evolution of Human Care". Oracle is uniquely positioned as the market leader in enterprise solutions. Our "Number One in Healthcare" label reinforces the idea that our solutions support technology, data, and queuing theory with our databases, enterprise healthcare analytics data model, enterprise performance management, and connected health.
We are the champion of patient-centered, human care. Join me in this discussion #AcceleratingtheEvolutionofHumanCare



Please press "1" to confirm

How many of us receive text messages from our hair salon to confirm the haircut for tomorrow? How many of us receive robo-calls on Saturdays and Sundays reminding us that Johnny has an orthodontic visit at 7 AM on Monday? Are these a nuisance? Or are they methods of ensuring that appointment slots are filled, revenue is maximized, and client satisfaction is not impeded? I balance the slight annoyance of having to say, "Yes, I'll be there," or pressing "1" to confirm, against the bigger concern: if I had a provider or a stylist whose schedule was so poorly managed that time slots went unfilled, I would have to wait six weeks for an appointment. Recently, Britain's National Health Service issued a sanction on one of its facilities because of poor staffing ratios and extended wait times. This is not uncommon in a single-payer environment, or really any environment in which there are no financial constraints managing the appointment queuing process. Patients in single-payer systems figure, "If there is no penalty for a no-show, then, hey, I'll schedule three appointments and go to the one that works for me." Not only did this create unexpected excess capacity for services (there were providers, but the patients didn't show up), it also multiplied the already taxed patient waiting process and increased overall patient dissatisfaction. If I am told that the next appointment is in six months, but I know there are no-shows, I'll march myself to the provider and wait. Yep, wait. Oh yeah, I'm taking a chair in the waiting area and a parking spot in the parking lot, and I've taken off from work. This blog is not a conversation on the decisions that led to the run-up of Britain's issue, but rather a discussion of how we as participants, consumers, and patients have a certain level of service expectation. I don't care if my provider is a commercial provider with a $50 deductible or a military nurse practitioner whom I saw when I was in the service 16 years ago.
I have an expectation that the services and the analytics will facilitate my provider making the best decision for me at the point of service. And I have an expectation that he/she will manage his/her patients and appointment queues efficiently. I’ll endure the press-“1”-to-confirm for the known guarantee that I have an appointment and that it is timely.

So that brings the discussion around to population health and patient-centric care. This must be so not just in a marketing campaign for a “Patient-Centered Medical Home,” but in truly thinking about the analytics that support the patient, from queueing theory, to primary care assignments, to care coordination follow-ups, to post-admission rehabilitation. Systems, healthcare organizations, payers, providers, intermediaries, states, counties, and national governments should adjust their view to enable population health analytics. These statistics can be aggregated to drive these systems from the individual patient level up to the system level. Join me in this discussion: #AcceleratingtheEvolutionofHumanCare
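The scheduling trade-off behind those confirmation calls can be sketched with a little probability. A minimal illustration (the clinic size and no-show rate below are hypothetical, and patients are modeled as independent coin flips, which is a simplification):

```python
from math import comb

def overflow_probability(booked: int, capacity: int, p_show: float) -> float:
    """P(more than `capacity` of `booked` patients show up), assuming each
    patient arrives independently with probability `p_show`."""
    return sum(
        comb(booked, k) * p_show**k * (1 - p_show) ** (booked - k)
        for k in range(capacity + 1, booked + 1)
    )

# Hypothetical clinic: 20 slots per day, 15% historical no-show rate.
CAPACITY, P_SHOW = 20, 0.85
for booked in (20, 22, 24):
    print(f"book {booked}: expect {booked * P_SHOW:.1f} filled, "
          f"P(turn someone away) = "
          f"{overflow_probability(booked, CAPACITY, P_SHOW):.3f}")
```

Booking exactly to capacity leaves roughly three slots empty on an average day; modest overbooking recovers most of them at a small risk of overflow. A confirmation text or a "press 1" call is precisely an attempt to raise the show probability and shift that balance.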


Health Sciences

Healthcare Behind the Scenes: The Value of Infrastructure

My father is a civil engineer who has spent a career building public works projects like roads, pipelines, and water treatment plants throughout the southeastern US. During my childhood, he often had to travel to visit rural job sites to check on his company’s projects. He would sometimes take me along so we could spend time together. I can’t say that weekend trips to inspect sewage treatment plants were memorably exciting, but it was family time well spent. As I’ve grown in life and built a career in medicine, those experiences have given me a great appreciation for something many of us take for granted – infrastructure.

Infrastructure enables the lives we lead. The electricity grids that power our alarm clocks, the water systems that fuel our morning showers, and the roads/tracks that facilitate our morning commutes rarely inspire fascination and appreciation. Of course, if they malfunction, we know it immediately and complain about it vehemently. We rarely know who even built the infrastructure, and can only complain to administrative bodies like the power companies, water boards, and local governments with demands to “Fix it, and fix it fast!”

Information infrastructures are being designed and built at a dizzying pace in healthcare these days. Some organizations are better than others at developing and maintaining that infrastructure, though nearly all are in a constant state of fluid growth. Healthcare organizations share an important functional mission, though – their infrastructure is necessary to improve lives. Failure of that infrastructure can harm lives, sometimes irreparably. The intensity of focus on healthcare information infrastructure by those who construct and maintain it is, at times, underappreciated. Nevertheless, successful creation of infrastructure can lead to more reliable maintenance and smoother operations throughout the organization. 
The imperative at Oracle to improve time to value and create consistent, reliable infrastructure to manage data reminds me of those old trips with my father. End users rarely have visibility into the reliability of servers or the consistency of data within a database. However, their user experience will be poor if those elements of infrastructure are shaky. Infrastructure isn’t glamorous, but it’s necessary. A sound data platform strategy with a reliably invisible information infrastructure can work wonders for organizations looking to survive and thrive during the ongoing revolution in healthcare delivery.

The next time you drive down a scenic highway, admiring the twists and turns of the road and the smooth drive underneath, take a moment to appreciate those who designed and built that road. Consider also a similar appreciation for the information highways that enable our healthcare journey, and the infrastructure that makes the journey smoother.


Health Sciences

Medical Device Industry in the New Bundled Payment Era

The U.S. medical device industry, the largest and most innovative segment of this industry worldwide, has a market size of around $110 billion and is expected to reach $133 billion by 2016.1 However, a requirement from the Centers for Medicare and Medicaid Services (CMS) for newly devised bundled payments for hospitals requires proof of value for medical device implants. This new mandate to demonstrate value is coming as a big surprise to the medical device industry.

The first mandated bundled payment will be designated for hip and knee replacement, the Comprehensive Care for Joint Replacement (CJR) model. Under this mandate, starting April 1, 2016, hospitals in 67 geographic areas will receive a bundled reimbursement. This lump sum will cover all hip and knee replacement procedures, as well as the 90-day post-procedure admissions and services requirements needed for each patient.2 Although CMS will pay each hospital according to the hospital’s set bundled price -- which can be adjusted annually by each hospital -- CMS will compare clinical outcomes and costs across hospitals. At year end, CMS will reward hospitals with superior outcomes and low costs and penalize those with bad outcomes and high costs.

The impact of this mandate not only influences healthcare providers’ orthopedic service lines, but also affects joint implant medical device companies. Many medical device companies are collecting data directly from patients to demonstrate the superior clinical outcomes and cost-effectiveness of their devices. It is anticipated that data collection within the CJR model will become the first testimony for “device readiness”.

Payment for implanted devices will be part of the bundled payment under the mandate. In order to compete in the marketplace, a medical device company will need to show the value of its implant to healthcare providers. Some of the critical questions that it will need to answer to prove this value include: How complex is the surgery to implant our device vs. 
competitors’ devices? What is the success rate of the surgery using our device vs. competitors’ devices? What is the cost associated with re-admissions due to infection or surgical failure using our device vs. competitors’ devices?

Although medical device companies historically have used purchased claims data to get some of the information above, data addressing the “value” question, which is directly linked to clinical outcome information, is commonly absent from claims data. Some patient registries provide limited information, but it is insufficient to answer the “value” questions. Device companies also need patient-reported outcomes and wearable sensor data to augment the missing information. There is a new “proof of value” race stirring in the medical device industry. Watch here for more updates!

1. Select USA, The Medical Device Industry in the United States, http://1.usa.gov/XDNb1k
2. Centers for Medicare and Medicaid Services https://innovation.cms.gov/initiatives/cjr

Learn how Oracle is Powering Healthcare @ #PMWC16 Jan 24-27 http://bit.ly/1K7MmkE


Health Sciences

Sifting Through Code: Life Meets Technology

Adenine (A), cytosine (C), thymine (T), and guanine (G): four simple bases encode life as we know it and provide the basis for DNA to create and mold our lives. A single human cell is more complex and intricate than any machine built by man. Yet, the simplicity of A, C, T, and G offers hope that we can unravel the mysteries hidden within our genetic make-up.

Identifying our genetic make-up is only the beginning, as it requires generating foundational data to understand the origin of who we are. Nonetheless, we are in the unenviable position of having to reverse engineer ourselves to interpret that data and make sense of how A, C, T, and G encode who we are, and, in a greater leap of science, who we will be. In recent years, the scientific advances in this field have been nothing short of revolutionary. Yet, they are only the start of a journey to comprehend and apply the power of genomics.

In addition to interpreting the genomic building blocks of who we are, life has thrown us another twist in the form of the age-old “nature v. nurture” debate. This debate focuses on how social, economic, and geographic factors shape our lives. It asks, “Would I be the same person I am today had I been raised in California, rather than Alabama?” It’s nearly impossible to know.

Personal health is a reflection of our personal lives. A unique genetic makeup combined with the socio-economic factors and stresses of daily life ultimately shapes the moving target that is an individual’s health. In advancing scientific discovery in the medical field, the combination of genomic and personal health data promises enormous potential for understanding how our lives, and our health, unfold. Applications of those discoveries in clinical medicine and personal wellness can potentially change the way we live. This is an exciting time for life-changing medical science.

Without fanfare, the excitement of discovery and the promise of application must be bridged by a solid data infrastructure. 
The data infrastructure must support the needs of those who dare to dream about crossing the horizons of health care. It’s time for 0s and 1s to support A, C, T, and G in a simple, yet elegant, way that foundationally enables the wonders of the human experience. The tools of the information age -- rapidly evolving, and on occasion undervalued and underappreciated -- are critical to supporting the missions of scientific discovery and improvement in quality of life.
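That closing image of 0s and 1s supporting A, C, T, and G can be taken quite literally: two bits suffice per base. A toy sketch, illustrative only (real genomic storage formats such as 2bit or CRAM are far more elaborate):

```python
# Pack a DNA base string into an integer at 2 bits per base.
ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DECODE = {v: k for k, v in ENCODE.items()}

def pack(seq: str) -> int:
    bits = 1  # leading sentinel bit preserves leading 'A' (00) bases
    for base in seq:
        bits = (bits << 2) | ENCODE[base]
    return bits

def unpack(bits: int) -> str:
    bases = []
    while bits > 1:  # stop when only the sentinel remains
        bases.append(DECODE[bits & 0b11])
        bits >>= 2
    return "".join(reversed(bases))

assert unpack(pack("GATTACA")) == "GATTACA"
```

At a quarter of a byte per base, even the roughly three billion bases of a human genome fit in under a gigabyte before compression -- the simple elegance the post asks of the information age.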




The past year has been a whirlwind of awesome experiences – great clients, focus groups, and solving clinical analytic problems. Previously, I have compared clinical analytics to Geocaching and searching for a way to connect all the dots. Coming from industry, I had always searched for an analytic platform that could go end to end across my entire suite of enterprise users: CFO, CNO, CMO, CMIO, Pharmacy, Radiology, Budget Office, HR, well, you get the point. Often, I was left with few choices and relegated to a one-off, boutique set of analytics. I’d then direct my team to replicate, once again, another set of Extract, Transform and Load (ETL) processes for sending out data (yet again) to someone else to do the analysis. I knew we were smart enough, and we had our data!

Geocaching is an art and a science. One needs the “science” of tools and the “art” of touch, of listening and hearing. This past year, I’ve shuttled from West Coast to East Coast supporting inspiring leaders who know that they, too, have the data, are smart enough, and want a solution that can support their vision of a pre-built clinical data model, translational research, and market-leading business intelligence!

My greatest joy has been the multiple cycles of sitting with great physicians, researchers, and operational leaders and discovering their analytic questions. Often, IT professionals with great business analysis skills get different requirements from different stakeholders and are left in a void thinking, “How do these two things relate?” What a joy it is for me to conduct these focus groups and hear how each, an internist, an emergency room physician, a care coordinator, describes his/her analytical needs. Though they each use different words, we discover that they are all talking about the same intersection of data. A blissful end arrives when I am able to answer their questions, address their use cases, and align to their data requirements. 
Better still is the delivery of Oracle’s end-to-end analytics suite that addresses their needs. It supports their identified and immediate use cases. It is also ready to extend forward for changing federal reporting requirements, population health, and margin analytics. Join me in this discussion. #AcceleratingtheEvolutionofHumanCare


Life Sciences

Coming Regulatory Changes: Are You Ready?

The safety and pharmacovigilance landscape is changing rapidly. New global standards for regulatory reporting – E2B(R3), eVAERS, eMDR, and IDMP – represent some of the most significant changes that the industry has seen in over a decade. Not surprisingly, implementing them requires planning and preparation from both safety and IT organizations.

Global vs. Regional Requirements

The new standards are adapted to the modern needs of regulators and industry: they offer better interoperability and robustness, fix issues with E2B(R2) or, in some cases, are the first electronic versions of paper reports, and allow for the collection of additional safety information, which translates to safer products for patients. Although the standards development organizations strove for global harmonization of these new reports based on HL7 integration, each health authority typically adds regional requirements – additional data elements – on top of the core set of fields. At least each region will not reject messages that include additional data elements for other regions.

Since the regulators develop these additional requirements and implement the new standards at various speeds, the industry and software developers are presented with several challenges. How can software support multiple interpretations of E2B(R3) being implemented at different times and still support E2B(R2)? How can software be developed in a compliant fashion if the guidelines change between draft and final versions? How can companies plan to uptake new software releases, taking into account budget, validation, and processes?

The regional requirements are reflected in each health authority's implementation guide (IG). Each IG differs in its publication date, basic approach, pilot test programs, implementation date, and details, including additional data elements. FDA CBER has required eVAERS submission since June 2015, and FDA CDRH has required eMDR submission since August 2015. 
FDA CDER has announced that it plans to publish a draft E2B(R3) IG in December 2015. PMDA will accept E2B(R3) reports optionally starting April 2016, and mandatorily beginning April 2019. EMA will require retrieval of E2B(R3) reports from its website and will also allow optional submission of E2B(R3) around the middle of 2017.

Safety software developers need to solve the challenges that the different IGs present. Following the software development lifecycle based on draft guidelines is the first challenge, because draft guidelines may not be complete in all topics and details, and the content may change quickly during pilot testing. Developers need additional time to ready and release the final software after publication of the final guidelines, depending on the volume of changes identified during the pilot. Multiple releases may be required to handle regulators' different timelines for IG publication, pilot tests, and industry compliance.

Identification of Medicinal Products (IDMP)

IDMP is a new global standard that seeks to uniquely identify products based on hundreds of attributes, ultimately to ensure better patient safety. It is becoming mandatory in Europe in 2017, and there is an interplay between IDMP and E2B(R3). IDMP differs from the Extended EudraVigilance Medicinal Product Dictionary (XEVMPD) in that it is global rather than European, and it defines many more fields than XEVMPD. IDMP means big changes for regulators, patients, and companies. However, more than just being a compliance issue in Europe, IDMP should be seen as a progressive opportunity to transform the way life sciences companies operate as they move towards cross-functional transparency for products.

Developing an IDMP implementation strategy often begins with thinking about IDMP as an extension to XEVMPD. However, manual XEVMPD processes do not scale to IDMP data set sizes, and most organizations quickly mature to a master data management (MDM) approach. 
This is certainly an improvement, because data is integrated more or less automatically with MDM; but an optimal implementation goes one step further, culminating in a cross-functional integration of people, data, and processes.

Solving the Challenges of Industry

As the industry adopts these new standards, it will need to overcome its own challenges. Companies will need to plan for a software upgrade from their current version to a new, compliant one. They will also need to align budget and resources to uptake the new release, scope the upgrade project, and decide whether it will be focused on compliance alone or also include implementation of new functionality. Considerations should include changes to technology, configuration, and SOPs, as well as validation and user training. Companies have to identify the required timelines for testing and compliance with the regulators, and determine whether to engage in multiple smaller upgrades or instead plan for a single large upgrade. The new standards mean not just a technology change, but a business process change as well.

Defining an upgrade strategy and plan is paramount. Companies should identify the test and compliance dates which must be met, then work backwards and include a buffer. They must decide on a software version and implementation strategy – one larger upgrade or iterative smaller upgrades. They should perform a detailed scoping of the project and determine the length of the project from start to finish through a detailed plan. Finally, they need to align business priorities, project sponsors, and governance to that plan.

Life sciences and healthcare are converging, and the new standards are at the forefront. The life sciences standards E2B(R2) and XEVMPD have come together with the healthcare standard HL7 to create E2B(R3), eVAERS, eMDR, and IDMP. In the end, this convergence will lead to safer products.
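The "identify the compliance date, then work backwards and include a buffer" planning described above is simple date arithmetic. A minimal sketch, in which the phase names, durations, and buffer are hypothetical placeholders rather than regulatory guidance (only the April 2019 PMDA deadline comes from the post):

```python
from datetime import date, timedelta

# Hypothetical phase durations (weeks), listed latest-phase first,
# to be scheduled backwards from a mandatory compliance date.
PHASES = [
    ("user training", 4),
    ("validation", 8),
    ("configuration & SOP updates", 6),
    ("software upgrade", 10),
    ("scoping", 4),
]
BUFFER_WEEKS = 8  # contingency buffer, as the post recommends

def plan_backwards(compliance_date: date):
    """Yield (phase, latest-acceptable start date), working backwards."""
    cursor = compliance_date - timedelta(weeks=BUFFER_WEEKS)
    for phase, weeks in PHASES:
        cursor -= timedelta(weeks=weeks)
        yield phase, cursor

# Example: PMDA's mandatory E2B(R3) date of April 2019.
for phase, start in plan_backwards(date(2019, 4, 1)):
    print(f"{phase:<28} must start by {start}")
```

Summing the placeholder durations shows why the plan matters: with a 40-week critical path, scoping for an April 2019 deadline would have to begin by mid-2018, well before the final IG details might feel urgent.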


Life Sciences

The Journey to Clinical Data Management v3.0 – Disruption Is Coming

Do you remember the old paper case report form (CRF)? I remember working on an oncology trial. It was ‘massive’. It was massive because the CRF was 300 pages long, in a binder you could barely lift with one hand, and there were 300 subjects. Let’s say, on average, there were 30 fields per page, which would equate to about three million data points collected over a five-year clinical trial. We thought of this as a large trial, and yet from a data perspective, it could easily be stored on my smartphone. I’m calling this period Clinical Data Management v1.

Then, there was a revolution called Electronic Data Capture (EDC). This was a fantastic concept of gathering clinical data into the drug development data life-cycle in real time. While there have been significant gains from EDC, thirty years after its inception its key tenet, bedside data capture, has yet to be realized. This was Data Management v2.

Then, the accountants got involved. They wanted more productivity and reduced costs. At the same time, the scientists and statisticians got smarter by inventing new scanners, devices, biomarkers, and adaptive trial techniques. To this end, there was an explosion of outsourcing to contract research organizations (CROs), along with new data source providers and tools, such as specialist labs, imaging, and electronic patient-reported outcomes (ePRO), to name just a few. What had started as an initiative to move from paper CRF to electronic CRF had suddenly turned into a much more complex data management challenge.

This is today. It is a world of multiple data sources, from multiple providers, which must be standardized and integrated quickly to deliver statistical conclusions on efficacy and safety. This is Data Management v2.5. In software terms, this is the .5 patch, a partial upgrade on our way to all things electronic (e). Pausing for a moment on this journey, let’s take a quick look outside our office window and see what other industries are doing. 
In just one minute, Uber starts 694 rides, YouTube users upload 300 hours of video, and Facebook publishes four million “Likes” from friendly folks. Of course, we should also mention the CERN Large Hadron Collider, producing over 30 petabytes of data a year. My ‘massive’ oncology trial, of approximately 100 megabytes, suddenly becomes a grain of sand on a beach of data.

Can you see Data Management v3 coming? It’s in development. It is possible. It can happen. It will be a world far removed from data managers who focus on identifying data inconsistencies, such as ‘missing gender’. It will be a world removed from clinical monitors flying in person to sites to verify the true clinical record against the transcribed EDC data; and it will be a world without statisticians worrying about limited statistical power for their analyses.

Data Management v3 will embrace eSource, sourcing data directly from electronic health record (EHR) systems, devices, and more. The concept of a clinical visit will change significantly, become event based, and focus on real-life data captured as a continuous stream. The enablers of this vision are here today, and are evolving rapidly. Devices, sensors, and wearables are raining down on consumers. They will become the norm for clinical trials. Official regulator groups, such as the FDA, are already requesting dialogues with the industry on how best to maximize the sensor/mHealth revolution.

Further, the convergence of EHR systems around interoperability is being driven by government/payer demands, which the clinical trials industry can leverage to achieve true eSource. Finally, technologies such as big data analytics and artificial intelligence are advancing at an exponential rate. Initiatives in other industries, such as improving sports performance, predicting flu epidemics, and fully autonomous cars, are creating dynamic new tools and methods which we can exploit to gain further insight from our clinical data.

Disruption is coming. 
Some facets of clinical trials driving data management (transcription, data lag, and discrete, visit-based encounters) are about to disappear. Database Lock, a concept based on an arbitrary definition of completeness and cleanliness of data, will be banished to the history books. Clinical data management will evolve quickly into the management of data specification, acquisition, and curation.

The true winners in this new world will be those who learn to exploit technology across the continuum of clinical data. The winners will be those who can find, explore, and discover new hypotheses, patterns, trends, and conclusions to drive out new therapies for better patient health. Clinical trials just got exciting. Are you ready to embrace the change?
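The data volumes quoted in this post reconcile with simple arithmetic; only the bytes-per-field figure below is an assumption, added for illustration:

```python
# 300-page CRF, ~30 fields per page, 300 subjects (figures from the post)
data_points = 300 * 30 * 300
print(f"{data_points:,} data points")  # 2,700,000 -- "about three million"

# Assuming roughly 40 bytes per stored field (value plus audit metadata):
BYTES_PER_FIELD = 40
megabytes = data_points * BYTES_PER_FIELD / 1e6
print(f"~{megabytes:.0f} MB")  # ~108 MB -- the "approximately 100 megabytes"
```

Against the CERN figure of 30 petabytes a year, that 'massive' trial really is a grain of sand: roughly a 300-million-fold difference in volume.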


Life Sciences

Cost & Quality

Reformed accountant alert! Decades ago, I studied to be an accountant. Why? Because my alma mater told me that to be a good healthcare administrator, I needed financial/numbers training. I could pick Economics, Finance, or Accounting. I called my family and adult friends for advice. (Those in the sample group were all CPAs.) And, surprise, their unanimous opinion was for me to major in Accounting.

I survived after four and one half years. I still have the third edition of Kieso and Weygandt on my shelves, and I still have my plastic accounting internal-control flow-chart stencil (a “pre-historic” version of Visio). But, I graduated and became a cost accountant in manufacturing. Yes, working on direct overhead, indirect materials, work in progress (WIP), burden rates, and unassigned project time, I would dream about the monthly close. But I had a methodology and a rigor to establish how much a widget would cost and what was involved in its production. Then I had to allocate the assignment of overhead (square feet, headcount, etc.). At every month’s close, I knew precisely what my engineers had in WIP and what they had produced on the line. There were none of the nine crazy layers of allocation methods and step-downs that we experience in the Medicare Cost Report.

So why is this nostalgia important? Because, 25 years later, healthcare is finally at a tipping point. It DOES matter precisely what it takes to perform healthcare services. It is no longer acceptable to allocate equipment depreciation to all departments based on headcount. I’ve been at organizations where the pediatric clinic had nearly no capital equipment, yet was taking its “fair and equal share” of depreciation, based on staff levels, for the gamma knife that surgery wanted. That is not fair! How about the organization that doesn’t allocate depreciation to the units/departments, but keeps it all at the top level? 
No one in manufacturing would ever agree to run a unit that was disproportionately charged with expenses without the benefits associated with the expense. But for some reason, healthcare has this crazy math. It’s been done this way for so long that “oh well” works. Enough!

There must be a way to get to the true cost of care. This is important in order to inform clinical intervention decisions that produce the best outcomes in both clinical quality and financial savings. Ghosting the expenses with Houdini math doesn’t help anyone. I’m thrilled to talk about the cost & quality intersection. This collision of activity-based costing and quality events is the marriage of Oracle’s Hyperion suite and Enterprise Healthcare Analytics.

Industry-changing decision support is valuable when one knows not only how much something cost, but also what it cost in terms of labor/materials/overhead and what quality of outcome it produced, on a line-item patient/encounter/procedure level, for a true “outcomes & quality” analysis. Did the patient stay longer, but did his/her procedure require less direct labor? Did the patient return for three outpatient visits rather than five because of the medication therapy prescribed? These are all great questions at the precipice of the 2015 healthcare horizon. Join me in this discussion: #AcceleratingtheEvolutionofHumanCare
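The allocation distortion described above (pediatrics absorbing gamma-knife depreciation) can be made concrete. All figures below are hypothetical, chosen only to contrast a headcount basis with an asset-usage basis:

```python
# Hypothetical departments: (name, headcount, capital equipment owned, $)
departments = [
    ("surgery",    40, 4_000_000),  # owns the gamma knife
    ("pediatrics", 60, 50_000),     # almost no capital equipment
]
annual_depreciation = 405_000  # hypothetical straight-line total

total_heads = sum(h for _, h, _ in departments)
total_equip = sum(e for _, _, e in departments)

for name, heads, equip in departments:
    by_headcount = annual_depreciation * heads / total_heads
    by_asset     = annual_depreciation * equip / total_equip
    print(f"{name:>10}: headcount basis ${by_headcount:>9,.0f}  "
          f"vs. asset basis ${by_asset:>9,.0f}")
```

On a headcount basis, the pediatric clinic absorbs the majority of the depreciation (it has more staff); on an asset basis it bears almost none of it. That gap is exactly the "Houdini math" that activity-based costing is meant to eliminate.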



Oracle Open World 2015 Re-Cap: Brazil and Ireland – Different Size Populations, Similar Results

To the uninitiated, Oracle Open World (OOW) is complicated, hectic, and confusing as to what to attend. To the well-initiated, OOW is always complicated, hectic, and forever difficult to decide what to attend and what to give up! However, being part of Industry Central at OOW helps, as attendees and speakers focus on specific industry domains. I was at OOW 2015 to host a few Oracle Healthcare Master Person Index (OHMPI) customers. Two of these customers also presented Healthcare Industry conference sessions: Richard Corbridge from Health Service Executive, Ireland (HSE), and Aclair Braga, representing the Ministry of Health (MoH), Brazil. Both of these projects were nation-wide implementations of OHMPI, implementing national identifiers for the entire population. Even though the populations of these two countries are dramatically different, their need for a person index and the OHMPI results delivered were very similar. Accurate patient identification prepared them to meet their goals for Electronic Health Record (EHR) services.

Ireland HSE Initiative

Richard Corbridge, CIO, Health Service Executive, Ireland (HSE), spoke about how Ireland turned data into information for insight by creating a system of Individual Health Identifiers (IHIs). IHIs are unique, non-transferable, life-long numbers for each Irish citizen that do not hold any clinical information, but simply serve as detailed demographic records. The core component used by Ireland to implement the IHI is OHMPI. HSE saw highly satisfactory match rates upwards of 95 percent using OHMPI. Having a reliable index of patients well serves eHealth Ireland’s goal of offering integrated, patient-centric, and efficient care delivery.

MoH, Brazil Project

Aclair Braga, CEO of CDS, an Oracle Platinum partner and implementation services provider to the Ministry of Health, Brazil, cited some astounding numbers in his presentation. 
Brazil had 320 million records loaded into OHMPI before deduplication. Of these, 20 percent were identified by OHMPI as same-person records. Indexing services are expected to help 43 systems that interact with OHMPI. MoH is in production with OHMPI; go-live was in February 2015. Having OHMPI in place to provide appropriate matching and cross-referencing paves the way to a national EHR for all of Brazil.

Ireland’s population is less than three percent of Brazil’s, but both countries have similar challenges and goals. Oracle is a key partner in each of their eHealth initiatives and takes great satisfaction in being able to work with them to meet their goals!
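The matching and cross-referencing at the heart of a master person index can be sketched in miniature. This is a naive exact-match pass on normalized demographics with made-up records; a production MPI such as OHMPI uses configurable probabilistic matching across many attributes, not this simple keying:

```python
from collections import defaultdict

def dedupe_key(record: dict) -> tuple:
    """Illustrative normalization: trimmed, lowercased name plus DOB."""
    return (record["name"].strip().lower(), record["dob"])

records = [  # hypothetical demographic records
    {"name": "Maria Silva ", "dob": "1980-02-01"},
    {"name": "maria silva",  "dob": "1980-02-01"},  # same person, messy entry
    {"name": "Joao Santos",  "dob": "1975-07-19"},
]

clusters = defaultdict(list)
for r in records:
    clusters[dedupe_key(r)].append(r)

unique = len(clusters)
dupes = len(records) - unique
print(f"{unique} unique persons, {dupes} duplicate record(s)")

# At Brazil's reported scale, a 20% same-person rate on 320M records means:
print(f"{0.20 * 320_000_000:,.0f} duplicate records to cross-reference")
```

Even this toy shows why scale matters: a same-person rate of 20 percent on 320 million records is tens of millions of cross-references, which is why the matching engine, not the key function, is the hard part.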


Health Sciences

Prix Galien 2015 Re-Cap

The progress of the biopharmaceutical industry is roaring ahead at breakneck speed. There are more than 7,000 medicines in development globally, up nearly 30 percent from just three years ago1, with 40 percent of these medicines-in-development having the potential to be Precision Medicine (personalized) treatments1. The industry has created 3.4 million new jobs, and R&D spending in 2014 was 10 percent higher ($51.2 billion) than just five years previously in 2009 ($46.4 billion).1

Data, a Bridge between Industries, a Vector towards Innovation

One of the root causes of this explosion of progress and momentum is a data-centric partnership between the biopharmaceutical and technology industries. More specifically, it’s the meteoric advances in the technology industry’s ability to manage and analyze the huge volumes of data created by the biopharmaceutical and related health industries. The momentum of therapeutic advances is increasing daily. Yet, there is still much work to be done in facilitating stronger biopharma–technology collaborations.

The Galien Forum

Last week, in line with its goal to serve as a catalyst for innovative biotech–technology partnerships that impact human health, the Galien Foundation held its annual Galien Forum and Prix Galien2 Awards Dinner. These events brought together leaders from industry, academia, and government to discuss current pressing health issues and recognize innovation that moves human health forward. During the day-long Galien Forum, on a panel discussing US Healthcare Reform, Jonathan Sheldon, Oracle Health Sciences VP, Product Strategy, offered insightful opinions on how data-driven intelligence could support Precision Medicine initiatives and how data sharing could move population health forward. Other Forum panels included discussions on: Bioengineering for Better Health, autism, microbiota in human disease, the promise of gene therapy, neurodegeneration, and the crisis of heart failure. 
Panelists included a who's who of research and healthcare, to name a few:

Bengt Samuelsson, Chair, MD, PhD, Nobel Laureate, Former President of the Karolinska Institute and Former Chairman of the Nobel Foundation

Margaret Hamburg, Former US Commissioner of the FDA

Jeff Gordon, MD, Robert Glaser University Professor and Director, Genome Sciences and Systems Biology Center, Washington University

Roy Vagelos, Chair, MD, Chairman of Regeneron Pharmaceuticals and Retired Chairman and CEO, Merck

Richard Axel, Chair, MD, Nobel Laureate, Co-Director, the Kavli Institute for Brain Science, Columbia University

Kenneth Frazier, President and CEO, Merck

John W. Rowe, MD, Professor of Health Policy and Aging, Columbia University

Marc Tessier-Lavigne, Chair, MD, PhD, President, Professor, Laboratory of Brain Development and Repair, The Rockefeller University

Throughout the Forum, OHS offered tweets and posts on panelists' comments and opinions. In all, Oracle posted 23 messages on Facebook and 106 tweets throughout the day on Twitter.

Prix Galien Awards Dinner

The Prix Galien Awards Dinner took place at the venerable American Museum of Natural History in Manhattan. The star-studded biotech guest list enjoyed cocktails and hors d'oeuvres in the Museum's Grand Gallery and dinner in its Whale Room. During the cocktail hour, all 42 award nominees were photographed holding the Prix Galien medal. (These photos were tweeted on the Oracle Health Sciences CROAdvantage Twitter channel.) During his welcome address, Steve Rosenberg, Oracle Health Sciences SVP and GM, noted that Oracle's solutions contributed data-driven intelligence to nine out of ten of the Prix Galien award nominees. He also extended warm congratulations to all the award nominees for their tremendous achievements. Also speaking at the dinner, Dr. William C. Campbell, 2015 Nobel Laureate in Medicine, offered anecdotes about his research, and Prix Galien Pro Bono Humanum Award winner, Dr.
Mary-Claire King, provided keynote remarks on breakthroughs in breast cancer.

The 2015 Prix Galien Award winners included:

Best Pharmaceutical Agent: Janssen Biotech & Pharmacyclics' Imbruvica, a new therapy for lymphocytic leukemia and mantle cell lymphoma.

Best Biotechnology Product: Bristol-Myers Squibb's Opdivo® and Merck's Keytruda®. Both products treat melanoma or metastatic non-small cell lung cancer.

Best Medical Technology Product: T2 Biosystems' T2Candida Panel, a diagnostic panel for the quick detection and monitoring of Candida infection and sepsis.

The Galien Forum and the Prix Galien Awards Dinner were not only a triumph for the Galien Foundation and Oracle Health Sciences, a Platinum Sponsor, but also an enriching, educational experience for participants, panelists, and prize nominees alike.

¹ According to PhRMA

² The Prix Galien was created in 1970. It was named in honor of Claudius Galenus (c. 130–200 AD), considered the father of medical science and modern pharmacology because he was the first to use experiments to probe body functions. Worldwide, the Prix Galien is regarded as the equivalent of the Nobel Prize in biopharmaceutical research.



Immunotherapy: The Next Weapon Against Cancer

Just as we’ve recently heard that water on Mars means we can use what’s already there to advance our goals, so too can we use what’s intrinsically available in our bodies to advance our fight against cancer.

Cancer immunotherapy is a treatment that uses the body's own immune system to fight cancer. Though multiple types of cancer immunotherapy are currently available, evidence shows that immune checkpoint inhibitors can cure multiple types of cancer, and this is drawing a lot of attention.

For our immune system to attack tumor cells and leave normal cells alone, it must first differentiate the tumor cells from the normal cells in the body. A checkpoint protein acts as an on/off switch that activates our immune system when it determines whether a cell is a tumor cell or a normal one. Often, to hide from the immune system’s detection, tumor cells produce certain proteins that bind to the checkpoint proteins, turning off the “checkpoint.” It’s as if these tumor cells are wearing camouflage jackets. As a result, tumor cells can rapidly divide and grow without any checks or balances.

Recent scientific efforts have focused on a subtype of immunotherapy that turns the “checkpoint” back on to activate the immune system. In some patients, the immune system starts to attack the tumor cells, and the tumor starts to shrink. In others, the immune system becomes too active and attacks normal organs in the body. Like a fire out of control, the immune system becomes unregulated, and it is deadly.

The scientific community still has many questions about why immunotherapy works only for certain patients, but not all. Preliminary findings from several studies have shown that the efficacy of immunotherapy is related to the tumor immunologic microenvironment and the mutation burden. These studies utilized a combination of DNA sequencing and RNA sequencing to measure the tumor immunologic microenvironment and the mutation burden.
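The mutation burden mentioned above is commonly reported as tumor mutational burden (TMB): the number of somatic mutations per megabase of sequenced DNA. As a rough, hypothetical illustration (the function name and numbers below are not from any specific study), the arithmetic looks like this:

```python
def tumor_mutational_burden(somatic_variant_count, covered_bases):
    """Return TMB in mutations per megabase (Mb).

    somatic_variant_count: somatic mutations called from
    tumor-vs-normal DNA sequencing.
    covered_bases: total bases sequenced with adequate coverage.
    """
    if covered_bases <= 0:
        raise ValueError("covered_bases must be positive")
    megabases = covered_bases / 1_000_000
    return somatic_variant_count / megabases

# Hypothetical example: 350 somatic mutations found over a
# 35 Mb exome panel gives a TMB of 10 mutations/Mb.
print(tumor_mutational_burden(350, 35_000_000))  # 10.0
```

Actual pipelines differ in which variants they count and how coverage is defined, but the per-megabase normalization is what lets mutation burdens be compared across panels of different sizes.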
As immunotherapy becomes a standard protocol for cancer treatment, RNA sequencing is anticipated to play a more important role in cancer diagnostics, like the role that DNA sequencing plays today. The promise of immunotherapy is greater than ever before. In the near future, a next step will be to engineer a fully “personalized” immunotherapy vaccine based on each individual patient’s extracted tumor cells. This fully “personalized” vaccine will then be injected back into the patient to fight against the remaining tumor cells in his or her body. Maybe one day, getting cancer will be like catching the flu!



The Secret Sauce of Healthcare

Is there a secret “healthcare” sauce? Is there a Heinz 57 bottle of magical powers to unlock it? No, I don’t think so. Rather, I believe that healthcare has become so large, so massive, so federally intertwined, that the complexity of the industry creates a lack of information flow between stakeholders and participants. So the few of us who think end-to-end about patient care and administrative/operational needs become “wizards” with special pixie-dust powers.

I have no pixie-dust powers. I have only 20 years of experience standing shoulder to shoulder with primary care providers, specialists, nurse educators, and administrators. My career is the Forrest Gump of healthcare: somehow, I was always at the right spot at the right time. My experience includes population health and military medicine, an academic medical center, cancer specialty care, meaningful use, and now, enterprise-wide analytics. But it’s not the notches on the resume that are the secret sauce. No, it’s the perspective.

I sat shoulder to shoulder with my friends who were seeing 25-plus patients a day, in a clinic backed up with dissatisfied patients and gaps in care. These medical friends wanted solutions. They wanted an “easy” button to help their patients. But they also wanted a system that didn’t make it harder to do the right thing. They wanted analytics to aid them in their care plans, guide them in their chronic condition management, and, more importantly, analytics that could identify the right acuity for their specialty and appointment type.

If we could all think of ourselves as providers trying to do the best we can, what would we want? We would want a provider champion! We would want an ally. We would want someone who heard our needs, understood our limits, and was able to reconcile them with the front administration office. For those of us who can walk in those shoes and balance administrative mandates with clinical throughput, we would have the physicians cheering.
Physicians' cheering translates into broad adoption. This means better outcomes and performance. The patient wins. We all win!


