Pharma IT Blog

Blogging about hot topics related to Pharma and IT.

EudraVigilance – what are the implications?

EMA will launch a new version of EudraVigilance on 22 November 2017 with enhanced functionalities for reporting and analysing suspected adverse reactions. This has implications for all pharmacovigilance systems in the EEA, but the degree depends on your current set-up.

Pharmacovigilance, the science of monitoring medicines for potential adverse events, has traditionally been viewed as a national responsibility. However, with growing globalisation and the acknowledgement that adverse events are not country-specific, several measures have been put in place to ensure that adverse events are reviewed on a larger scale. EudraVigilance is a database implemented by the EMA across the EEA countries to share information easily and review it centrally, ensuring signals are detected as early as possible.

With the implementation of the ICH E2B (R3) format, EudraVigilance needed updating. The new version has now been set up and audited, a go-live date has been set for 22 November 2017, and the system is ready for the next step: testing with individual Marketing Authorisation Holders (MAHs). National Competent Authorities (NCAs) within the EEA should also start their testing, but I will not touch further on this in this blog post.

Today, most MAHs and Sponsors of clinical trials that operate within the EEA countries have a database set-up that allows them to communicate with the EudraVigilance system. Larger MAHs/Sponsors would normally use a safety database that can submit individual case safety reports (ICSRs) directly via the E2B gateway, whilst smaller MAHs/Sponsors would normally have two parallel systems: one system that allows them to compile aggregated reports and perform signal detection (but cannot do E2B reporting), and a second system consisting of direct access to a unique EudraVigilance account for entering SUSARs from clinical trials and/or reportable post-marketing ICSRs.


Figure 1 Overview of the EudraVigilance system from http://www.ema.europa.eu/ema/index.jsp?curl=pages/regulation/q_and_a/q_and_a_detail_000166.jsp

 

The new version of EudraVigilance going live on 22 November 2017 does not require MAHs and Sponsors to change their basic set-up. However, as the system does change significantly, it is important to make a Change Management Plan focusing on People, Technology, Process and Information.

Large MAHs

For large MAHs and Sponsors that today report directly via their safety database, EudraVigilance will support reporting in the E2B (R3) format while still maintaining the possibility of reporting in the E2B (R2) format. Be aware, however, that the download functionality will only be available in the E2B (R3) format. If you have products on the market, you must therefore be compatible with this version to receive all European ICSRs, as they will no longer be forwarded by the national competent authorities.

It is therefore strongly recommended to discuss with your database provider how to ensure that the database supports this format. All large database providers have known about and prepared for the E2B (R3) format for a long time, so you should be able to jump to the testing phase without much ado. For the business rules regarding ICSR reporting in the E2B (R2) and (R3) formats, see the following:

R2: Note for guidance – EudraVigilance Human – Processing of safety messages and individual case safety reports (ICSRs) (EMA/H/20665/04/Final Rev. 2)

R3: EU Individual Case Safety Report (ICSR) Implementation Guide (EMA/51938/2013)
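
For MAHs building or validating a conversion step between the two formats, the sketch below illustrates the idea of a pre-submission completeness check keyed on E2B (R3) data element numbers. The element numbers and names are our reading of the guides above and should be verified against the EU ICSR Implementation Guide (EMA/51938/2013) before use; this is a minimal illustration, not a validated tool.

```python
# A minimal pre-submission sanity check for an ICSR held as a plain dict keyed
# by E2B(R3) data element numbers. Element numbers/names below are assumptions
# to be verified against the EU ICSR Implementation Guide (EMA/51938/2013).
REQUIRED_R3_ELEMENTS = {
    "C.1.1": "Sender's (case) safety report unique identifier",
    "C.1.2": "Date of creation",
    "C.1.4": "Date report was first received from source",
    "C.1.5": "Date of most recent information for this report",
}


def missing_elements(icsr: dict) -> list:
    """Return the required data elements that are absent or empty."""
    return [
        f"{number} ({name})"
        for number, name in REQUIRED_R3_ELEMENTS.items()
        if not icsr.get(number)
    ]


if __name__ == "__main__":
    draft = {"C.1.1": "DK-COMPANY-2017-000123", "C.1.2": "20171122"}
    for problem in missing_elements(draft):
        print("Missing:", problem)
```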

Testing can begin at any time, since EMA has completed its internal tests. To take part, it is necessary to have a test database and to be able to send in the E2B (R3) format. If E2B (R3) is not fully operational yet, the test should be done with a conversion tool, and as soon as the MAH is ready to report directly in E2B (R3) the test should be redone. For the testing procedure, see the EU ICSR Implementation Guide, chapter I.C.2.1.5. The test is with EMA only (no NCA testing).

In case of system failure, it is no longer permitted to send CIOMS forms via fax or other methods; the MAH will need to report electronically as soon as their system is available again. If the system failure occurs in EudraVigilance, the MAH has two calendar days to re-submit after the failure has been resolved, and any late cases will be excluded from compliance calculations.

Small MAHs and Sponsors of clinical trials

For smaller MAHs and Sponsors that today report directly in the EudraVigilance system by re-entering the ICSRs, it is important to use the time from now until 22 November 2017 to get accustomed to the new functionalities, and for that purpose the EMA has already developed several tools that should help you do just that.

EMA is hosting a series of face-to-face training days and information days. On top of this, EMA has developed EudraVigilance and pharmacovigilance e-learning videos, guidance documents, user guides and webinars that are available on the webpage free of charge (link).

The main changes are that the EVWEB application will be rewritten to ensure it supports additional browsers (Firefox, Chrome, Internet Explorer version 10 or above), which will hopefully eliminate a lot of frustration going forward, and that all new ICSRs will be in the E2B (R3) format only, meaning a new user interface, a changed data structure and additional data elements.

It is therefore strongly recommended to evaluate which information is not in the current version but will be required in the new version of EudraVigilance, and to ensure this information is collected routinely.

Other changes

Apart from the implications for submission of ICSRs (including SUSARs), there are numerous other changes, such as changes to duplicate detection, lack of support for submitting to other recipients via the E2B gateway, the process for medical literature monitoring (MLM), etc. Signal detection should be strengthened with the new version of EudraVigilance, giving MAHs access to better signal detection and analytical functions. This could be a good opportunity for smaller MAHs to strengthen their PV system.

For all types of MAHs and Sponsors of clinical trials, it is strongly recommended to perform a gap analysis of existing SOPs, working instructions, etc. to ensure that the new functionalities are utilized and, most importantly, that your pharmacovigilance system remains compliant with EMA's expectations – also after 22 November 2017.

Pharma IT has an expanding pharmacovigilance team consisting of senior and junior pharmacovigilance professionals, as well as expertise in pharmacovigilance database system set-up and maintenance. Feel free to reach out to us regarding any issue related to E2B (R3) or the EudraVigilance update.

 


EU General Data Protection Regulation (GDPR) and Clinical Trials in Pharmaceutical Industry

The EU General Data Protection Regulation will become effective on 25 May 2018. Pharma IT offers several services with respect to the implementation of GDPR in pharmaceutical companies.

All companies within the pharmaceutical sector will need to assess their use of personal data, whether it relates to employees, partners, suppliers or patients. Not all processes and systems that handle personal data will be in scope of the GDPR, but without a basic assessment companies risk incurring the rather large fines that are part of the regulation.

In this article, we will discuss the GDPR with respect to patient data in clinical trials in the pharmaceutical industry. The article will not cover all aspects, but it will give an introduction and discuss, by example, some of the initiatives that might be needed to comply with the regulation.

Figure 1 shows an overview of the clinical trial route map. On the figure we have marked 3 places (green stars) where we believe that personal patient data should be considered with respect to GDPR. These are not the only parts of the clinical trial process that will need to be handled with respect to GDPR, but they will be the scope of this article.

Before continuing, let's discuss some basic elements of the GDPR. Below we assume basic knowledge of the GDPR definitions (Article 4 of the regulation).

Basic elements

Scope: The GDPR applies to processing of personal data wholly or partly by automated means, and the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system (Article 2).

The above wording implies that all IT systems handling personal patient data within the clinical trial process are in scope of the GDPR. That said, handling of personal patient data in documents can also be in scope, depending on the process they are part of and whether they will ultimately be stored in a filing system.

Processing of special categories of personal data: The GDPR prohibits processing of personal data revealing race, ethnic origin, genetic, biometric and health data, with exceptions where the processing is authorised by a law providing appropriate safeguards or the processing is necessary to protect the vital interests of the data subject (Article 9).

Within clinical trials, as well as the pharmacovigilance process, the mentioned data types may have to be stored and processed, but with the exceptions mentioned this should still be possible in the execution of clinical trials. The above elements should be part of assessing the GDPR against company processes and should eventually be part of the Data Protection Impact Assessment (DPIA) for the individual process/IT system.

Right to erasure: The GDPR provides a right for the data subject to obtain from the controller the erasure (removal) of personal data (Article 17), unless the processing of the data falls under the previously mentioned Article 9 or under Article 6, "Lawfulness of processing", which allows processing "for compliance with a legal obligation to which the controller is subject".

Given that the personal data stored as part of the clinical trial is relevant for the study in question, these articles remove the need for erasure of personal data when, for example, the Clinical Trial Regulation requires clinical trial data to be stored for 25 years after the trial has ended.

Records of processing activities: The GDPR states that each controller and the controller's representative shall maintain records of all processing activities and procedures under their responsibility (Article 30).

The above requirement will most likely be the major task for most pharmaceutical companies. Via the Data Protection Impact Assessment (DPIA), systems and processes should be identified and assessed. The DPIA should form the basis for identifying the required activities – one of those being implementation of a documentation system, including guidance for all parties involved on how they should behave when processing personal data, if this is not already in place in the company.

Data Protection Impact Assessment: The GDPR states that a Data Protection Impact Assessment shall be required (Article 35) if processing on a large scale of the special categories of data referred to in Article 9 (race, ethnic origin, genetic, biometric and health data) takes place.

It is not defined in the regulation what is meant by large scale, but unless clinical trials are very small, they will most likely be in scope.

Data Protection Officer: The GDPR states that a Data Protection Officer shall be designated in any case where the core activities of the controller or the processor require regular and systematic monitoring of data subjects on a large scale (Article 37).

Again, it is not defined in the regulation what is meant by large scale, but unless clinical trials are very small, a Data Protection Officer will most likely be required.

The GDPR contains more elements than mentioned above, but for now we will continue this article by discussing some of the more specific process examples.

Process examples

Clinical Trials

Figure 1: clinical trial route map (source: Clinical Trials Toolkit, National Institute for Health Research)

The three examples selected in the figure are the following:
1. Informed Consent
2. Statistical Data Analysis and Clinical Trial Reporting
3. Safety Reporting

Informed Consent

Informed consent is already today an integrated part of executing clinical trials. The GDPR will impose additional requirements on the informed consent process. Chapter III "Rights of the data subject" of the regulation contains many requirements – specifically Article 13, "Information to be provided where personal data are collected from the data subject" – that must be incorporated into the informed consent process.

Statistical Data Analysis and Clinical Trial Reporting

During the cycle of the clinical trial, personal data is collected for statistical data analysis and clinical trial reporting to support the original purpose of the trial. This process should be assessed for compliance with GDPR.

Execution of clinical trials is usually handled in a complex set-up where the pharmaceutical company/sponsor contracts with CROs that handle the contact with hospitals and healthcare professionals that enrol patients in the study. The sponsor can also handle the contact with hospitals and healthcare professionals directly, and within a given study the set-up can vary from country to country or region to region. Even though the patient data collected is pseudonymised*, company representatives might, when monitoring the trial, have direct access to personal information or to combined pseudonymised* data that in some cases might be sufficient to identify individuals. In all circumstances, it is the responsibility of the sponsoring company to ensure that personal data is handled as per GDPR requirements, either in a joint controller set-up (Article 26) or by regarding the CROs, hospitals and healthcare professionals as processors working on behalf of the sponsoring company (Article 28).
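
To make the pseudonymisation concept concrete (see the definition in the footnote below), here is a minimal sketch of the mechanism: direct identifiers are replaced by random tokens, and the link back to the subject is kept only in a separately stored and protected key table – the "additional information" of the GDPR definition. The data structures are illustrative, not a production design.

```python
import secrets

# The "additional information" of the GDPR definition: the key table linking
# pseudonyms back to subjects. It must be stored separately and protected.
key_table = {}


def pseudonymise(subject_id: str) -> str:
    """Replace a direct identifier with a random token; the link to the
    subject is kept only in the separately stored key table."""
    for token, known in key_table.items():
        if known == subject_id:
            return token  # the same subject always gets the same pseudonym
    token = secrets.token_hex(8)
    key_table[token] = subject_id
    return token


def reidentify(token: str) -> str:
    """Re-identification is only possible with access to the key table."""
    return key_table[token]


record = {"subject": pseudonymise("DK-patient-19620304-1234"), "ae": "headache"}
print(record)  # the record alone no longer identifies the patient
```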

Safety Reporting

When running clinical trials or managing marketing authorisations for pharmaceutical products, companies must ensure proper handling of adverse event reporting.

Individual adverse event reports are personal data, and normally the data should be pseudonymised*/anonymised (which is in line with Article 6), but from time to time mistakes happen and personal data is received as part of receiving the adverse events (for example laboratory reports or similar with patient identifiers sent in error). It is important that guidelines for how such information is handled are in place, and companies should also consider whether safety measures should be established for the fax, email, shared folder or IT system that is used when adverse events are received and when the received information is archived. As an example, processes for revoking user access to shared email accounts need to be in place, and user access should be reviewed at regular intervals.

Besides the individual adverse events, companies are also obligated to produce several types of aggregated reports and to perform signal detection. The data is still pseudonymised*/anonymised, but in some cases companies might, in the process of generating aggregated reports or performing signal detection, be using Excel or similar extracts with data from the received adverse events. The data, or intermediates in the process, might be stored on local drives, shared drives, SharePoint sites or similar. With respect to GDPR, it should be considered whether the safety and security measures regarding the handling of this data are sufficient – again the focus should be on user access management, so that only the right people have access to the data at any given point in time.
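
As a small illustration of the user access reviews mentioned above, the sketch below flags users of a shared safety mailbox whose access has not been re-confirmed within a review interval. The user names and the 90-day interval are illustrative assumptions; set the interval according to your own SOPs.

```python
from datetime import date, timedelta

# Hypothetical access list for a shared safety mailbox:
# user -> date the access was last confirmed as still needed.
REVIEW_INTERVAL = timedelta(days=90)  # illustrative; define per your SOP
access_list = {
    "alice": date(2017, 9, 1),
    "bob": date(2017, 2, 15),  # e.g. left the PV team months ago
}


def overdue_reviews(today: date) -> list:
    """Return users whose access confirmation is older than the interval."""
    return [user for user, confirmed in access_list.items()
            if today - confirmed > REVIEW_INTERVAL]


print(overdue_reviews(date(2017, 10, 1)))  # -> ['bob']
```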

Recommendations:
With respect to GDPR, Pharma IT recommends that the following activities are planned and executed:

  • Create a process overview of departments handling personal data
  • Map the personal data processed into the process overview
  • Map the existing security measures and compliance to the GDPR
  • Identify gaps to the GDPR in the created process and data flow overview
  • Create a Data Protection Impact Assessment document based on the above collected information
  • Initiate projects/tasks to close the identified gaps (if any)

In many cases, existing systems and processes will be sufficient and compliant with the GDPR, but the full set of processes and IT systems will still need to be reviewed in view of the GDPR, and where gaps are identified these should be closed before 25 May 2018.

* the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information


Conclusions on SAP ATTP implementation

We have been through a long and demanding project and are currently in hyper care.

I have tried to summarize the key points from the project; I hope they can help if you are planning a similar project.

It's been more than a year since my last blog post (http://www.pharmait.dk/index.php/blog/conclusions-on-proof-of-concept-sap-attp), and it's time to give you an update on the project which I have had the privilege of leading.

Last time we had just finished the POC for SAP ATTP and had found that the solution did indeed meet the customer's requirements. So, since then we have been in implementation mode.

We are now live with the solution and have been so for a little over a month.

To summarize our learnings: start with end users early. Serialization requirements take some time to familiarize yourself with, and the sooner you get input from the users, the easier it is to incorporate it in the design. A universal truth from all IT projects that also applies to serialization projects.

Project phase

The implementation of SAP ATTP is not in itself an overly complex IT project. The serialization requirements – the ones already known – are straightforward, and SAP has ensured that ATTP meets all of them. It is even part of their license model, so there are no big surprises on the requirements side.

Solution Design

It's important to know your SAP ECC implementation. If you are to implement SAP ATTP successfully, you will most likely need to update your RF scanner transactions.

The only reason not to do this is if you are exclusively producing for countries which do not have serialized & traced requirements. Even in that case, I would strongly recommend updating your RF scanner transactions to meet these requirements later.

In my opinion, serialization requirements must be viewed as a funnel: all markets begin with the 2D Data Matrix, then shortly after transition to serialization, then aggregation and, in the end, reporting. As you are already implementing SAP ATTP, my recommendation is to ensure that ECC is ready as well. Otherwise you will need to run another project when your first market requires aggregation.

So, with these considerations out of the way, we updated the RF transactions to support all the existing warehouse processes for serialized and traced products. That means that whenever we scan a pallet (or shipper box, or even bundle), ECC will check against ATTP whether the quantity in ECC matches the serials in ATTP.

This will of course result in less flexibility in warehouse and production; as an example, they can no longer use MIGO for changing warehouse status. They must use our custom-built transactions.

However, it also means that we have complete control of our serialised and traced products from process order release until goods issue.

That has reduced the need for reconciliation reports between our L3 system and ECC, as we now have complete control of what is produced and transferred from L3 and what is received in SAP.
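
As a conceptual illustration (in Python rather than the actual ABAP RF transaction), the check described above boils down to comparing the ECC quantity for a handling unit with the number of serials ATTP has aggregated to it. Identifiers and quantities are made up for the example.

```python
# Illustrative data: pieces per handling unit in ECC, and the serials ATTP
# has aggregated to the same handling unit.
ecc_quantity = {"PALLET-001": 4800}
attp_serials = {"PALLET-001": ["SN%06d" % i for i in range(4800)]}


def pallet_consistent(handling_unit: str) -> bool:
    """True if the ECC quantity matches the serial count held in ATTP."""
    return ecc_quantity.get(handling_unit, 0) == len(
        attp_serials.get(handling_unit, []))


if pallet_consistent("PALLET-001"):
    print("OK to post the goods movement")
else:
    print("Block the movement: quantity/serial mismatch, reconcile first")
```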

Interfaces

See the picture below for the interfaces we developed.

 Integration

We developed interfaces from ECC to the L3 system for transfer of material master data and process orders. This is not related to serialization, as these could just as well be entered manually, but it eases operation with less risk of manual errors.

We also developed a serial number request/response interface from L3 to ATTP; this is a synchronous interface. We went live with ATTP 1.0 – in 2.0 this can be done asynchronously, which in most cases is the preferable solution.

We of course also have a commissioning interface between L3 and ATTP. For serialised materials it runs after the production of the batch has been completed; for serialized and traced materials it runs after each pallet. This is necessary, as we need to start warehouse transactions before the production of the full batch is complete.

The final interface we developed was towards our 3PL in Korea, as Korea has requirements for serialised and traced products. This interface sends the full hierarchy of serials to our partner in Korea. It is the most complex interface, as it needs to handle multiple interactions, for instance samples, scrap and returns.

Validation approach

IQ in SAP projects is simple: we checked the transport list in each environment we went through, as we were not changing environment configuration.

OQ: we did a full OQ of all functional requirements, including all the scanner transactions and the interfaces on our side. We did not OQ the interface on the 3PL side, but we did a parallel OQ of the L3 system. This part was delayed and ended up postponing our PQ. The reason for the delay was that the L3 supplier's development effort to create the interface took longer than anticipated: they only had file-exchange interfaces as standard, and since we needed a synchronous interface for the serial number request/response, we needed a web service.

After the OQ we conducted a full PQ: we had a production line available and tested a full process flow with non-serialized, serialized, lot-based, serialized & traced and non-finished goods.

This revealed a challenge in setting up master data: as we had numerous production sites but only one L3 site server, we could not test materials from all the sites.

We also had issues getting lot-based material tested and ended up descoping this from the PQ, as we currently have a manual solution in place and the serialization requirements in the USA only become effective from November 2017.

The PQ was the first time our end users tried the system hands-on. This process showed that we needed a lot more training in serialization in general, and in SAP ATTP specifically.

It also resulted in some minor changes to the design we had made, as well as changes to local procedures.

I would strongly recommend doing an end-to-end test with a real-life line if possible. This will significantly reduce the number of issues found afterwards. Master data and authorizations especially should be focus areas.

Cut over

We planned a technical go-live 3 weeks before the functional go-live. This was an installation of SAP ATTP and an implementation of the SAP notes needed in ECC.

This was to give us time to configure ATTP outside a closing window.

On the go-live weekend, we started installation on the 3 site servers Friday evening and began the SAP installation Saturday afternoon. This was to keep SAP open for as long as possible, and because we needed to run the L3 installation 3 times, for which we needed more time.

We experienced a lot of issues during the cut-over, primarily because of master data and authorizations. We saw issues with the materials that had open process orders; this should normally not be an issue, but if you have the option, I would ensure everything is closed before starting the go-live activities.

Hyper care

Since go-live, we have had a high double-digit number of incidents, and about 30% are still open.

The major incidents we have had have been with SAP (but not related to serialization) and with the L3 system. We have had incidents where batch data was not transferred from ECC to ATTP for serialized materials. As far as we can analyze, this is an issue within ATTP, and we expect an SAP note to fix it.

We have had a lot of issues with master data and authorizations, and some issues with the interfaces from ECC to L3.

I would strongly recommend having people on site at all production sites when you go live, as working with serialization requires significant knowledge of the changed processes.


How to bring Pharma and Regulated Data to the cloud

Many companies, including pharmaceutical manufacturers, are changing their business model to focus more on core business capabilities and, in doing so, are outsourcing more or less of their IT business and processes. As a consequence, regulated data is moved out of the direct internal control of the company.

Discussing the topic of cloud computing cannot be done without considering security, risk and compliance. Cloud computing does pose challenges and represents a paradigm shift in how technology solutions are delivered.

The cloud may be more or less secure compared to the in-house environments and security controls of your organization, depending on any number of factors, including technological components; risk management processes; preventative, detective and corrective controls; governance and oversight processes; resilience and continuity capabilities; defence in depth; and multifactor authentication.

Within general security frameworks, e.g. ISO 27001/27002, the concepts of CONFIDENTIALITY, INTEGRITY and AVAILABILITY (CIA) are the cornerstones. They are equally important in the pharmaceutical business and should be included in a risk-based approach, meaning that IT security controls are implemented in a way that matches the risks they mitigate.

Data integrity is critical to regulatory compliance and is the fundamental reason for 21 CFR Part 11 and EU GMP Annex 11, applying equally to manual (paper) and electronic systems throughout the data lifecycle. Data integrity enables good decision-making by the pharmaceutical business, and data integrity risk should be assessed, mitigated and communicated in accordance with the principles of quality risk management. It is a fundamental requirement of the pharmaceutical quality system. The data lifecycle refers to how data is generated, processed, reported, checked, used for decision-making, stored and finally discarded at the end of the retention period.


Pharma IT proposes a five-step approach for your company to assess how to bring pharma and regulated data to the cloud in a compliant way, securing CONFIDENTIALITY, INTEGRITY and AVAILABILITY and mitigating risk accordingly:

  1. Identify the data (could be data classification)
  2. Perform a specific cloud risk assessment (incl. audit)
  3. Determine the level of confidence in the Cloud Service Provider handling CIA for the required data
  4. Identify controls covering the entire data lifecycle
  5. Provide proof of compliance for the entire data lifecycle
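
As a toy illustration of step 1, a data classification can be as simple as rating each data set on Confidentiality, Integrity and Availability and letting the highest rating drive the level of control required from the Cloud Service Provider. The data sets and the 1-3 scale below are illustrative assumptions, not a standard.

```python
# Rate each data set on C, I and A (1 = low, 3 = high); the highest rating
# drives the control level demanded of the Cloud Service Provider (CSP).
datasets = {
    "batch records":           {"C": 2, "I": 3, "A": 2},
    "patient safety cases":    {"C": 3, "I": 3, "A": 2},
    "public product leaflets": {"C": 1, "I": 2, "A": 1},
}

for name, cia in datasets.items():
    level = max(cia.values())
    print(f"{name}: CIA={cia} -> control level {level} required of the CSP")
```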

When the assessment of data and risk has been completed and possible mitigating activities have been described, the project can commence preparing for the qualification/validation of the cloud service. This is in principle a basic discipline for a pharmaceutical business, but dealing with a Cloud Service Provider does require a slightly different approach.

When a pharmaceutical business acquires services where critical data is processed and/or resides outside internal control, the general quality level, validation activities and IT security planning must be maintained at an agreed and desired level. The approval of the validation report should focus on conclusions that prove that the service is fit for its intended use, based on the controls compared with the GxP risk.


Pharma IT spoke at Knect365 Global Pharmaceutical Regulatory Affairs Summit about where to look for the benefits of IDMP

Pharma IT's Jakob Juul Rasmussen, IDMP Program Manager and IDMP Subject Matter Expert, today gave a presentation on where to look for the benefits when implementing IDMP in a pharmaceutical company.

The conference took place at the Maritim proArte Hotel in Berlin, Germany, from 18 to 20 October 2016.

In short, Pharma IT recommends that larger pharma companies start by looking at the end-to-end processes within the company. We recommend looking for areas of data duplication and finding solutions that electronify the data, so that it can be used across the different business areas in the company instead of being duplicated.

Currently we see examples in different companies of data being generated in one document in one part of the business and then being duplicated to another document in another part of the business – all being manually updated upon authority approval of variations on the different Marketing Authorisations. Examples of such documents could be the specification information in Module 3 of the CTD documents being duplicated in the purchase/quality specifications and material specifications used in the Product Supply area.

We also see a potential benefit of using data generated in Product Supply downstream in the company. Examples of such data could be data in labelling documents that can be used for quality checks or performance checks on when labelling updates are actually implemented in the market. Another example could be financial data that can be used for regulatory checks to ensure that what is sold in a given country is aligned with the approved regulatory baseline.

Furthermore, we see a benefit in connecting the business through availability of data. Imagine making the regulatory baseline readily available to people in any function – for example QA or product supply personnel – through the item or SKU, linked to the registration, which in turn is linked to the submission or the departmental documents/IT systems containing the source data for any registration baseline. Adding the ability to track in "real time" the status of changes, and the data impact a change has, will have or has had historically, will make the search and navigation for information faster, and many hours spent searching for information will be saved throughout the company.

Should you need more information, please don't hesitate to contact us – link.


Getting ready for Serialization: Conclusions on Proof of Concept with SAP ATTP

Conclusions on Proof of Concept: SAP ATTP

In the last blog post (http://www.pharmait.dk/index.php/blog/getting-ready-for-serialization-sap-att-system-requirements) we described how to prepare and scale environments for the installation of SAP ATTP.

Now we can share our experience from the SAP ATTP Proof of Concept.
The Proof of Concept (POC) was initiated to evaluate whether SAP ATTP could meet specific business requirements and would be a better match than OER/AII. For the impatient reader: the Proof of Concept was successful, and we are now continuing into the execute phase of the project.

The aim of the Proof of Concept was to show that SAP ATTP supports a full business flow:

  • Manage serial numbers
  • Maintain track of objects and events
  • Ensure integrated warehouse processes

Due to limited time the following areas were de-scoped:

  • Integrate with production site solutions
    • De-scoped; however, we built a simulation tool with the tools supplied in SAP ATTP.
  • Integrate with 3PLs
    • De-scoped in the Proof of Concept
  • Perform regulatory reporting
    • De-scoped in the Proof of Concept

Focus was on the integration between ECC and ATTP, and the functionality in ATTP.

The first priority was to ensure that we had materials in our sandbox environment that we could use for testing. We identified a suitable material for testing purposes and marked it as serialized and traced.

Then we transferred the material from ECC to ATTP and defined and assigned the serial number formats and ranges.

Following this, the toolkit for warehouse integration was tested by building one scanner transaction that enabled us to receive information from ECC and transfer that event to ATTP – so that the serials automatically changed status in ATTP and were marked as Shipped when goods issue was posted in ECC.

We also tested the ESC flow (China), where the serials and master data are imported into ATTP.

Conclusion

ATTP offers a better and more flexible solution for handling serialization requirements than the old AII/OER. By building a basic solution, with competent support, we were able to create a full business flow of serials. We have seen the integration between ECC and ATTP function in real time.

My next blog post will touch on organization and project approach when implementing SAP ATTP.

Additionally we can add that SAP is working to ensure that ATTP meets the Korean requirements for reporting.


Should data or document be leading as a source for IDMP data?

IDMP is not only about compliance with the EU Commission regulation; it is also a journey towards better and more data control across business areas in the pharmaceutical industry.

As an example, the IDMP data model for authorized medicinal products does not cover all data in the SmPC document, and not all IDMP data is in the SmPC. Still, for the data that is present in both the SmPC and the IDMP data model, it is a valid question to ask which source should be leading – data or document?

Before IDMP, the document has been leading, so the question is whether this should change with the implementation of IDMP.

The answer is it depends…

For the sake of simplicity, we will continue this blog post by addressing only the example of the SmPC with respect to the data in the authorized medicinal product IDMP data model.

If your company has implemented or will implement structured authoring as part of implementing IDMP, data and document will be synchronized from the point in time where the text or a specific value in the document is created, changed or updated and tagged. Normally, your structured authoring system will also propose which other documents use the same text section or the specific data value that was changed and should be considered for a similar update. But structured authoring alone does not ensure alignment; the regulatory workflow will also need to be considered.

As an example, let's discuss a variation or change that updates the clinical particulars with an additional adverse event/undesirable effect. In the regulatory workflow, the SmPC is updated (e.g. from version 3 to version 4), reviewed and approved from a document management perspective, and sent to the health authorities in a variation package for regulatory approval. The currently approved SmPC in the specific country is version 3, and version 4 is pending approval from the authorities.

Even though data and document are synchronized due to structured authoring and the data can be extracted automatically, it still needs to be controlled whether to extract the data from version 3 or version 4 of the specific document in that country. The trigger will in this case be the authorities' approval of the version 4 SmPC. A possible source for this information could be your Regulatory Information Management System (RIMS) or a similar system that contains a baseline of your regulatory approvals in each country. The system would need to contain a full list of your IDMP-relevant documents as a baseline of your regulatory approvals with respect to IDMP, and the baseline would also need to be updated when changes/variations are approved by the authorities.

If the regulatory baseline is controlled, structured authoring is implemented and both processes are well aligned, data can be considered leading.
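
A toy model of the trigger logic described above: a RIMS-like baseline records which SmPC version the authority has approved per country, and IDMP data is extracted only from that version, never from a pending one. All names and structures are illustrative.

```python
# RIMS-like baseline: which SmPC version is approved (and which is pending)
# per country and product. Purely illustrative data structures.
rims_baseline = {("Denmark", "ProductX"): {"approved_smpc_version": 3,
                                           "pending_smpc_version": 4}}

smpc_documents = {("ProductX", 3): {"undesirable_effects": ["nausea"]},
                  ("ProductX", 4): {"undesirable_effects": ["nausea", "headache"]}}


def idmp_extract(country: str, product: str) -> dict:
    """Extract IDMP data from the authority-approved SmPC version only."""
    version = rims_baseline[(country, product)]["approved_smpc_version"]
    return smpc_documents[(product, version)]


print(idmp_extract("Denmark", "ProductX"))  # version 3 data until v4 is approved
```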

If data extraction cannot be automated via solutions like structured authoring, a manual data entry solution must be made available. Referring to an earlier blog post discussing the architecture choice, the following options are available:

  1. EMA web interface
  2. RIMS
  3. IDMP application
  4. Data staging area (MDM solution)

If data is to be leading, it would mean that data is created before, or at least at the same time as, the document is created or updated. This would push the data maintenance effort onto the author of the document, who would have to maintain the IDMP data before or simultaneously with creating or updating the document.

Without becoming too detailed: options 1-3 can, to our knowledge, not handle multiple unapproved versions of your regulatory record for the different regulatory variations that might be in the process of being approved at health authorities at the same time. Since the architecture does not support the functionality needed for data to be leading, and data cannot be updated in the respective systems until the actual document is approved by health authorities, in this case the document is leading.

If you are implementing an option 4 architecture, you can choose whether data or document is leading. As data would then have to be created by the author, choosing data to be leading would mean that the manual data entry area in the data staging area would need to be made available to the author upon creation or update of the document. For the data that is to be sent to IDMP via source systems like a RIMS, data updates would need to be controlled via tracking of regulatory variations. In either case it would mean that a full IDMP record matching the updated data set of the variations becomes available in the data staging area – in this case data is leading.

For an option 4 architecture you can also choose to make the document leading; in this case the data can be synchronized either when the document is approved internally or when the change/variation approval is received from the health authorities.

As discussed above, if data is leading, updating IDMP is the responsibility of the document author. If the document is leading, you can choose who is responsible for updating the IDMP data. In the example of the SmPC, it will depend on where you want to place the responsibility and the effort of extracting the data and entering it into the manual data entry solution or the source systems. A link should also be made to an earlier blog post discussing the maintenance strategy when collecting data from documents.

When deciding, the current workload, system complexity and the time needed to ensure IDMP data capture must be considered. Possible options could be:

  • SmPC author in the regional office of the specific country
  • Global Regulatory Responsible for the specific product
  • Supporting team responsible for data capture

Should you place the responsibility with someone not involved in writing the document, it is probably best to wait until the document has been reviewed and approved from a document management perspective. Whether they are informed manually or automatically when a new IDMP-relevant document is created is not so important, as long as they are able to initiate extraction upon approval of the document via a solid process.

Whether you choose structured authoring, capture data early or late, and have manual data capture done by the local office, global regulatory personnel or a data capture team depends on your system landscape, your current organization and the company's preferred choice with respect to workload and data responsibility.

The disadvantage of early data capture is that the data also has to be maintained. It is never a given that health authorities approve the first proposed text for a variation, so if things change, the document – and the extracted data – will need to be updated.

The advantage of early data capture is probably a simpler process and early availability of the data.

What you choose and where you place the responsibility depends on your preference for having data or document leading.


IDMP data collection from documents – Strategies towards IDMP compliance.

Once the IDMP data analysis has been completed and you have a good overview of where your IDMP data resides, it is time to start considering how to collect and integrate the different data sources into a complete IDMP data model.

This article assumes that data will be extracted from the sources identified in the IDMP analysis. A different strategy towards IDMP compliance is of course to implement new processes or IT solutions that provide an even greater number of data fields electronically than has been identified in the IDMP analysis. One of these solutions could be structured authoring – implemented in a way where IDMP-relevant data becomes easily available and can be transferred to the IDMP data model via interfaces to the structured authoring application. Implementing structured authoring is not a simple task, but in the long run it is the solution that best minimizes the total effort of maintaining the IDMP baseline. There are vendors on the market offering solutions for structured authoring, so for now we will refer to these vendors for more information.

The figure displays a possible conceptual result of your IDMP data analysis for authorized products (the example is constructed).

IDMP data analysis

In the figure the analysis has categorized the data fields into the following categories:

Mapped to IT system: The IDMP data field has been mapped 1:1 to a data field in an IT system; the data field might need to be transformed or changed to match the future Controlled Vocabularies of IDMP. It might also be that the IDMP value will have to be calculated, based on rules applied to one or more data values in the IT system, in order to derive the IDMP data value.

Depending on the chosen IDMP architecture (which has been discussed earlier on this blog – link), the extraction of these data will be done either manually or via interfaces, with or without specific calculation rules, and merged into the IDMP data model to be used for reporting.

Field available in source system: The IDMP data field has been mapped 1:1 to a data field in an IT system, but the data field is not currently being populated with data.

For the IDMP fields that can potentially be inserted into an IT system, it needs to be evaluated whether the existing processes/SOPs in the company need to be changed to ensure capture of the additional data fields. If capturing the data in the IT system is not the right solution, the data categorization should be changed to "No source identified".

Mapped to document: The IDMP data field has been mapped to data found in a document. For authorized products, the main documents will probably be the Summary of Product Characteristics (SmPC), CTD documents and other documents that describe the content of the packaged medicinal product in detail.

For the data fields found in documents, the data will have to be extracted via manual or automated processes, depending on the level of standardization of the individual document templates within the company. For most companies, automated extraction will probably be too complex and cumbersome to implement compared to hiring a data extraction team to handle the extraction. But the real issue with extracting data from documents is not the extraction itself; it is the subsequent maintenance of the data from the point in time it is extracted until the IDMP reporting actually takes place. For most companies, the number of registrations will be in the hundreds or even thousands, so planning for a data extraction track in your IDMP program with a duration of 6-18 months is very likely.

At Pharma IT we see the following possible strategies for maintaining the data extracted from documents:

Strategy 1 – Triggered by author
Description: Authoring procedures/SOPs for the relevant identified documents are changed/updated, so that the IDMP data capture team is informed every time the relevant documents are updated.
Comments: If the author-based trigger can be implemented as an automated trigger, this might be a suitable solution. But if the trigger is manual, the process cannot be categorized as a solid process, as there would be a high likelihood that such a trigger will be missed from time to time, leading to an IDMP baseline possibly out of compliance.

Strategy 2 – Triggered by the IDMP data collection team
Description: The IDMP data capture team implements a procedure for checking whether a document has changed/been updated compared to the IDMP baseline data extraction.
Comments: An IDMP data collection team trigger has the advantage of not influencing the existing business processes, but since documents like Company Core Data Sheets and CTD documents do not change a lot for specific products, it also means that a lot of documents will be checked regularly even though they have not been updated. In some document management systems it is possible to bookmark documents so that you will be alerted to changes, which might ease the implementation of such a process. But bookmarking means that you will be informed of all changes to a certain document, including the ones that are not relevant for the IDMP baseline.

Strategy 3 – Triggered by an automated check against the document management system(s)
Description: An automated check of the IDMP baseline is implemented against the available approved documents in the document management system(s).
Comments: If your company has chosen an MDM solution as part of your IDMP architecture, it is possible to implement an automated trigger against the document management system(s). In the MDM solution, a data capture table can be created that also contains data about the authorized product, the registration, the document and the document section used to establish the IDMP data baseline. Using the interfacing capabilities of the MDM solution, data from the document management system – together with appropriate rules to ensure only documents of interest are considered – is used to create an automated message to the IDMP data capture team once a document that relates to the created IDMP baseline is changed. One could even consider including links to both the old and the new version of the documents of interest, to accelerate the manual check of whether any of the data in the IDMP baseline has changed. A sketch of such a check is shown below.
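
A minimal sketch of the automated trigger in strategy 3, assuming the document management system can be queried for the latest approved version of each document: compare the versions recorded in the IDMP baseline with the currently approved versions and alert the data capture team on any mismatch. Field names are illustrative.

```python
# doc id -> version the IDMP baseline was extracted from
idmp_baseline = {"SmPC-ProductX-DK": 3, "CCDS-ProductX": 7}
# doc id -> latest approved version in the document management system
dms_approved = {"SmPC-ProductX-DK": 4, "CCDS-ProductX": 7}


def documents_to_recheck() -> list:
    """Documents whose approved version differs from the baseline extraction."""
    return [doc for doc, extracted in idmp_baseline.items()
            if dms_approved.get(doc, extracted) != extracted]


for doc in documents_to_recheck():
    print(f"Alert data capture team: {doc} changed, re-verify the IDMP baseline")
```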

No source identified: An IDMP data field that has not been found during the analysis of the different documents and IT systems; the current status is therefore that the data field has no source identified.

The IDMP fields that have no source identified can either be incorporated into new or existing documents, with properly updated processes and SOPs to support the data generating process, or new/existing IT systems can be expanded to capture the missing data. Which data is handled by which solution will depend on individual analysis and decisions. Once a decision has been made, the maintenance of the data going forward should follow the processes explained above.

Summary:
Depending on your individual IDMP data landscape, there are many possible solutions for creating a compliant data capture and maintenance process from documents. In choosing the right one for your company, costs and benefits should be considered with respect to the available architecture, the number of registrations and the possible max data volume (see the earlier blog post about calculating max data volume – link).


Quantifying IDMP effort vs. xEVMPD

We have earlier on this blog discussed the different architecture options (link) that are possible with respect to IDMP – but how do you choose the right one for your company?

We believe that many pharma companies are currently evaluating what approach to take to IDMP implementation – should they opt for a manual implementation, as was the case for many companies with xEVMPD, or should a digital solution be implemented, with investments in software and consultancy support to handle the IDMP data? Most companies with more than a few products on the market should probably go digital, but what are the hard arguments to support such a decision?

Most vendors propose doing a detailed IDMP analysis to find the source data within the company, but is that enough? At Pharma IT we propose a quantitative, multi-dimensional analysis of IDMP parameters vs. xEVMPD to evaluate the right choice for your company:

  • Map system complexity (number of IT systems being source systems for IDMP)
  • Map the number of data fields originating from documents
  • Calculate the pharma company specific max data volume for IDMP vs. xEVMPD
  • Estimate the company specific data availability with respect to the max data volume
  • Use the above values to describe what a manual solution would look like
  • Estimate the costs of a digital solution vs. a manual implementation

Map system complexity

For most companies it is possible to identify the number of systems that contain the IDMP source data. These systems will normally be found within the following areas:

  • Clinical Trial Management System(s)
  • Regulatory Information Management System(s)
  • Production Management System(s)
  • Pharmacovigilance System(s)

For xEVMPD, most data originated from the Regulatory Information Management System; for IDMP we usually see anywhere from 4-15 systems being in scope to supply IDMP source data across all 4 waves.

Map number of data fields originating from documents

Since most pharma companies in their detailed IDMP data analysis identify 40-60% of the data as only being available in unstructured sources, i.e. documents, we also need to quantify the number of fields that will be populated based on documents. If a Regulatory Information Management System was available at the time of the xEVMPD implementation, most companies would only have had a few data fields originating from documents. We see this number growing from 10-20 data fields in wave 1 to over 100 data fields in wave 4.

Calculate the pharma company specific max data volume

All pharma companies have different product portfolios, which means that the individual effect of IDMP vs. xEVMPD will differ from company to company. We propose to analyze the product portfolio, identify the average repetition of the IDMP data classes for your specific company, sum this calculation for all data classes and compare it to the same calculation for xEVMPD. Remember that models are always simplifications of the truth, so one should be careful how these numbers are used, but our calculations show that the IDMP max data volume is 6-9 times larger for wave 1 compared to xEVMPD, and when wave 4 is implemented the factor rises to 30-40 times larger than xEVMPD – not considering the ongoing addition of batch IDs and similar transactional data in the years to come.

Example of a partial max data volume calculation:
Below you see a small part of the IDMP data model for the Medicinal Product.

The max. data volume can be calculated in the following way:

Part of IDMP data model

  • Medicinal Product – 7 data fields. Assumption: the "Medicinal Product" data class will be repeated to generate the same number of records as in xEVMPD; in this example we assume 1,000 records. Max data volume: 1000 * 7 = 7,000.
  • Master File – 2 data fields. Assumption: in most pharma companies the relationship between the "Medicinal Product" data class and the "Master File" data class will be 1:1. Max data volume: 1000 * 1 * 2 = 2,000.
  • Medicinal Product Name – 13 data fields. Assumption: in most pharma companies the relationship between the "Medicinal Product" data class and the "Medicinal Product Name" data class will be close to 1:1, unless your company has a large number of products approved via the Centralised Procedure in the EU. Max data volume: 1000 * 1 * 13 = 13,000.
  • Country/Language – 3 data fields. Assumption: in most pharma companies the relationship between the "Medicinal Product Name" data class and the "Country/Language" data class will be close to 1:1. Max data volume: 1000 * 1 * 1 * 3 = 3,000.
  • Total: 25,000.

The same calculation can be done for xEVMPD by mapping the xEVMPD data fields into the IDMP data model. In xEVMPD we find fewer data fields in each class:

  • Medicinal Product – 2 data fields
  • Master File – 2 data fields
  • Medicinal Product Name – 7 data fields
  • Country/Language – 0 data fields

This means that the max data volume under the same assumptions can be calculated as 11,000 for xEVMPD.
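
The example above can be recomputed with a few lines of code – a toy recalculation under the stated 1:1 assumptions, not the tool mentioned below:

```python
# Each entry: (fields in the data class, average repetitions relative to the
# parent class). The 1:1 assumptions of the example give repetition factor 1.
RECORDS = 1000  # records in the "Medicinal Product" data class

idmp = {"Medicinal Product": (7, 1), "Master File": (2, 1),
        "Medicinal Product Name": (13, 1), "Country/Language": (3, 1)}
xevmpd = {"Medicinal Product": (2, 1), "Master File": (2, 1),
          "Medicinal Product Name": (7, 1), "Country/Language": (0, 1)}


def max_volume(model: dict) -> int:
    """Sum records * repetition factor * fields over all data classes."""
    return sum(RECORDS * reps * fields for fields, reps in model.values())


print(max_volume(idmp))    # 25000
print(max_volume(xevmpd))  # 11000

# Adjusting with the company-specific data availability discussed in the
# following section:
print(int(400_000 * 0.80))  # 320000 for IDMP
print(int(40_000 * 0.95))   # 38000 for xEVMPD -> factor ~8.4
```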

At Pharma IT we have developed a tool that makes this calculation quite easy for all four waves.

Company specific data availability

If your company has done a detailed IDMP data pilot, you might be able to extract information about how many of the IDMP data fields are likely to be relevant for your products. Our current knowledge tells us that for xEVMPD the company-specific data availability was between 90-95% of the xEVMPD data fields. For IDMP we expect a number in the range of 70-85%.

Our analysis tells us that the max data volume for IDMP, for companies with 1,000 records in the "Medicinal Product" data class, can be 400,000 data fields for iteration 1. The equivalent number for xEVMPD seems to be 40,000. Using the "company-specific data availability" factor, these numbers should be adjusted:

  • IDMP: 400000 * 80% = 320000
  • xEVMPD: 40000 * 95% = 38000

By dividing the IDMP number (320,000) by the xEVMPD number (38,000), we can calculate the IDMP vs. xEVMPD data volume factor as 8.4 – this is the factor referenced in the section "Calculate the pharma company specific max data volume".

Manual solution

When discussing a manual vs. a digital solution, one should always try to outline the details of the manual solution as well as the technical solution.

One way of implementing a manual solution is outlined below:

  • Create an IDMP master record in Excel (in the example above we would generate 1,000 Excel sheets)
  • Manually collect data from source systems and documents
  • Track cleansing, CV mapping and transformation of source data to the IDMP standard in the Excel file
  • QC data
  • Enter data into the IDMP web tool, most likely to be provided by EMA
  • QC data entry
  • Ensure ongoing maintenance of data

Remembering the calculation above, each QC step would involve control of about 320,000 data fields.

Estimate costs

The final part of the multi-dimensional analysis would be an estimation of the costs of both a digital and a manual solution. You might want to include more digital options in the estimation, again referring to the possible architecture options for IDMP (link).

Conclusion

Summarizing the above multi-dimensional analysis might provide sufficient information for choosing the right solution for your company – whether it is a manual or a digital implementation of IDMP.

There are of course many more factors that can be considered, and we welcome any comments or additional ideas you might have – this can only help in securing the best possible IDMP implementation in each and every pharma company.


Getting ready for Serialization – SAP ATT system requirements

The pharmaceutical industry is being met by regulatory requirements which make it necessary to implement serialization. These requirements vary from market to market, and many different technology vendors claim that they are able to meet them. Regardless, the pharma company will be held responsible for storing and updating the serialization data throughout the supply chain and reporting it to the relevant authorities.

One of the software packages that can be used to support this process is SAP ATT. SAP ATT is scheduled for release on 15 September, so official documentation is not yet available. Working with our customers, we have been part of a PoC for SAP ATT and are able to share some preliminary insights on the technical configuration of SAP ATT.

Here is what we know with respect to system requirements – the information is preliminary, so if you need official information you should wait for the official release from SAP.

 

Interfaces

To implement ATT there needs to be a connection to your ECC system; this is necessary for setting up the material master interface and the WH transactions, which will be reflected in ATT.

SAP ATT

 

The recommended basic architecture of SAP Advanced Track & Trace looks like the following:

  • SAP AIF 3.0 with a restricted licence for SAP Advanced Track & Trace is most likely included in the planned architecture for the SAP Advanced Track & Trace system.
  • For the purpose of running SAP Advanced Track & Trace there is no need to install the AIF add-on on the ECC system as well.
  • For interfaces, PI is not necessary, as SAP Advanced Track & Trace plans to deliver OData and SOAP web services (and RFC calls for internal purposes), which can be consumed by most integration platforms – see the sketch below.
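
As an illustration of what consuming such a web service could look like, here is a minimal Python sketch. Note that the host, service path and entity name are hypothetical placeholders – the real service names will only be known from SAP's official documentation once released:

    # Minimal sketch of consuming a (hypothetical) SAP ATT OData service.
    import requests

    BASE = "https://att.example.com/sap/opu/odata/sap/ZATT_SRV"  # hypothetical service
    resp = requests.get(
        f"{BASE}/SerializedItems",          # hypothetical entity set
        params={"$top": "10", "$format": "json"},
        auth=("user", "password"),          # replace with your authentication
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["d"]["results"]:    # SAP OData v2 JSON envelope
        print(item)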

 

ECC Patch level

If you are using ECC 6, EHP 6, patch level SP8 or later should be sufficient for the ECC add-on of SAP Advanced Track & Trace. I do not know whether earlier versions are supported.

 

System Specifications

To our knowledge the system should be created with the following minimum specs:

  • 10.000 SAPS - with 4 CPUs // 24 GB RAM.

We have obviously not been able to test this yet, but as a rule of thumb the system needs to be of a size comparable to your corresponding ECC environments.

 

If you have comments or additional information please share.


GInAS meeting in Uppsala - we should recognize the work being done/new GInAS frontend

My overall impression of the GInAS meeting in Uppsala is that FDA and EMA are working together with a lot of organizations and working groups to facilitate the implementation of IDMP as early as possible. The amount of tasks, clarifications of standards and decisions on the best way to implement the IDMP standard is tremendous and even though progress i...

GInAS meeting in Uppsala coming up

On 7-8 September I will be participating in the GInAS meeting in Uppsala - or more specifically the "Symposium on the identification of substances in medical products - An opportunity to discuss current challenges with participants & experts in sciences supporting the ISO 11238 standard". The agenda for Monday is quite extensive and I am looking ...

Reflections on IDMP Program Planning

This article is the second in a series that will reflect on the activities that can or should be initiated now or soon in order to be prepared for the first IDMP reporting deadline in 2017. This second article reflects on high-level IDMP Program planning.

Even though the scope for the first IDMP reporting is not fixed, the current plan from EMA is unconfirmed, the ISO standard is being updated and changed and the GInAS application/setup is not final - we are of the opinion that the below activities should be in progress or initiated very soon:

  • Selection of IT system/architecture and preparing/initiating an RfP/vendor selection
  • High-level planning of the IDMP Program
  • Initial Data Analysis to discover areas for data capture and data cleansing
  • Investigating the GInAS system and start collecting structured substance data

Based on the knowledge that we have gathered by following webinars, participating in conferences and doing market research, we will in this article propose a high-level plan for the overall IDMP implementation. The high-level plan will also show that the currently communicated timeline from EMA does not leave a lot of room for postponing IDMP activities much longer with respect to getting an IDMP IT system ready for the expected deadline for Iteration 1 in Q4 2017. The plan is based on the time estimates for implementation of the IDMP system architecture discussed in our first article “Reflections on possible IDMP IT System Architecture options”.

The latest plan proposal from the European Medicines Agency to implement IDMP proposes splitting the full IDMP implementation into four iterations, excluding veterinary medicinal products. In the figure below the first two iterations are shown, and it has been communicated that the following waves will be enforced at intervals of 6 months in continuation of the plan shown in the figure.

EMA IDMP proposed plan

Figure 1: Plan proposal from the European Medicines Agency to implement IDMP (reference (slide 25))


As stated at the beginning of this article, there are many unknowns with respect to IDMP:

  • The scope for the first IDMP reporting is not fixed – which fields will be part of the first iteration?
  • The plan from EMA (see figure 1) is still only a proposal, so it can still be updated and changed before a final version is presented
  • The ISO standard is being updated and changed – the latest drafts have shown new fields, fields moving between classes, and new classes of data
  • The GInAS application/setup for structured substance data is not final – there is a meeting in Uppsala on 7-8 September arranged by the GInAS organisation that might provide more clarity, but for now everything is still work in progress

The deadline for reporting on the first iteration is not that far away – so how do you plan an IDMP program? Based on what we do know, we have outlined the following planning inputs:

  • The IDMP architecture to be implemented must be capable of scaling from the data in Iteration 1 to the data in Iteration 2, 3 and 4.
    • One could use a two-system strategy – using the web tool for Iteration 1, since Iteration 1 should only cover xEVMPD data and many pharma companies used EVWEB for xEVMPD data reporting. This could be a way to move forward, postponing the point in time where investments in new software need to be made – but it will most likely also result in parallel work once you are to implement the data reporting for the 2nd iteration. Should a two-system strategy be selected, one should make sure that the needed resources and a cut-over activity between the two systems are in place at the implementation of the 2nd iteration. 
  • Planning for an implementation at the last minute in Q4 2017 is probably not the best way to go - having a 3-6 month buffer with respect to the Q4 2017 deadline would be recommended.
  • Depending on the chosen IT system(s) and related activities, the implementation schedule will differ. In our first article “Reflections on possible IDMP IT System Architecture options” we tried to give an overview of the different options for IDMP architecture that are or will be available on the market. Based on that article, it is our impression that an IDMP IT system implementation can take anywhere from 6-18 months, and if you also consider time for a proper purchasing process you can add another 3-6 months for running a vendor selection process. If the IT system also needs an IDMP reporting component to be implemented separately, we would recommend that you consider at least 3-6 months for this component, somewhat in parallel with what would be the staging area implementation. In our program plan example we have proposed a 15-month implementation time for the IDMP IT system. Should you choose a simpler IDMP architecture, which is also discussed in the article, you can reduce the time for this activity. In the planning example we have timed the vendor selection and system implementation with the release of the draft implementation guides from EMA to ensure that the software investment is not made until the implementation guides are released.
  • Assuming that a scalable IDMP infrastructure has been selected and will be in place for Iteration 1, the implementation of Iterations 2, 3 and 4 should be planned in sections of approximately 9 months, leaving 3 months for analysis and design, 3 months for development and 3 months for validation activities for each iteration. Phases might overlap and be executed in parallel.
  • The ISO standards are only undergoing small changes, so we propose an IDMP data analysis in which the data in your company’s current IT systems is mapped to the IDMP data model. This will give you an overview of how much data you will be missing in order to create compliant reports for iterations 1, 2, 3 and 4. This analysis effort could also contain a quality element, looking at data for different products to create an overview of the data quality in each system and some idea of the level of data cleansing and alignment that is needed. Depending on the size and complexity of your IT landscape, such an activity can take anywhere from 3-6 months.
  • Depending on the result of the IDMP data analysis, your IDMP program should, following the analysis, start a data capture and data cleansing activity – a data process track. Each gap field or area identified in the analysis should be evaluated with respect to the EMA planned iterations and the current ongoing changes to the ISO IDMP data model, and on this basis prioritised for cleansing or capturing. We believe that a data capture and data cleansing activity will be ongoing from the end of the IDMP data analysis until the end of the program. Data capture and cleansing activities will most likely be more focused on the data for the later iterations. We would think that areas like detailed mapping of devices and device material for the data fields in package item container and package component of the ISO IDMP data model, and MedDRA mapping for clinical particulars, need special attention, but depending on your data there might be more or fewer areas that will need focus. On the other hand, the draft implementation guides are not yet detailed enough to support the initiation of the mapping work for clinical particulars, so if you start now there is a high likelihood of rework later. The IDMP data analysis might also lead to identification of processes that need to be created for future data capture, and definition of data responsibility for the data areas not currently captured in the available IT systems or documents.
  • The plan should also contain a track for collecting Structured Substance Information – we will assume that the GInAS system will be used for this part. It is not easy to estimate the duration of this activity. We have planned with about 12 months, based on the assumptions that some data collection will have to take place from unstructured data sources, that knowledge will not be easily accessible, and that existing processes will have to be changed to ensure a good process setup for this kind of data in the company going forward.

Each of the above time estimates is a guesstimate. That said, the above planning reflections can be combined to create the plan shown below in Figure 2: IDMP Plan example based on planning reflections. A small back-planning sketch after the figure illustrates how the estimates chain together.

IDMP Plan example based on planning reflections

Figure 2: IDMP Plan example based on planning reflections
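
To make the back-planning logic behind the plan example explicit, here is a small sketch chaining the guesstimates together. All durations are this article's rough estimates; adjust them to your own analysis:

    # Back-planning from the assumed Iteration 1 deadline (Q4 2017).
    DEADLINE = (2017, 10)    # year, month

    def months_before(year_month, months):
        """Return (year, month) the given number of months earlier."""
        year, month = year_month
        total = year * 12 + (month - 1) - months
        return total // 12, total % 12 + 1

    go_live = months_before(DEADLINE, 3)            # 3-6 month buffer      -> 2017-07
    impl_start = months_before(go_live, 15)         # 15 month implementation -> 2016-04
    selection_start = months_before(impl_start, 6)  # 3-6 month vendor selection -> 2015-10

    for label, (y, m) in [("Start vendor selection", selection_start),
                          ("Start implementation", impl_start),
                          ("System go-live", go_live)]:
        print(f"{label}: {y}-{m:02d}")

With these numbers, the vendor selection would need to start around late 2015 – which matches the conclusion below that selection activities should be in progress or start soon.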

The plan example does not leave a lot of time for delaying or postponing activities in the IDMP Program. The plan shows that:

  • Selection of IT system architecture and preparing/initiating a vendor selection should be in progress or start soon
  • Initial Data Analysis to discover areas for data capture and data cleansing should be in progress or start soon
  • Investigating the GInAS system and starting to collect structured substance data can wait until early 2016, but it might be a good idea to monitor what happens in this area

Each pharmaceutical company should make its own estimates based on its individual IT system landscape, the results of its own IDMP data analysis, and checks of data quality and alignment of data across the different IT systems in the company.

Hopefully our planning example will help you plan your own IDMP program and the activities that need to be completed before enforcement of Iteration 1, 2, 3 and 4.


Reflections on possible IDMP IT System Architecture options

This article is the first in a series that will reflect on the activities that can or should be initiated now or soon in order to be prepared for the first IDMP reporting deadline. This first article reflects on IDMP IT system architecture options.

Even though the scope for the first IDMP reporting is not fixed, the current plan from EMA is unconfirmed, the ISO standard is being updated and changed and the GInAS application/setup is not final - we are of the opinion that the below activities should be in progress or initiated very soon:

  • Selection of IT system/architecture and preparing/initiating an RfP/vendor selection
  • High-level planning of the IDMP Program
  • Initial Data Analysis to discover areas for data capture and data cleansing
  • Investigating the GInAS system and start collecting structured substance data

Based on the knowledge that we have gathered by following webinars, participating in conferences and doing market research, we have found that at least four different options for IDMP IT system architecture are or will become available.

 

Simple Webtool from EMA:

At the ISO IDMP Information Day held at EMA on 23 June 2015, EMA representatives stated that a simple webtool like EVWEB would be provided for reporting of IDMP data.

Based on the current information, the tool will most likely be cheap or free of charge, and data will have to be entered manually, as is the case for the EVWEB tool.

The risks are that the tool will not be available until very late with respect to the EMA proposed implementation plan. The tool might have performance issues, as is the case for the EVWEB tool, when larger amounts of data need to be entered. Data extraction options, reporting, and review and approval workflows might not be very advanced, if available at all. Conditions for GxP use of the tool are unclear, and ensuring compliant use might be put on the individual pharmaceutical company.

For the first iteration the amount of work to implement the tool will probably be similar to the work done when xEVMPD reporting was implemented. Since only manual data entry is possible, the number of products, registrations and related IDMP data must be considered before choosing this solution as your IDMP reporting tool. One should also consider the full IT system landscape and from where the IDMP data needs to be extracted – what is the quality of the extracted data, and how well is Master Data aligned with the other systems in scope for data extraction? This will give some indication of the amount of manual work that will be needed for the full implementation.

Overall it is our impression that the web tool will probably only be usable for companies with only a few registrations. The amount of manual work when considering the data in iterations 3 and 4 will simply be too much if you have a larger number of registrations.

Should this solution be the right one for your company, a lot of implementation activities might not need to be initiated until after the release of the “Final EU Implementation guideline for Iteration 1” (scheduled for Q1 2016) and the final “EU Substance Implementation guideline for Iteration 2” (scheduled for Q2 2016).

We would estimate 6-9 months to handle the implementation of the webtool in a GxP-compliant way. This estimate does not include the data collection and data cleansing that needs to take place in order to be prepared to actually start the data entry once the surrounding GxP processes are completed.

 

Integrating IDMP into the Regulatory Information Management System:

A number of vendors are communicating that they will implement the IDMP data model into their existing RIM solution; we have identified at least the following vendors promoting this approach:

CSC (Regulatory Tracker)

Infotehna (RIMExpert)

Lorenz (DrugTrack)

SamarindRMS (RIM)

If you already have a RIM system from a vendor who is promoting this strategy, upgrading your existing system will most likely be a path with less effort and cost. This means that your existing maintenance organization and existing GxP baseline can be used for implementing a system that supports IDMP reporting.

On the other hand, the RIM system may not be the most suitable choice for storing IDMP data. NNIT has written an article called “Five reasons why your RIMS system may not be suited for ISO IDMP” that lists the following reasons for not storing your IDMP data in existing RIM systems:

  • The enormous amount of data and its diverse sources – only 20-25% of ISO IDMP data will reside in the RIM system
  • RIMS is originally designed for another purpose – building ISO IDMP into a RIMS system is not simple, as there is no clear way of presenting the extensive set of ISO IDMP product data to the user in an application that was originally designed for a completely different purpose
  • There is a clash between the users of RIMS and IDMP submitters – the employees who access RIM to maintain a registration are not the same as those who maintain product data for submission to EMA. This tendency will be even stronger for ISO IDMP.
  • Risky interdependencies between ISO IDMP and RIMS – building ISO IDMP into a RIMS system may jeopardise the stable operation of the RIMS system by necessitating further system updates in connection with changes in the ISO IDMP guidelines or HL7 messaging standards for gateway submission
  • Data integrations are indispensable for IDMP – the extensive set of data required for ISO IDMP will require data from a number of other systems in the company, and it is not likely that all this data can be managed manually (or it will be too expensive to do so)

Please read the full article for more information.

With respect to data integrations, there are highly advanced Master Data Management systems on the market that can interface with almost any system or database. These systems also have highly configurable workflow options that can be used for transformation of data from IDMP source systems to fit the IDMP data model and to ensure alignment of data. At the very least, consider the complexity of your IT system landscape, whether your future RIM system can provide the needed interface functionality to other systems, or whether you will need customized interfaces to each of your IDMP source systems. Also consider whether the standard integration functionality provided by the Master Data Management systems on the market would be a better solution. The same considerations should be made with respect to the needed transformation and alignment of data to the IDMP data model and Controlled Vocabularies. A small sketch of such a transformation is shown below.
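
To make this concrete, here is a minimal sketch of the kind of transformation such a workflow would perform: normalising free-text source values and mapping them to a Controlled Vocabulary term. The vocabulary entries and source values are invented for illustration:

    # Mapping a source value to a (hypothetical) controlled vocabulary term.
    CV_DOSE_FORM = {
        "tablet": "Tablet",
        "film-coated tablet": "Film-coated tablet",
        "oral solution": "Oral solution",
    }

    def map_to_cv(source_value, vocabulary):
        """Normalise a source value and map it to a CV term, or flag it."""
        key = source_value.strip().lower()
        if key in vocabulary:
            return vocabulary[key], None
        return None, f"Unmapped value needs remediation: {source_value!r}"

    term, warning = map_to_cv("  Film-Coated Tablet ", CV_DOSE_FORM)
    print(term or warning)    # -> Film-coated tablet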

Should this solution be the right one for your company, you would probably need to plan a system update once your vendor has released the new version of the software that contains the IDMP data model. Given that the vendors will have to include the changing IDMP data model in their software releases, they can probably not initiate the final software release process until the “Final EU Implementation guideline for Iteration 1” has been released (Q1 2016). Depending on your current RIM vendor version, and whether an upgrade can be done without a large data migration effort, the project could take anywhere from 6-12 months.

 

Implementing a separate IDMP specific system:

At least two RIM vendors are communicating that they will develop a separate IDMP solution for supporting IDMP reporting:

ArisGlobal (agIDMP)

EXTEDO (RImanager/MPDmanager)

For the same reasons that it might not be a good idea to store your IDMP data in your existing RIM system, it might be a good idea to separate the data (see the bullet list in the section above). If your RIM vendor also includes the interface to the RIM system, you will get all the RIM data automatically into your IDMP database by choosing this system setup, without the disadvantages of combining the data.

With respect to data integrations, data transformation and data alignment to the IDMP data model and Controlled Vocabularies, you would still have to consider the capability of the offered IDMP solution compared to the interface and workflow functionality offered by the Master Data Management systems on the market.

Should this solution be the right one for your company, you could align such a system purchase with the release of the “Final EU Implementation guideline for Iteration 1” (scheduled for Q1 2016) and maybe also with the release of the final “EU Substance Implementation guideline for Iteration 2” (scheduled for Q2 2016). Any activities needed to prepare the purchase could be completed beforehand to optimize the remaining activities needed for the IDMP implementation.

Interfaces to other source systems, or manual input of data, will still have to be configured (if your RIM vendor’s IDMP system has that option) or customized. Should a lot of data conversion be needed with respect to IDMP data residing in the non-RIM IDMP source systems, this must also be taken into account, either in the interfaces that will have to be customized/configured or by changing the data in the source systems. We would estimate 9-15 months to handle the full implementation of such an IDMP tool.

 

Implementing a staging area and an IDMP reporting component:

Vendors within the Master Data Management market are also proposing solutions for IDMP. Not all solutions include components that can be configured for IDMP reporting, so if such a vendor is chosen, an additional IDMP reporting component must be added. At least the following vendors are proposing their Master Data Management software as a possible solution for IDMP:

Informatica

SAP

SAS

Oracle

Should you choose to implement an IDMP-specific reporting component, basically any Master Data Management solution could be used to build the staging area shown in the figure – in that case the list is not limited to the vendors listed above.

Compared to the other IDMP architecture options, these systems have out-of-the-box interface connections to almost any data source, and they are highly configurable with respect to transforming the data and aligning it to the Controlled Vocabularies. They also have remediation workflows: if data does not follow the defined path or quality, warnings are raised and corrections can be implemented. A small sketch of such a remediation check is shown below.
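
A minimal sketch of such a remediation check could look as follows – records failing the defined rules raise warnings for correction instead of flowing silently into the IDMP report. The field names and rules are invented for illustration:

    # Remediation check: flag staged records that break the quality rules.
    REQUIRED_FIELDS = ["product_name", "authorisation_number", "dose_form"]

    def validate(record):
        """Return a list of remediation warnings for one staged record."""
        return [f"Missing required field: {field}"
                for field in REQUIRED_FIELDS if not record.get(field)]

    staged = {"product_name": "Examplomab", "authorisation_number": ""}
    for warning in validate(staged):
        print(warning)    # warnings for authorisation_number and dose_form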

Currently, not a lot of standalone IDMP reporting tools are available, so this part might have to be customised if it is not included in the package from your Master Data Management vendor. Another option is to use the IDMP reporting components from ArisGlobal (agIDMP) or EXTEDO (RImanager/MPDmanager) to handle the specific IDMP reporting functionality.

On the downside, these systems seem much more expensive than the systems offered by the RIM vendors. Larger support organizations will have to be established, and some of the solutions – even though they are configurable – require people with technical skills to implement and maintain them.

Which system would be right for your organisation needs to be determined based on an analysis of IDMP data, processes, the IT system landscape and long-term costs, comparing solutions that include manual labour with more automated infrastructure.

Should this solution be the right one for your company, you could synchronize such a system purchase with the release of the “Final EU Implementation guideline for Iteration 1” (scheduled for Q1 2016) and maybe also with the release of the final “EU Substance Implementation guideline for Iteration 2” (scheduled for Q2 2016). Any activities needed to prepare the purchase could be completed beforehand to optimize the remaining activities needed for the IDMP implementation. We would estimate 12-18 months to handle the full implementation of a staging area and an IDMP reporting component.


How to setup your own GInAS installation for IDMP

With EMA's publication (slide 25) of their current plan to facilitate the implementation of the International Organization for Standardization (ISO) identification of medicinal products (IDMP) standards in accordance with Article 57(2) of Regulation (EU) 726/2004, and considering that EMA has communicated that the first iteration of the Product Management System (PMS) implementation will mainly include xEVMPD data, the Substance Management System seems to be the part that contains the most complexity in getting ready for the first submission of IDMP data.

Currently the plan seems to be to use the GInAS system for the definition and submission of substance data to comply with the ISO 11238 standard, "Data elements and structures for the unique identification and exchange of regulated information on substances". 

The group in charge of developing the GInAS application is publishing all information on their web site. The web site contains a lot of presentations and videos from early meetings that can be used to get introduced to the GInAS application. 

If you need a fast guide for setting up your own cloud GInAS system, just follow the guides and links below for a quick introduction to the GInAS application.

  1. Download the latest software package from the GInAS web site
  2. Install GInAS via Amazon Web Services (follow the guide)
  3. Log in to GInAS – username: admin ; password: adminginas
  4. If needed, you can watch the following videos to get familiar with the application:
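
Once the installation is up, a quick way to confirm that the instance responds is a small HTTP check. The URL below assumes a default local installation, and the api/v1/substances path reflects our current understanding of the GInAS REST interface – treat both as assumptions rather than official documentation:

    # Quick smoke test against an (assumed) default local GInAS installation.
    import requests

    BASE = "http://localhost:9000"    # assumed default address
    resp = requests.get(f"{BASE}/ginas/app/api/v1/substances", timeout=30)
    resp.raise_for_status()
    print(f"GInAS responded: HTTP {resp.status_code}")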

Best of luck getting started using the GInAS application and preparing your company for the first submission of IDMP data.
