Pharma IT Blog

Blogging about hot topics related to Pharma and IT. Please use the mail icon to subscribe to new blog posts.


Conclusions on SAP ATTP implementation

We have been through a long and demanding project and are currently in hypercare.

I have tried to summarize the key points from the project; I hope they can help if you are planning a similar project.

It’s been more than a year since my last blog post (http://www.pharmait.dk/index.php/blog/conclusions-on-proof-of-concept-sap-attp), and it’s time to give you an update on the project which I have had the privilege of leading.

Last time we had just finished the POC for SAP ATTP and had found that the solution did indeed meet the customer’s requirements. So, since then we have been in implementation mode.

We are now live with the solution and have been so for a little over a month.

To summarize our learnings: start with end users early. Serialization requirements take some time to become familiar with, and the sooner you get input from the users, the easier it is to incorporate in the design. A universal truth from all IT projects that also applies to serialization projects. :)

Project phase

The implementation of SAP ATTP is not in itself an overly complex IT project. The serialization requirements, at least the ones already known, are straightforward, and SAP has ensured that ATTP meets all of them. It is even part of their license model, so no big surprises on the requirements side.

Solution Design

It’s important to know your SAP ECC implementation. If you are to implement SAP ATTP successfully, you will most likely need to update your RF scanner transactions.

The only reason not to do this is if you are exclusively producing for countries which do not have Serialized & Traced requirements. Even in that case, I would strongly recommend updating your RF scanner transactions to meet these requirements later.

In my opinion, serialization requirements must be viewed as a funnel: all markets begin with the 2D data matrix, then shortly transition to serialization, then aggregation, and in the end reporting. As you are already implementing SAP ATTP, my recommendation is to ensure that ECC is ready as well. Otherwise you will need to run another project when your first market requires aggregation.

So, with these considerations out of the way, we updated the RF transactions to support all the existing warehouse processes for Serialized & Traced. That means that whenever we scan a pallet (or shipper box, or even bundle), ECC will check against ATTP whether the quantity in ECC matches the serials registered in ATTP.
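Below is a minimal sketch of that check, purely for illustration: the function and the way the serial list is obtained are assumptions, not the actual ECC/ATTP function modules we used.

```python
# Hypothetical sketch of the check performed when a pallet / shipper box /
# bundle is scanned in a custom RF transaction: the quantity booked in ECC
# must match the number of serials aggregated under that unit in ATTP.
# Names and data sources are illustrative, not actual SAP APIs.

def validate_scanned_unit(ecc_quantity: int, attp_serials: list[str]) -> None:
    """Block the warehouse movement if ECC and ATTP disagree."""
    if ecc_quantity != len(attp_serials):
        raise ValueError(
            f"Quantity mismatch: ECC reports {ecc_quantity} units, "
            f"ATTP holds {len(attp_serials)} serials for this unit."
        )

# Example: a bundle of 20 packs scanned at goods receipt
validate_scanned_unit(20, [f"SER{i:06d}" for i in range(20)])
```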

This will of course result in less flexibility in warehouse and production; as an example, they can no longer use MIGO for changing warehouse status. They must use our custom-built transactions.

However, it also means that we have complete control of our serialized and traced products from Process Order Release until Goods Issue.

That has reduced the need for reconciliation reports between our L3 system and ECC, as we now have complete control of what is produced and transferred from L3 and what is received in SAP.

Interfaces

See the picture below for the interfaces we developed.

[Figure: Integration overview of the developed interfaces]

We developed interfaces from ECC to the L3 system for the transfer of material master data and process orders. This is not related to serialization, as these could just as well be entered manually, but it eases operation with less risk of manual errors.

We also developed a serial number request/response interface from L3 to ATTP; this is a synchronous interface. We went live with ATTP 1.0; in 2.0 this can be done asynchronously, which in most cases is the preferable solution.
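For illustration, here is a hedged sketch of what such a synchronous serial number request could look like from the L3 side. The endpoint, payload fields and response format are assumptions for the example, not the actual ATTP web service contract.

```python
# Illustrative shape of a synchronous serial number request/response between
# L3 and the serialization hub. URL, JSON fields and error handling are
# assumptions, not the real ATTP interface definition.
import requests  # assumes the 'requests' package is installed

def request_serial_numbers(base_url: str, gtin: str, quantity: int) -> list[str]:
    """Ask the serialization hub for a block of serial numbers for one GTIN."""
    response = requests.post(
        f"{base_url}/serialnumbers/request",
        json={"gtin": gtin, "quantity": quantity},
        timeout=30,  # a synchronous call must fail fast if the hub is unreachable
    )
    response.raise_for_status()
    return response.json()["serialNumbers"]
```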

We of course also have a commissioning interface between L3 and ATTP. For serialized materials it is sent after the production of the batch has been completed; for Serialized & Traced it is sent after each pallet. This is necessary, as we need to start warehouse transactions before the production of the full batch is complete.

The final interface we developed was towards our 3PL in Korea, as Korea has Serialized & Traced requirements. This interface sends the full hierarchy of serials to our partner in Korea. It is the most complex interface, as it needs to handle multiple interactions, for instance samples, scrap, and returns.
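To illustrate what a “full hierarchy of serials” means, here is a small sketch of an aggregation hierarchy (pallet, shipper, pack) and how it could be flattened for a partner message. The structure and level names are assumptions for the example, not the actual 3PL message format.

```python
# A minimal, assumed representation of the aggregation hierarchy sent to a
# partner: each node carries its own serial plus the units packed into it.
from dataclasses import dataclass, field

@dataclass
class PackagingUnit:
    serial: str                       # SSCC or serialized GTIN of this unit
    level: str                        # e.g. "PALLET", "SHIPPER", "BUNDLE", "PACK"
    children: list["PackagingUnit"] = field(default_factory=list)

def flatten(unit: PackagingUnit) -> list[tuple[str, str]]:
    """List every (level, serial) pair in the hierarchy, e.g. for a partner message."""
    pairs = [(unit.level, unit.serial)]
    for child in unit.children:
        pairs.extend(flatten(child))
    return pairs

pallet = PackagingUnit("SSCC-001", "PALLET", [
    PackagingUnit("SSCC-010", "SHIPPER", [PackagingUnit("SER-000123", "PACK")]),
])
print(flatten(pallet))
```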

Validation approach

IQ in SAP projects is simple: we checked the transport list in each environment we went through, as we were not changing environment configuration.

For OQ, we did a full OQ of all functional requirements; this included all the scanner transactions and the interfaces on our side. We did not OQ the interface on the 3PL side, but did a parallel OQ for the L3 system. This part was delayed and ended up postponing our PQ. The reason for the delay was that the L3 supplier’s development effort to create the interface took longer than anticipated. They only had file exchange interfaces as standard, and as we needed a synchronous interface for serial number request/response, we needed a web service.

After the OQ we conducted a full PQ. We had a production line available and tested a full process flow with non-serialized, serialized, lot-based, Serialized & Traced, and non-finished goods.

This revealed a challenge in setting up master data: as we had numerous production sites but only one L3 site server, we could not test the materials of all the sites.

We also had issues with getting lot-based material tested, and ended up descoping this from the PQ, as we currently have a manual solution in place and the serialization requirements will only be effective from November 2017 in the USA.

The PQ was the first time our end users tried the system hands-on. This process showed that we needed a lot more training in serialization in general, and in SAP ATTP specifically.

It also resulted in some minor changes to the design we had made, as well as changes to local procedures.

I would strongly recommend doing an end-to-end test with a real-life line if possible. This will significantly reduce the number of issues found afterwards. Master data and authorizations in particular should be focus areas.

Cut over

We planned a technical go-live 3 weeks before the functional go-live. This consisted of the installation of SAP ATTP and the implementation of the SAP notes needed in ECC.

This was to give us time to configure ATTP outside a closing window.

On the weekend we went live, we started installation on the 3 site servers Friday evening and began the SAP installation Saturday afternoon. This was to keep SAP open for as long as possible, and because we needed to run the L3 installation 3 times, for which we needed more time.

We experienced a lot of issues during the cutover, primarily because of master data and authorizations. We saw issues with the materials where we had open process orders; this should normally not be an issue, but if you have the option, I would ensure everything is closed before starting the go-live activities.

Hyper care

Since go-live, we have had a high double-digit number of incidents, and about 30% are still open.

The major incidents we have had have been with SAP (though not related to serialization) and with the L3 system. We have had incidents where batch data was not transferred from ECC to ATTP for serialized materials. This is an issue with ATTP as far as we can analyze, and we expect a note to fix this.

We have had a lot of issues with master data and authorizations, and some issues with the interfaces from ECC to L3.

I would strongly recommend having people on site at all production sites when you go live, as working with serialization requires significant knowledge of the changed processes.


Should data or document be leading as a source for IDMP data?

IDMP might not only be about compliance with the EU Commission regulation; it is also a journey towards better and more data control across business areas in the pharmaceutical industry.

As an example, the IDMP data model for the authorized medicinal product does not cover all data in the SmPC document, and not all IDMP data is in the SmPC. Still, for the data that is present both in the SmPC and in the IDMP data model, it is a valid question to ask: which source should be leading, data or document?

Before IDMP, the document has been leading, so the question is whether this should change with the implementation of IDMP.

The answer is it depends…

For the sake of simplicity, we will continue this blog post by only addressing the example of the SmPC with respect to the data in the authorized medicinal product IDMP data model.

If your company has implemented or will implement structured authoring as part of implementing IDMP, data and document will be synchronized from the point in time where the text or a specific value in the document is created, changed or updated and tagged. Normally your structured authoring system will also propose which other documents use the same text section or the specific data value that was changed and should be considered for a similar update. But structured authoring alone does not ensure alignment; the regulatory workflow will also need to be considered.
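As a simplified illustration of the reuse proposal such a tool can make, the sketch below maps a tagged text fragment to the documents that embed it. All fragment and document names are invented for the example.

```python
# Simplified reuse index: when a tagged fragment changes, list the other
# documents that embed the same fragment and may need a similar update.
# Fragment and document identifiers are illustrative only.
reuse_index = {
    "undesirable-effects-section": ["SmPC-DK-v4", "SmPC-SE-v3", "PIL-DK-v2"],
    "posology-section": ["SmPC-DK-v4", "PIL-DK-v2"],
}

def documents_to_review(changed_fragment: str) -> list[str]:
    """Return the documents that reuse the changed fragment."""
    return reuse_index.get(changed_fragment, [])

print(documents_to_review("undesirable-effects-section"))
```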

As an example, let’s discuss a variation or change that updates the clinical particulars with an additional adverse event/undesirable effect. In the regulatory workflow, the SmPC is updated (e.g. from version 3 to version 4), reviewed and approved from a document management perspective, and sent to health authorities in a variation package for regulatory approval. The currently approved SmPC in the specific country is version 3, and version 4 is pending approval from authorities.

Even though the data and document are synchronized due to structured authoring and the data can be extracted automatically, it still needs to be controlled when to extract the data from version 3 or version 4 of the specific document in that country. The trigger will in this case be the approval of the version 4 SmPC by the authorities. A possible source for this information could be your Regulatory Information Management System (RIMS) or a similar system that contains a baseline of your regulatory approvals in each country. The system would need to contain a full list of your IDMP-relevant documents as a baseline of your regulatory approval with respect to IDMP, and the baseline would also need to be updated when changes/variations are approved by authorities.
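A minimal sketch of that trigger logic, assuming a simplified baseline record per document and country; the field names are illustrative and not an actual RIMS data model.

```python
# Sketch of the trigger described above: data is extracted from the new SmPC
# version only once the regulatory baseline records the variation as approved
# in that country. Structures and statuses are assumptions for the example.
def version_to_extract(country_baseline: dict, document_id: str) -> int:
    """Return the SmPC version whose data may be loaded into IDMP."""
    record = country_baseline[document_id]
    if record["pending_version"] is not None and record["pending_approved"]:
        return record["pending_version"]   # e.g. version 4 after HA approval
    return record["approved_version"]      # otherwise stay on version 3

baseline = {"SmPC-DE": {"approved_version": 3, "pending_version": 4, "pending_approved": False}}
print(version_to_extract(baseline, "SmPC-DE"))  # -> 3 until the variation is approved
```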

If the regulatory baseline is controlled, structured authoring is implemented, and both processes are well aligned, data can be considered as leading.

If data extraction cannot be automated via solutions like structured authoring, a manual data entry solution must be made available. Referring to an earlier blog post discussing the architecture choice, the following options are available:

  1. EMA web interface
  2. RIMS
  3. IDMP application
  4. Data staging area (MDM solution)

If data is to be leading, it would mean that data is created before, or at least at the same time as, the document is created or updated. This would push the data maintenance effort onto the author of the document, who would also have to maintain the IDMP data part before or simultaneously with creating or updating the document.

Without becoming too detailed, options 1-3 can, to our knowledge, not handle multiple unapproved versions of your regulatory record with respect to the different regulatory variations that might be in the process of being approved by health authorities at the same time. Since the architecture does not support the functionality needed for data to be leading, and data cannot be updated in the respective systems until the actual document is approved by health authorities, in this case the document is leading.

If you are implementing an option 4 architecture, you have the option to choose whether data or document is leading. As data would then have to be created by the author, choosing data to be leading would mean that the manual data entry area in the data staging area would need to be made available to the author upon creation or update of the document. For the data that is to be sent to IDMP via source systems like a RIMS, data updates would need to be controlled via tracking of regulatory variations. In either case it would mean that a full IDMP record matching the updated data set of the variation becomes available in the data staging area; in this case data is leading.
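Below is a minimal sketch of how a data staging area could hold one IDMP record per in-flight variation, which is the capability options 1-3 reportedly lack. The statuses and field names are assumptions, not a specific MDM product’s model.

```python
# Minimal sketch of a staging area (option 4) keeping one IDMP record per
# in-flight variation. Field names and statuses are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StagedIdmpRecord:
    product_id: str
    variation_id: str
    status: str        # "DRAFT", "SUBMITTED", "APPROVED"
    data: dict         # the IDMP data set matching this variation

staging: list[StagedIdmpRecord] = [
    StagedIdmpRecord("MP-001", "VAR-017", "SUBMITTED", {"undesirable_effects": "..."}),
    StagedIdmpRecord("MP-001", "VAR-018", "DRAFT", {"posology": "..."}),
]

def releasable(records: list[StagedIdmpRecord]) -> list[StagedIdmpRecord]:
    """Only records whose variation is approved by authorities are sent onwards."""
    return [r for r in records if r.status == "APPROVED"]
```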

For architecture option 4 you can also choose to make the document leading, and in this case the data can be synchronized either when the document is approved internally or when the change/variation approval is received from health authorities.

As discussed above, if data is leading, updating IDMP is the responsibility of the document author. If the document is leading, you have the option to choose who is responsible for updating the IDMP data. In the example of the SmPC, it will depend on where you want to place the responsibility and effort of extracting the data and entering it into the manual data entry solution or source systems. A link should also be made to an earlier blog post discussing the maintenance strategy when collecting data from documents.

When deciding, the current workload, system complexity and the time required to ensure IDMP data capture must be considered. Possible options to consider could be:

  • SmPC author in the regional office of the specific country
  • Global Regulatory Responsible for the specific product
  • Supporting team responsible for data capture

Should you place the responsibility with someone not involved in the process of writing the document, it is probably best to wait until the document has been reviewed and approved from a document management perspective. Whether they are informed manually or automatically when a new IDMP-relevant document is created is not so important, as long as they are able to initiate extraction upon approval of the document via a solid process.

Whether you choose structured authoring, capturing data early or late, and whether manual data capture is done by the local office, global regulatory personnel or a data capture team depends on your system landscape, current organization and the company’s preferred choice with respect to workload and data responsibility.

The disadvantage of early data capture is that the data also has to be maintained. It is never a given that health authorities approve the first proposed text for a variation, so in case things change, the document will need to be updated.

The advantage of early data capture is probably a simpler process and earlier availability of the data.

What you choose and where you place the responsibility depends on your preference with respect to having data or the document leading.
