Justifying examinations during an 'electronic' downtime

We have recently started implementing electronic justification. If we experience a complete system downtime (RIS/PACS/PAS), the areas that rely on electronic justification would suffer the most. It would take quite some time to get the RIS DR system up and running, during which we would be without the vital requesting information (for already booked examinations).

What backup systems or workflows can be employed to ensure referral data is available during these types of ‘complete’ electronic downtimes? Thanks.

Hi Keith, we have a second BC/DR server running a copy of our RIS (CRIS) and PACS (Fuji Synapse). The specification we put in for the system promised a 4-hour downtime (between you and me and the rest of the forum, it is most probably going to take quite a bit longer, depending on the nature of the disaster).
We run a scheduled extract every 4 hours on our RIS which stores appointments for the next 4 weeks.
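For anyone curious, the extract itself is nothing clever. Here’s a minimal sketch of the sort of job you could schedule (the DSN, table and column names are invented placeholders - substitute whatever your own RIS reporting copy actually uses):

```python
# Minimal sketch of a scheduled RIS appointment extract.
# NB: the DSN, table and column names are invented placeholders.
import csv
import datetime

import pyodbc  # assumes an ODBC connection to a read-only reporting copy

QUERY = """
    SELECT appointment_datetime, patient_id, patient_name,
           exam_code, exam_description, clinical_indication,
           protocol, justification_status
    FROM appointments
    WHERE appointment_datetime BETWEEN ? AND ?
    ORDER BY appointment_datetime
"""

def run_extract(out_path: str) -> None:
    now = datetime.datetime.now()
    conn = pyodbc.connect("DSN=RIS_REPORTING")  # placeholder DSN
    cursor = conn.cursor()
    # Everything booked for the next 4 weeks, oldest first.
    cursor.execute(QUERY, now, now + datetime.timedelta(weeks=4))
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cursor.description])
        writer.writerows(cursor.fetchall())
    conn.close()

if __name__ == "__main__":
    # Run from Windows Task Scheduler / cron every 4 hours.
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M")
    run_extract(f"ris_extract_{stamp}.csv")
```

The important bit is where the file ends up - somewhere the department can still reach when RIS itself is down.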
Are you expecting to continue as normal when RIS/PACS/PAS are down? We keep a stack of barbed wire and 3 tin hats for those events in our PACS office.


Thanks Frank. I’ve been tasked with improving our major downtime procedures, in particular for a prolonged outage of all three major systems (RIS/PACS/PAS), and with finding a way of keeping as much BAU work going as possible. For ad-hoc requests during the downtime we have paper requests to satisfy the need to justify acute work. However, I’m struggling to come up with a solution for planned and already booked work in the systems - all of which are down. It may be that expecting that work to continue is unrealistic - but I just wanted to know if anyone else out there had come up with a solution to this which we could consider implementing. My order for tin hats and barbed wire has been placed.


A daily scheduled data warehouse dump from RIS would not be too difficult to arrange - that would at least give you access to the day’s schedule, order details, scan protocols and justification, etc.

For a much more developed solution, we created a Diagnostic Support Tool at PA Consulting for Nottingham University Hospitals that matches patients on the waiting list to gaps in the diary (and has boosted utilisation as a result) - that would also help bridge a downtime gap as it is fed via the Trust’s data warehouse. If anyone is interested in learning more, then please feel free to reach out.

Hi Frank

Did Wellbeing help you implement these 4-week extracts, or have you done this with a stat report stored on a trust server somewhere?

In CAM we have a DR backup and can switch to this if required within 30 minutes to 1 hour, but there is no absolute safeguard against, say, a cyber attack replicating to both systems.

Currently we each have a read-only backup of CRIS on a local on-site server; it takes a copy at midnight of the entire application and appointments up to that point, but you can’t use it as a working RIS.

We are about to lose this functionality when we move to the hosted Wellbeing datacentre in December, and some trusts are nervous about this without our crutch …

Might order some tin hats ….


So it depends what degree of failure you’re talking about. The failure of only the primary/live RIS should be handled pretty quickly by transferring to a backup server, which should always contain an up-to-date copy of the live system’s data. In this scenario, repointing the IP addresses and interfaces shouldn’t really take more than an hour or so. In my experience (we’ve had a few similar episodes over the last decade…), if you’ve instead got a complete multi-system failure across your institution, then I think you’ve got bigger things to worry about than catching up on some cancelled out-patient scans at a later date. Just trying to keep an emergency service running will be hard enough!

In a complete network failure scenario, you won’t have access to any live or backup/DR systems either. This will result in losing access to the clinical indication for the planned exam, prior imaging history, current renal function, allergy information, vetting/protocol information, etc. This makes it very difficult to safely and appropriately scan out-patients. In our BCP, we move to a full paper-based system, including expecting clinical teams to re-request scans for inpatients already requested/accepted, so we have a new copy of all the relevant information.

As mentioned above, if your systems support it then you could run a regular export of the next day’s appointment details overnight, but I’d guess you’d then have lost access to the network folder or cloud storage the file/data is stored on, unless you’ve come up with a backup internet access plan as well…
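If you do go down that route, one mitigation is to have the export job fan the file out to the local disks of a handful of PCs in each area while the network is still up, so at least one copy survives a storage or internet outage. A rough sketch (all paths and hostnames below are made up):

```python
# Sketch: copy the overnight extract to several destinations so a network
# or cloud outage doesn't take every copy with it. Paths are invented.
import shutil
from pathlib import Path

# e.g. local drives on reception/control-room PCs in each modality area
DESTINATIONS = [
    Path(r"C:\downtime\ris_extract.csv"),              # this PC's own disk
    Path(r"\\ct-recep-01\downtime\ris_extract.csv"),   # pushed while network is up
    Path(r"\\mri-recep-01\downtime\ris_extract.csv"),
]

def distribute(extract: Path) -> None:
    for dest in DESTINATIONS:
        try:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(extract, dest)
        except OSError as exc:
            # One unreachable machine shouldn't stop the other copies.
            print(f"Could not copy to {dest}: {exc}")

if __name__ == "__main__":
    distribute(Path("ris_extract_latest.csv"))
```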

And if your scanners are anything like ours then they can’t store more than a couple of days’ worth of data anyway, which would limit the amount of ‘offline’ scanning you can do if it’s a prolonged downtime.

I would suggest you model a few main scenarios and come up with several proposals for each one, with varying costs and risks/benefits, to see what is realistically achievable. IMHO, trying to maintain the illusion that the NHS can run full clinical services during a complete digital system failure, when we’re now so reliant on technology, is a mistake.


Hi,

We have PACS-based reporting. Our RIS sends details of a request to PACS at the point it is accepted onto the system. As comments are added to RIS, these also update on PACS, so we can see full vetting comments on either system.
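(For anyone wondering what that feed looks like under the hood: it’s standard HL7 v2 order messaging handled by the integration engine. The snippet below is purely illustrative - the message content is invented, just to show where a vetting comment travels, and isn’t any particular vendor’s interface spec.)

```python
# Illustrative only: a made-up HL7 v2 order update of the kind a RIS might
# send to PACS, with the vetting comment carried in an NTE segment.
SAMPLE_ORM = "\r".join([
    "MSH|^~\\&|RIS|HOSP|PACS|HOSP|202401151030||ORM^O01|12345|P|2.3",
    "PID|1||1234567^^^HOSP||TEST^PATIENT",
    "ORC|XO|ACC0001",                       # XO = change/update order
    "OBR|1|ACC0001||CTHEAD^CT Head",
    "NTE|1||Vetted: contrast OK, eGFR 72",  # vetting comment from RIS
])

def segments(message: str) -> dict[str, list[str]]:
    """Index an HL7 v2 message by segment name (first occurrence of each)."""
    out = {}
    for seg in message.split("\r"):
        fields = seg.split("|")
        out.setdefault(fields[0], fields)
    return out

segs = segments(SAMPLE_ORM)
print("Accession:", segs["ORC"][2])  # ACC0001
print("Comment:  ", segs["NTE"][3])  # the vetting text PACS displays
```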

We have a DR solution on RIS, but I don’t believe we have tested it. However, our PACS server and storage is fully replicated in a data centre on another site, and failover time is less than 2 hours. The route our reports take back to ordercomms is via RIS, but verified reports are also available in our web-based viewer, which can be launched either in standalone mode or in context from our EPR portal or document management system.

We still remain vulnerable to network outages, and I believe we are still working on adding IEP nodes to our backup PACS, so until that is resolved we would be unable to use IEP, although our web-based viewer does provide access to imaging performed on other sites in our collaborative (and obviously they also have access to ours). We can also push images from other sites into our PACS via the web viewer if necessary.


No help from Wellbeing. In theory we would switch over to our DR server (a copy of production), but just in case the untested switch-over doesn’t go to plan, I thought it would be good to know who might show up the next day. As discussed in the chain, in case of a major disaster with no network etc., we would be frying bigger fish.

(I tried to arrange a BC/DR test run a few years ago, but the full process could not be tested due to the risk of data loss.)

Thanks all for your contributions. I have a few further ideas to explore now.