'PHYSICS' testing of monitors

So, for those who don’t spend all day tied to the Medical Physics discussion boards, the question of routine / daily QC of monitors has come up again, as has the impact of the update of the AAPM reports (from TG-18 to TG-270).

Of course, the Physics realm is tied in to IPEM91 which really goes back to PACSnet as was…but there is no process in place for this to be revised just because the ‘source’ material has been updated.

So, a set of open questions for you as ‘users’ of the displays… I think I will set each up as a post so that you can respond to whichever you think appropriate.

Thanks for indulging me

  1. Do you think that there is really any role for ‘physicists’ in display monitoring in the modern ‘real world’ of reporting?
  2. How soon after a report such as TG-270 is released by the AAPM would you have expected it to be adopted in the UK?
  3. Would you rather have your monitors checked by ‘the supplier’, with their access to software and testing tools, or is there still some comfort in independent testing?
  4. If there is still some role for physicists, is this best fulfilled by
    a) leaving them to do what they want,
    b) getting them involved with issuing the professional standards, or
    c) moving towards a multidisciplinary body to take the lead on this whole topic?
  5. Do you think the automatic QC systems that ‘self test’ the monitors are now so reliable that nothing else is really needed as long as things are ‘all OK’?

Oh, and for full disclosure - this is what I posted over on the Medical Physics discussion board:

WRT: TG-18 vs TG-270

((This is all me - no old PACSnet member or any other blue-cover agency has been harmed in the writing of this post))

It seems that a few sites have gone over to the newer standards… especially where the PACS managers have decided to adopt the latest standards ahead of the curve. It seems to be working perfectly well for them, and I am sure posting over on that forum again now would probably show even more adopters among the radiology-led services.

*With the ‘physics hat’ on… from my reading of the TG-270 reports - and in particular the follow-up journal article on the practical implementation - the same LN images are still perfectly good enough for the sort of routine testing that we are likely to be carrying out; an 18-point fit to the curve is a reasonable balance for systems that are behaving…*
If you need to go beyond that, will it really be us involved any more? It is either going to be the PACS supplier coming in or someone just buying a new monitor and downgrading the older one.
As much fun as it would be to fit 255 points to the curve, will it actually make any real difference? What will we see that we don’t see within 18 points, especially considering that a large number of the systems we will be testing are locked down to the point that running ImageJ macros to generate the new images will be a pain. That said, I am a huge fan of the standalone tool from the Henry Ford group - check it out if you haven’t already.
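(If anyone wants to see what that 18-point check actually boils down to, here is a rough Python sketch of the sort of calculation involved - nothing to do with any vendor tool or official script. The GSDF coefficients are the ones published in DICOM PS3.14; the ‘measurements’ here are synthetic, and the pass/fail limit you apply should come from your own protocol, not from me.)

```python
import numpy as np

# DICOM PS3.14 Grayscale Standard Display Function:
# luminance (cd/m^2) as a function of JND index j, valid for 1 <= j <= 1023.
def gsdf_luminance(j):
    a, b, c = -1.3011877, -2.5840191e-2, 8.0242636e-2
    d, e, f = -1.0320229e-1, 1.3646699e-1, 2.8745620e-2
    g, h = -2.5468404e-2, -3.1978977e-3
    k, m = 1.2992634e-4, 1.3635334e-3
    lj = np.log(j)
    num = a + c * lj + e * lj**2 + g * lj**3 + m * lj**4
    den = 1 + b * lj + d * lj**2 + f * lj**3 + h * lj**4 + k * lj**5
    return 10.0 ** (num / den)

def jnd_index(lum):
    """Numerically invert the GSDF: the JND index corresponding to a luminance."""
    j = np.arange(1.0, 1024.0)
    return np.interp(np.log10(lum), np.log10(gsdf_luminance(j)), j)

# Synthetic stand-in for the 18 photometer readings (cd/m^2), darkest to
# brightest, ambient included: a GSDF-perfect display spanning 1.2 to 350
# cd/m^2 with one grey level deliberately reading 3% high.
measured = gsdf_luminance(np.linspace(jnd_index(1.2), jnd_index(350.0), 18))
measured[9] *= 1.03

# Contrast response: dL/L per JND over each of the 17 intervals, compared with
# the GSDF target rebuilt from the measured Lmin and Lmax.
jnds_per_step = (jnd_index(measured[-1]) - jnd_index(measured[0])) / 17.0
target = gsdf_luminance(
    np.linspace(jnd_index(measured[0]), jnd_index(measured[-1]), 18))

def contrast_per_jnd(lum):
    d_lum = np.diff(lum)
    l_mid = (lum[1:] + lum[:-1]) / 2.0
    return d_lum / l_mid / jnds_per_step

deviation = (contrast_per_jnd(measured) / contrast_per_jnd(target) - 1.0) * 100.0
print("worst contrast-response deviation: %.1f%%" % np.abs(deviation).max())
```

Swap the synthetic array for the real readings and the rest is unchanged; and if you really do want 255 points, the only thing that changes is the length of the array (and the 17 becomes 254), which rather makes the point about diminishing returns.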
Of course there will be some issues where people rely on the manufacturer’s own test tool software, as the AAPM group has deprecated the SMPTE image for testing - although I assume that has no direct impact here until someone ‘official’ says something.

I think that the biggest impact in terms of testing and managing the new regime will be the increased room illuminance; a lot of people seem to have seen the new figure without fully comprehending the impact on the lower-end specifications. The key test there is that the base-level screen output should be at least x4 (I would say x6) the reflected light.
In most cases, with modern screens reflecting as they do, that lower-end limit of a minimum 1 nit in the RCR Guidelines is still probably good enough unless you have a particularly bright room.
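The arithmetic behind that check is simple enough to do on a scrap of paper, but for completeness here is a little sketch of it. The diffuse reflection coefficient and the room numbers are made up for illustration - take the real coefficient from the display data sheet and the illuminance from a meter at the screen face - and the x4 / x6 factors are just the ones I mentioned above.

```python
# Rough ambient-light check: is the minimum calibrated luminance comfortably
# above the luminance reflected off the screen face? All numbers illustrative.

illuminance_lux = 20.0   # measured room illuminance at the screen face
rd = 0.015               # diffuse reflection coefficient, cd/m^2 per lux (from data sheet)
l_min = 1.5              # calibrated minimum luminance of the display, cd/m^2

l_ambient = rd * illuminance_lux   # luminance added by the room lighting
for factor in (4, 6):
    verdict = "pass" if l_min >= factor * l_ambient else "FAIL"
    print(f"Lmin >= {factor} x Lamb ({factor * l_ambient:.2f} cd/m^2): {verdict}")
```

Run the same three numbers for the brighter rooms the new figures allow and you can see exactly where the lower-end specification starts to bite.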

One thing though - please all stop doing the ‘CX’ test on LCD screens. I mean WHY !!!

Also, is it really that much harder for people to do a 9-reading uniformity check than a 5? Given the physical sizes of screens now, I would think that is a small but worthwhile change.
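For what it is worth, the calculation is identical whichever number of points you take; a quick sketch with made-up readings is below. The 200 x (Lmax - Lmin)/(Lmax + Lmin) form and the 30% figure are the commonly quoted TG-18 style metric and tolerance - as ever, defer to your own protocol.

```python
# Luminance uniformity from a set of same-grey-level readings (cd/m^2).
# Nine made-up readings: centre, four corners, four edge midpoints.
readings_9 = [168.0, 158.5, 161.2, 155.9, 160.3, 163.1, 157.4, 159.8, 162.5]

def non_uniformity(readings):
    """200 * (Lmax - Lmin) / (Lmax + Lmin), in percent."""
    lo, hi = min(readings), max(readings)
    return 200.0 * (hi - lo) / (hi + lo)

print(f"non-uniformity over {len(readings_9)} points: {non_uniformity(readings_9):.1f}%")
# Four extra readings, same one-line calculation - commonly judged against 30%.
```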

So, in summary, I feel the situation is:

Moved to TG-270? Probably not enough have yet.
Why not moving? The current system in effect discourages it.
What will change? Not much, apart from a few more uniformity measurements.

########################################################################

Of course there is a role for physicists in the calibration of display devices. The display device has to represent the data in a way that is faithful to the information being displayed, and the accuracy of this has to be measured. Any measurement of the physical world needs to be appraised, assessed and kept up to date in terms of “what is being measured”. It also needs to be independent of the manufacturers - otherwise how can you compare brands? Radiologists and reporting users have huge amounts of knowledge, but very few get into the physics of displays (I would say the same for any imaging-based -ology: pathology, cardiology, ophthalmology…).

Radiologists and reporting users need to rely on professional organisations to ensure that what they see is as faithful a representation of the underlying physics as possible.

I am brand new to this group and this is my first post. My take on this topic is that unless you are a staff member at a hospital, you are likely not going to be using the TG-270 patterns anytime soon. The vendors have done a great job with the calibration software (MediCal QAWeb, Medivisor, RadiCS, etc.), and until they decide to switch to TG-270, I don’t see this going anywhere really. Now to the topic of whether physicists need to be involved: although the automated programs are great, they still require a) human intervention (usually there is a visual evaluation requirement using an SMPTE pattern or TG18-QC pattern) and b) appropriate personnel for when automated tests or visual tests do not pass. This is the value of having the medical physicist involved! Personally, I think anyone with relevant certification and experience in DICOM/imaging informatics would likely be equally qualified. The value of the physicist is also apparent if one aims to go above and beyond (e.g. characterizing luminance ratios from workstation 1 to workstation 2, etc., in order to standardize performance across a fleet of displays).

My two cents
