NAPT
Service report FAQs


General questions

Q: The report has been emailed to me, but I really need a physical copy to read and share with the team. Can you provide this?

A: We are happy to provide a physical copy of either the national report or your service report if needed. Please email the member of the NAPT team who sent it to you, or the NAPT email address:

napt@rcpsych.ac.uk

If you need more than one copy, we ask that you print it yourself: we had over 350 participating services at baseline, and unfortunately we do not have the time or budget to provide more than this.

Q: Who owns the NAPT data?

A: The data is owned by HQIP, the Healthcare Quality Improvement Partnership, which has funded the National Audit of Psychological Therapies. HQIP have received a copy of the national report from the NAPT team.

Any requests for additional analyses of the data, or for further reporting of it, need to come to NAPT, who will in turn liaise with HQIP.

Q: Who has the information in the reports been shared with?

A: The local service reports are private, and have only been sent to the audit lead(s) for each service. The national report is publicly available, and can be found on our website:

NAPT National Report 2011

Q: How can we best share the report information with service users?

A: There is a leaflet and a poster which summarise the main findings of the audit, and also a report for service users and members of the public which presents the results in more detail while remaining clear and readable.


Questions about your service's reports

Q: My service had 92% data completeness for ethnicity, but our service report said that this was in the 39th percentile, and ‘Below average services’. I thought 92% was quite good – why do we come out as ‘Below average’? [NB: A similar question could be asked about their service user questionnaire results – see below]

A: The percentiles and quartiles show how your service’s performance on that standard compares with other participating services. If most services do well on a particular standard, then the spread of scores across services may be quite narrow. Being in the 39th percentile, for example, just means that 39% of services scored lower than 92% and 61% scored higher; the actual differences between services may be quite small.

This is true for ethnicity coding (St 1a) and the service user standards (St 7 and 8), but for other standards, such as waiting times (St 2 and 3) and NICE adherence (St 4 and 5), there is a wider distribution of scores. In those cases, being in the ‘Bottom 25% services’ may mean that your service’s performance is substantially worse than that of most other participating services, and may therefore merit some attention.
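
To make the percentile idea concrete, here is a minimal sketch (with made-up completeness scores, not NAPT data) of how a percentile rank is calculated, and why a high raw score can still sit below the median when services cluster tightly:

```python
# Illustrative only: percentile rank as the percentage of services
# scoring strictly below a given score.

def percentile_rank(score, all_scores):
    """Percentage of services scoring less than `score`."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

# Hypothetical ethnicity-completeness scores for 10 services, all
# clustered in the 90s.
scores = [90, 91, 91, 92, 93, 93, 94, 95, 96, 97]

print(percentile_rank(92, scores))  # 30.0 -- below average despite 92%
```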


Q: Standard 1b says that we need to determine whether the standard has been met locally, and consider our data in the light of our service’s target population and local demographics. How do we do this?

A: Please see the action planning toolkit under Standard 1b. This details the questions your service should consider when determining whether it has met this standard, and the sources of information you can use, such as the ONS website, the National Equalities in Mental Health Programme, and various IAPT resources.

Q: My report says that there was not enough data provided to calculate Standard 2 (waiting to assessment). Why is this?

A: In many cases, this was because the data extract provided did not include all the dates needed to calculate waiting time to assessment, i.e. both the date of referral (Q11) and the date of first appointment attended, i.e. the date of assessment (Q13). This was the case for several of the data extracts that came from CORE IMS, as the way data is collected on that system does not make it easy to derive an accurate date of first appointment attended.
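
As a minimal illustration (not the audit’s actual code; Q11 and Q13 are the audit question numbers for the two dates), the waiting time can only be derived when both dates are present:

```python
from datetime import date

def waiting_days(referral_date, first_appointment_date):
    """Days from referral (Q11) to first appointment attended (Q13).

    Returns None if either date is missing, in which case the
    standard cannot be calculated for that patient.
    """
    if referral_date is None or first_appointment_date is None:
        return None
    return (first_appointment_date - referral_date).days

print(waiting_days(date(2011, 3, 1), date(2011, 4, 12)))  # 42
print(waiting_days(date(2011, 3, 1), None))               # None
```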


Q: My service was not measured for Standard 4 or 5, as we do not give patients a diagnosis. Why were these standards not measured?

A: We understand that some services do not give a diagnosis. This might be for a number of reasons: for example, the service may not characterise a patient’s condition according to a ‘medical model’, or the therapists and workers employed by the service may not be trained to give a diagnosis.

This means that we cannot measure Standard 4, which looks at whether a patient has had one of the therapies NICE recommends for their diagnosis, or Standard 5, which looks at whether they have had the NICE recommended number of high intensity therapy sessions, or have ‘recovered’.

Q: The NAPT team defines recovery as ‘moving from caseness to non-caseness’, and reliable improvement as ‘determined by calculating the reliable change index for the relevant measure’. Why do you use these definitions? They would not necessarily be used by our service or by service users, who may have a completely different view of what constitutes ‘recovery’.

A: We produced these definitions in collaboration with our partners at the Centre for Psychological Services Research, University of Sheffield, who help us with the analysis for the outcome measures standards (Standards 5 and 9). These definitions are in common use among researchers who calculate recovery rates for services using a variety of different outcome measures; comparing recovery rates across services that use different measures can be quite a complex process.

We recognise that ‘recovery’ may mean something different to a service user, and this is why we have placed great emphasis in this audit on producing a service user questionnaire which includes questions about whether the outcomes of treatment were helpful to the service user.
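
For readers who want the arithmetic, below is a minimal sketch of the Jacobson and Truax reliable change index, the standard formulation behind ‘reliable improvement’ in this literature; the scores, standard deviation, and reliability values are placeholders, not NAPT’s figures:

```python
import math

def reliable_change_index(pre, post, sd, reliability):
    """RCI = (post - pre) / standard error of the difference,
    where the standard error of measurement is sd * sqrt(1 - reliability)."""
    se_measurement = sd * math.sqrt(1 - reliability)
    se_difference = math.sqrt(2) * se_measurement
    return (post - pre) / se_difference

# Hypothetical scores on a measure where lower is better:
rci = reliable_change_index(pre=18, post=8, sd=6.0, reliability=0.89)
print(round(rci, 2))  # -3.55
print("reliable improvement" if rci < -1.96 else "no reliable change")
```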


Q: Our service uses a bespoke measure which is suitable for the patients in our service (e.g. older people, or people with OCD). We submitted our pre- and post-treatment scores on this measure. However, our service report says for Standard 9b that ‘it was not possible to calculate this standard, as this service did not submit data on the common outcome measures used by NAPT to calculate recovery’. Why is this?

A: If your service submitted pre- and post-treatment scores on any measure, then this has been counted towards Standard 9a, the percentage of patients with a complete outcome measure.

However, to calculate recovery (Standard 9b), we had to use the commonly used outcome measures mentioned in the algorithm on pages 74-75 of the national report:

1. If both the PHQ-9 and GAD-7 had been used, then caseness was defined as being above the cut-off on at least one of these.

2. If they had not both been used, but there was a pre-treatment CORE score, then caseness was defined as being above the cut-off on CORE.

3. If neither of the above applied, the measure used depended on the primary diagnosis:

4a. If the primary diagnosis was depression, a measure of depression was used, with the following order of priority: PHQ-9, HADS-D, BDI.

4b. If the primary diagnosis was an anxiety disorder, a measure of anxiety was used, with the following order of priority: GAD-7, HADS-A, BAI.

This is so we can make valid comparisons between services which use different outcome measures. The measures listed above have been used previously by researchers, such as our colleagues in Sheffield, to make such comparisons. Some of the less common measures have not been used in this way, so it is not possible to make valid comparisons for services which only use those measures.
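
As an illustration, here is a minimal sketch of the measure-selection steps described above; the function and field names and the `above_cutoff` helper are our own illustrative assumptions, not the NAPT team’s actual implementation:

```python
# Sketch of the measure-selection steps above (cf. pages 74-75 of the
# national report). Names and the `above_cutoff` helper are illustrative
# assumptions, not the NAPT team's actual code.

DEPRESSION_PRIORITY = ["PHQ-9", "HADS-D", "BDI"]
ANXIETY_PRIORITY = ["GAD-7", "HADS-A", "BAI"]

def select_caseness(pre_scores, primary_diagnosis, above_cutoff):
    """Return (measure used, caseness) or None if no common measure was submitted.

    pre_scores: dict mapping measure name -> pre-treatment score
    above_cutoff: function (measure, score) -> bool
    """
    # 1. If both PHQ-9 and GAD-7 were used: above cut-off on at least one.
    if "PHQ-9" in pre_scores and "GAD-7" in pre_scores:
        case = any(above_cutoff(m, pre_scores[m]) for m in ("PHQ-9", "GAD-7"))
        return ("PHQ-9/GAD-7", case)

    # 2. Otherwise, fall back to a pre-treatment CORE score if there is one.
    if "CORE" in pre_scores:
        return ("CORE", above_cutoff("CORE", pre_scores["CORE"]))

    # 3/4. Otherwise choose a measure by primary diagnosis, in priority order.
    if primary_diagnosis == "depression":
        priority = DEPRESSION_PRIORITY
    elif primary_diagnosis == "anxiety disorder":
        priority = ANXIETY_PRIORITY
    else:
        priority = []
    for measure in priority:
        if measure in pre_scores:
            return (measure, above_cutoff(measure, pre_scores[measure]))

    # No common measure: Standard 9b cannot be calculated for this patient.
    return None
```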


Q: Our service has scored ‘Bottom 25% services (1-25%)’ or ‘Below average services (26-50%)’ for several standards. We are a small service that has recently undergone many changes, and/or had cuts in funding and staff. This report is only going to worsen morale among our staff, and lead both commissioners and service users to question the value of our service. How useful is this report to us?

A: We understand that there are a number of reasons why a service may not perform well on a standard. In some cases this may be because data collection or recording is not as good as it should be, e.g. ethnicity recording, or recording of the exact type of therapy provided. In other cases, it may be because the service is under real pressure on resources, so waiting times, for example, might be particularly long.

The NAPT team wants to support services to improve their performance, not to ‘punish’ them for poor performance. This is why we have produced the action planning toolkit, and will be running regional action planning events. If there are particular issues in your service which you would like support with, please contact us. We are also hoping to identify services which do particularly well on a standard, so that we can pinpoint why they are so good in that area and help other services to use their ideas.

We hope that the reaudit taking place next year will show that services have been able to make improvements in the areas where they were having problems.


NAPT, 4th Floor Standon House, Mansell Street, London, E1 8AA

Tel: 020 7977 4984 Fax: 020 7481 4831 Email: napt@cru.rcpsych.ac.uk

