The MHC’s funding of AOD services is significant. It engages 41 providers through 115 contracts to deliver services to people seeking alcohol or other drug treatment. The total cost of these contracts was $57.7 million in 2017-18, with around half of the total expenditure going to 11 providers for 19 contracts. It is important for the MHC to know that it is getting value for money.
Assessing the effectiveness of drug treatment is difficult at both the individual and the organisational level. At the individual level, this is because treatments do not aim at a cure but at long-term behaviour change. The 2 key accepted ways to assess whether treatments have been effective are to measure completion of treatments and to collect client self-assessments. At the organisational level, it is difficult because treatment does not have a defined pathway or end point. It is also expected that people will use services more than once over varying periods of time.
The MHC manages its relationship with individual providers well
As outcomes are difficult to measure, contract managers need to maintain good current knowledge of, and close relationships with, their contractors. The MHC has long familiarity with its service providers and understands the field well. This is helped by professional expertise and relatively high levels of staff continuity. As a result, it manages individual providers well.
The MHC’s contract managers:
- conduct regular reviews of each contract in line with Key Performance Indicators (KPIs)
- ensure providers comply with major requirements of their contracts such as maintaining accreditation, maintaining appropriate insurance and continuing to offer the service defined by their contract
- manage any critical incidents with providers
- respond to any queries that the provider may have about their contract or any other issues
- understand the providers’ businesses.
The MHC collects a lot of information but does not analyse it to ensure service effectiveness and value for money
The MHC collects comprehensive information, by provider, on every episode of treatment, including completions and self-assessments. Most providers input this information directly into the MHC’s data system, known as SIMS. One provider does not share data at this level.
The MHC monitors and reports this information in aggregate terms. However, it does not use this information to assess how people use treatment services or how service providers are performing, or to predict future demand for services.
The MHC collects data from service providers and works with them to address issues when data collection requirements are not met. However, not all providers share their data, and the MHC does not analyse all the data it collects to the extent possible. It has not conducted a comparative assessment of treatment service providers to understand which services are most effective and give value for money.
Specifically, the MHC collects and reviews:
- client outcome and output information from every provider
- client satisfaction information from every provider
- contract KPI information from providers
- waiting list information for CADS and, since January 2018, residential rehab.
However, we would also have expected the MHC to:
- use benchmarks to enable cross-service comparisons
- analyse outcome data to assess the relative effectiveness of services it funds
- verify that clients are receiving the services that they need
- analyse data about individual treatment pathways
- compare client post-treatment self-assessment data from different providers.
Also, it does not collect data on:
- how many times individuals return for treatment, over what periods of time, or whether they attend more than 1 facility
- treatments per individual per provider
- comparative completion rates of providers
- waitlist differences between providers in different locations
- how long people are on residential rehab waitlists.
This makes it difficult for the MHC to assess if providers are meeting clients’ needs effectively and efficiently. In turn, this limits its ability to improve the mix of services offered by providers.
The MHC is not using service KPI information as effectively as it could to improve the planning and management of treatment services
The MHC relies heavily on contract KPIs and targets to assess provider performance. However, the targets vary between providers and the MHC does not always enforce contract requirements for reporting. This indicates that there is no clear benchmark for performance. In turn, this limits the assurance the MHC can have that the system is performing well.
Each contract the MHC has established with providers includes a consistent set of activity-based measures. Many of them mirror the factors in the client assessments on SIMS, but measured as percentages of treatment episodes, for example, the percentage of clients reporting improved physical health.
While the measures are consistent, the targets are not, limiting the MHC’s ability to use them to compare performance. For example, the target for improved mental health of clients of non-residential counselling ranges from 40% to 80%. It is not clear if this is based on any unique quality of the service or client base, and suggests that providers might be interpreting the same measure differently. Further, the MHC has not required all providers to report against all the measures.
While the MHC generally manages providers well, its current contract templates have no penalty terms that it can use to drive improvements. The MHC also has no guidance or policy for contract managers to deal with providers who may not meet performance requirements. This means it could do little if providers did not meet contract expectations.
The MHC relies on its experience and understanding of treatment models, how long-established providers work, and academic research to inform its contracting. It also requires external accreditation of providers to assure service quality. While the MHC has accepted these accreditations to date, we saw no evidence that it evaluates their suitability or that providers consistently meet the standards within them. Given other information weaknesses, this limits its assurance that services are meeting client needs.
While the MHC has collected CADS data for a number of years, it only started systematically collecting rehab waitlist information from January 2018 and not all providers are reporting it. Our analysis of CADS wait times showed much longer waiting times for some services than others. We did not see any evidence that the MHC has addressed these differences.