The iAbacus® Improvement Model

Introduction

This paper explains The iAbacus® Improvement Model and describes the unique professional development process it represents. The Model charts the process that builds an organisation’s capacity for success. The process underpins the model by using collaborative inquiry, or action research, to empower individuals and teams as they self-evaluate and plan. The paper concludes by detailing the seamless link between the iAbacus® Improvement Model, the process and The iAbacus® on-line self-evaluation and improvement tool.

The iAbacus® Improvement Model emerged from proven education and business developments in the late 1900s and early 2000s. Unusually and deliberately, it combines the emotional intelligence required in effective self-evaluation, coaching and mentoring with the rigours of criterion-referenced inspection and review. There are seven stages in the model, which are surfaced by asking four simple but key questions:

1.  How well are we performing now?

2.  What evidence justifies this judgement?

3.  What will help and hinder our success?

4.  What are we planning to do next? 

For ease of understanding, the four questions are listed below against the relevant stages in the Model.
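
Purely as an illustrative summary, and not as part of the iAbacus software, the mapping between the seven stages and the four questions can be sketched as a simple structure. The Python below is hypothetical; the stage wording paraphrases the headings that follow, and stages without a question of their own are marked None.

```python
# Hypothetical sketch only: the seven stages paired with the key question
# each one surfaces (None where a stage has no question of its own).
STAGES = [
    ("Make your judgement",             "How well are we performing now?"),
    ("Check it against criteria",       None),
    ("Justify it with evidence",        "What evidence justifies this judgement?"),
    ("Analyse",                         "What will help and hinder our success?"),
    ("Action plan",                     "What are we planning to do next?"),
    ("Take action and evaluate impact", "How well are we performing now?"),
    ("Collaborate and disseminate",     None),
]

for number, (stage, question) in enumerate(STAGES, start=1):
    line = f"{number}. {stage}"
    if question:
        line += f" – {question}"
    print(line)
```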

1. Make your JUDGEMENT – How well are we performing now?

One unique feature of the iAbacus model is that it starts with the professional making an initial, intuitive judgement about their current level of performance, using their nous (practical intelligence). This classic coaching, or quality assurance, stance places ownership and responsibility firmly within the individual’s, or team’s, purview. It is founded on decades of experience demonstrating that the professional, in situ, is likely to know and understand more than anyone else about the special local circumstances influencing his or her practice. It also makes a critical assumption, not always present in inspection and review systems, that the individual and team do understand both their level of performance and the reasons it may be so. Whilst this is a counterintuitive starting point for some involved in assessing quality, it is supported by research evidence indicating that self-evaluation is the most effective starting point for learning, especially when followed by feedback based on criteria and evidence. In short, The iAbacus Improvement Model is a learning and capacity building process.

Most other evaluation or improvement models use a quality control approach, commencing with data and evidence gathering. Ironically, this is because these approaches were often designed by, and for, external evaluators or inspectors, who require evidence to make their judgement. It should be remembered that quality control models are not primarily for learning purposes – they tend to be regulatory. The iAbacus model is a learning model. Therefore, it starts from the perspective of the self-evaluator who, as we explained, is best placed to know the detail of their local circumstances.

Because the iAbacus is a learning model, it is premised on the view that wiser external evaluators, collaborators and leaders will suspend their own judgement and use it only, when appropriate, in feedback to strengthen the capacity of the professionals involved in self-evaluation and improvement planning. This is a subtle but crucial point, founded on an understanding of the factors that underpin sustainable progress. In short, the iAbacus Improvement Model describes a process of validated self-evaluation.

 

2. Check it against CRITERIA

This stage of the iAbacus model requires individuals, or teams, to check their initial, intuitive and sometimes norm-referenced judgement against agreed criteria and make appropriate adjustments, ensuring the original judgement is accurate and criterion-referenced before moving on. At this stage, the individual’s personal view is put in a wider, and often national, context. This not only raises awareness and expectations but also influences, or validates, their professional perspective. It can also raise their confidence and certainly informs their judgement-making as the foundation for improvement.

Research shows that many, if not most, will underestimate their level of performance or achievement, even when given the descriptors and detailed criteria for the judgement. Others will overestimate. Requiring judgements to be verified at this stage brings two clear benefits: it challenges both pessimistic and optimistic self-evaluators, and it reinforces the value of meaningful success criteria, competencies, targets and measures.

The need for, and use of, agreed criteria is hotly debated. Some believe that meaningful progress requires the individual, team and organisation to develop their own criteria, rather than mutely accept given criteria. In early use of the model, however, not offering starter criteria confused some users. Consequently, later versions included starter sets of criteria, or measures.

It is worth stating here, before the on-line iAbacus is described, that a clear benefit of the on-line tool is the provision of prepopulated sets of criteria which users can adopt “as is” or make “bespoke” to their circumstances. Individuals can accept the given criteria, modify them, augment them or replace them – the choice is theirs. Blank templates, without criteria, are offered for users to create their own criteria in workshop or collaborative mode.

3. Justify it with EVIDENCE – What evidence supports our judgement?

Collection of evidence is deferred to this stage to challenge self-evaluators to justify their judgement by matching selected evidence to specific criteria. This mitigates the tendency to collect masses of evidence – data mountains – to be mined later, before any judgement can be made. The relationship between judgement, criteria and evidence forms the foundation for improvement, or success, planning. Experience shows that models relying on external evaluators (e.g. auditors or inspectors) to find and select evidence, and set it against known criteria in order to make judgements, can disempower and demotivate those subject to the evaluation, who would often come to similar judgements themselves. Taking away the right to, or credence of, self-evaluation is, in the view of many school improvers, anti-professional. Furthermore, it can, and does, introduce scepticism about the accuracy of the judgements made.

IMPORTANT NOTE: In many self-evaluation, inspection and quality control approaches (for example The School Self-Evaluation Form used in England up to 2011) the staged process ends here, with the judgement made, justified with evidence, and some key issues identified. The iAbacus Improvement Model differs by continuing on, past descriptions of “where we are” in relation to “where we need to be,” into an analysis of “Why is this?” and “How might we progress?” This encourages the individual, team or organisation not only to evaluate performance but to plan and act to make progress.

4. ANALYSE – What will help and hinder our progress?

Analysis is the heart of the iAbacus model and process. Here, a unique and detailed analysis of prevailing factors is undertaken. People and resources that have helped, or may help, progress are identified and set against the opposing forces that hinder, or may become barriers to, improvement. Once those under the control of the individual, team or organisation have been identified, they can be prioritised and action planning can begin. This has proved to be a powerful and effective approach in mentoring, coaching and consultancy, because it strengthens the skills of analysis within an organisation. Once again, analysis places the professional, not their line manager or reviewer, at the centre of the process. This is not to say that the advice of these ‘significant others’ should not be sought, used, or even offered; merely that the key aim remains the professional’s capacity to be self-reliant and self-commissioning. A key benefit at the analysis stage is the challenge to individuals and teams to identify and deal with all aspects within their control. The toughest stage for self-evaluators is facing these issues as their own.
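
To make the analysis step concrete, the sketch below is a hedged, purely hypothetical illustration in Python: the field names, the impact scale and the prioritising rule are assumptions for illustration and are not part of the iAbacus model or software. It shows helping and hindering factors being filtered to those within the self-evaluator’s control and ordered by likely impact, ready for action planning.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    description: str
    helps: bool           # True = helping force, False = hindering force or barrier
    in_our_control: bool  # can the individual, team or organisation act on it?
    impact: int           # 1 (slight) .. 5 (strong) – an assumed scale

def prioritise(factors):
    """Return only the factors we can act on, strongest impact first."""
    actionable = [f for f in factors if f.in_our_control]
    return sorted(actionable, key=lambda f: f.impact, reverse=True)

factors = [
    Factor("Experienced subject leaders willing to coach", helps=True, in_our_control=True, impact=4),
    Factor("Timetable leaves no shared planning time", helps=False, in_our_control=True, impact=5),
    Factor("Pending change to the national inspection framework", helps=False, in_our_control=False, impact=3),
]

for f in prioritise(factors):
    action = "harness and strengthen" if f.helps else "weaken or remove"
    print(f"{action}: {f.description} (impact {f.impact})")
```

In practice the model relies on professional discussion rather than a fixed weighting; the code simply makes the filtering and prioritising steps visible.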

5. ACTION PLAN – What are we planning to do next?

Action planning is the best understood stage within the iAbacus model. Here, priority helping factors (people, knowledge and resources) are harnessed and strengthened. In addition, plans to weaken the hindering factors, or remove barriers, are detailed. Most users will be familiar with a range of proformas to aid planning. Our research shows that the key elements of an effective format include: What is to be done? Who does it (which may include resources)? How will it be measured? And when will it be done by? Planning is made more precise, in the iAbacus model, by ensuring success criteria are present at the start, when making judgements, and at the end, when planning.
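
As a hedged illustration only, the sketch below represents that planning format as a simple record; the field names and the example entry are hypothetical and are not drawn from the iAbacus tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Action:
    what: str             # What is to be done?
    who: str              # Who does it (which may include resources)?
    success_measure: str  # How will it be measured?
    deadline: date        # When will it be done by?

plan = [
    Action(
        what="Introduce fortnightly peer observation of questioning",
        who="Head of department, with cover funded from the CPD budget",
        success_measure="All staff observed twice; feedback logged against agreed criteria",
        deadline=date(2014, 7, 18),
    ),
]

for a in plan:
    print(f"{a.what} | {a.who} | {a.success_measure} | by {a.deadline:%d %b %Y}")
```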

6. Take action and EVALUATE impact – How well are we performing now?

This familiar element incorporates the implementation stage of planning and development models, where plans to achieve the desired success are carried out. Stressing EVALUATION at the Take action stage is important so that there is a focus on the impact of ongoing actions. Learning from action research and collaborative inquiry suggests that a mindset of, “Looking at what we do with a view to doing it better next time, as part of the job” is far more effective than one which sees evaluation as a bolt-on extra, or something quality controllers do at the end of a process. The former signals personal accountability and builds professional responsibility whilst carrying out required functions. The latter tends to encourage lower motivation and personal investment whilst awaiting the judgement of more powerful others about success, or otherwise.

At this point the stages of the iAbacus model are complete, but the overall process is not, because the process must be cyclical if it is to be successful. We return to judgement – “Where are we now, after our actions?” – believing that further cycles will enable more progression and, hopefully, lead to success. In this way, the evaluation of impact is a return to making a new judgement. The cycle is repeated, making improvement planning, and the achievement of success, more likely.
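
The cyclical nature of the process can be sketched, again purely hypothetically, as a loop in which the evaluation of impact feeds straight back in as the next judgement; the function names are placeholders, not part of any iAbacus API.

```python
def improvement_cycle(make_judgement, plan_and_act, criteria_met, max_cycles=5):
    """Minimal sketch of the cycle: judge, then plan/act/evaluate, and repeat
    until the success criteria are judged to have been met."""
    judgement = make_judgement()             # "How well are we performing now?"
    for _ in range(max_cycles):
        if criteria_met(judgement):
            break                            # success criteria reached
        judgement = plan_and_act(judgement)  # analyse, plan, act, evaluate impact
    return judgement
```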

7. COLLABORATE and DISSEMINATE

Collaboration and dissemination are placed, out of sequence, at the centre of the iAbacus model to stress their importance. There are several opportunities in the cycle for collaboration to inform and enhance decision making. A good process will therefore include colleagues becoming involved in: checking criteria; coaching, mentoring and advice-giving around evidence selection; and supporting or challenging an individual’s, or team member’s, views at the analysis and planning stages.

The model is rooted in a professional learning, action research and collaborative inquiry approach, so ways to disseminate learning and progress – in the form of reports, papers, demonstrations, CPD and social media – are also included, in order to inform a wider professional community of practice.

The on-line iAbacus self-evaluation and improvement tool

John Pearce developed The iAbacus Model in the early 2000s, using it initially as a paper-based process for school improvement in schools of every phase and type across the UK. Sliding beads (left to right) on a physical abacus to represent progress visually proved a powerful and popular addition, and gave the model its name. Then, in 2011, John began a collaboration with Dan O’Brien of Opeus.com, an established software company, to create the iAbacus on-line self-evaluation and improvement tool for Education. The thinking, theory and research behind the original iAbacus Improvement Model informed development of the software, and each step in the original model has been faithfully retained, with only minor changes of wording to ease navigation. Although tempted by sophisticated possibilities in the software version, John and Dan resisted complexity in the firm belief that the power is in the model, and so its simplicity must be retained in the software. The on-line iAbacus was officially launched in May 2012 and, within months, was nominated for a Bett Award, becoming a finalist at BETT 2014 (see www.iabacus.co.uk). At the time of writing, John and Dan are researching and developing an on-line Business and Leisure version of The iAbacus.

References

The models, theories and research that influenced The iAbacus Model include: Kolb’s Learning Cycle, the Boyatzis Model, Kurt Lewin’s Force Field Analysis, Peter Senge’s Fifth Discipline (“Systems Thinking”), Gerard Egan’s Diamonds, John Hattie’s synthesis of research and sixth conclusion on feedback, Heron’s facilitation model, Hazel Taylor’s Tactical-Strategic-Capacity Building Model, John Pearce’s PANINI Model, the Ofsted Inspection handbook 1994–2013, and the writing of Michael Fullan, Alma Harris, John MacBeath and many others on Evaluation, Action Research and Collaborative Inquiry.

 

 

 
