How to Run a 360 Feedback Process: Step-by-Step Guide with Free Template
Last updated: March 2026
Most 360 feedback processes fail before the first survey is sent. The failure isn't in the questions; it's in the setup.
Without clear objectives, genuine anonymity, and a credible plan for what happens after feedback lands, a 360 process becomes a compliance exercise. Raters give diplomatic answers. Recipients read their results once and file them. Managers don't know what to do with the data. Six months later, nothing has changed, and the next time someone suggests running 360 feedback, half the room says "we tried that."
This guide walks through how to do it properly. Not the abbreviated version, not the slide deck version. The full process: objective setting, rater selection, survey design, analysis, feedback conversations, and development planning, with a 35-question template you can adapt and use.
If you're also evaluating performance management platforms to run this process digitally, see our guide to the best performance management software for Australian SMBs.
What 360 Feedback Actually Is
360-degree feedback, also called multi-rater feedback, collects input on an individual's performance from multiple sources: their manager, peers, direct reports, and sometimes external stakeholders. The feedback recipient also completes a self-assessment. The combination surfaces patterns and blind spots that no single perspective can capture.
The output is typically a report showing how different rater groups perceive the individual across key competencies, with self-ratings compared against others'. The goal is development, not judgement.
That last sentence matters more than it might seem. The biggest structural mistakes in 360 design come from treating the process as an evaluation tool rather than a developmental one. When people think their 360 results will affect their compensation or promotion, raters give inflated scores, recipients become defensive, and the data loses its value. Keep development and performance rating processes separate.
When to Use 360 Feedback (and When Not To)
Use it for:
- Managers stepping into new roles or expanding their scope
- High-potential employees preparing for promotion
- Leadership development programmes and executive coaching
- Identifying interpersonal patterns that a manager alone can't see
- Culture initiatives where self-awareness and team dynamics are central
Don't use it for:
- Termination decisions: the data is too subjective, and the process creates legal exposure
- Organisations without psychological safety: if people don't trust the process, you'll get surface-level responses that mislead rather than inform
- Situations where there's no follow-up plan: collecting feedback without acting on it demoralises raters and recipients alike
- Fast performance judgements: a 360 takes 6-8 weeks to administer properly, which is too slow for urgent performance management decisions
The core question is simple: is the goal development, or is it judgement? If it's judgement, use objective-based performance reviews instead.
Step 1: Define Objectives and Scope
Before designing anything, document what this 360 process is for.
Who is receiving feedback? All leaders, a specific cohort (emerging managers, senior leaders), or individuals nominated for coaching programmes?
What competencies are you assessing? Generic leadership behaviours, or competencies specific to your organisation's values and role requirements?
How will results be used? Development conversations only, or will managers see summaries for succession planning purposes?
Document this in a one-page brief before building anything. It sets expectations for every stakeholder and makes the process defensible if questions arise later.
Timeline: 1-2 weeks
Step 2: Select Participants and Raters
How many feedback recipients: First cycles typically include 10-20 people. Starting with senior leaders or a specific cohort (rather than rolling out to the whole organisation at once) reduces execution risk and allows you to refine the process before scaling.
Raters per person: Aim for 8-15 raters, structured across:
- 1-2 managers or supervisors
- 4-6 peers at similar level
- 3-6 direct reports, where applicable
- 2-3 external stakeholders (clients, partner teams), where relevant
The minimum threshold for any rater group is 3 responses. Below that, individual responses become identifiable, even with anonymity controls. If someone has only one direct report, omit that rater group from their cycle.
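If you're scripting any part of the setup, the threshold logic is easy to encode. Here's a minimal Python sketch (group names and structure are illustrative, not from any particular platform; it also assumes the manager group is handled separately, since a single manager's feedback is identifiable by design):

```python
MIN_PER_GROUP = 3   # below this floor, individual responses become identifiable

def check_rater_plan(plan: dict[str, int]) -> dict[str, int]:
    """Drop anonymous rater groups below the floor; sanity-check the total."""
    kept = {}
    for group, count in plan.items():
        if group != "manager" and count < MIN_PER_GROUP:
            print(f"Omitting '{group}' ({count} rater(s)): below the anonymity floor")
        else:
            kept[group] = count
    total = sum(kept.values())
    if not 8 <= total <= 15:
        print(f"Note: {total} total raters is outside the 8-15 guideline")
    return kept

check_rater_plan({"manager": 1, "peers": 5, "direct_reports": 1, "external": 3})
# -> omits direct_reports (1 rater); keeps 9 raters across three groups
```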
How to select raters: The feedback recipient nominates people they work closely with. Their manager reviews the list and nominates additional raters. The goal is a mix of perspectives: not just supporters, but also colleagues who will give honest developmental input. Avoid nominating people with obvious conflicts of interest.
Timeline: 1-2 weeks
Step 3: Prepare the Feedback Recipients
Brief every feedback recipient before the survey launches. This step is consistently underinvested in, and it consistently explains why some recipients become defensive when results arrive.
Cover the following:
- What 360 feedback is and what it isn't (development tool, not performance evaluation)
- Who is providing feedback (rater groups, not named individuals)
- What the questions cover
- How anonymity is protected
- What happens after: the feedback conversation, development planning, and any manager involvement
Make it a conversation, not a briefing. Invite questions. The recipients who ask the most questions during prep are usually the ones who get the most from the process. The ones who don't ask questions are often the ones who spiral when they see their results.
Timeline: 1 week
Step 4: Communicate to Raters and Open the Survey
Rater participation rates are directly correlated with how well you communicate the why. Generic survey invitations get generic completion rates.
A strong rater invitation includes:
- Why their input matters: "Your feedback helps [name] understand how they show up in the organisation"
- Confidentiality: individual responses are anonymous; results are aggregated by rater group
- Time expectation: the survey takes 10-12 minutes
- A firm deadline: 10-14 days from opening is the right window
- A reminder scheduled for day 7
Anonymity needs to be structural, not just promised. Use a platform that separates rater identity from responses at the data level. Analysing response timing, language patterns, or role details should not allow someone to identify who said what. Platforms like Praxiss handle this automatically. Spreadsheet-based processes don't.
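To make "structural" concrete, here's a hypothetical sketch of what data-level separation can look like. It isn't how Praxiss or any specific platform is built; it just shows the principle: the completion tracker knows who responded, while the response store knows only the rater group.

```python
import secrets

completion = {}   # rater_email -> True; used only to target reminders
responses = []    # response records with no link back to individual raters

def submit(rater_email: str, rater_group: str, answers: dict[str, int]) -> None:
    completion[rater_email] = True            # enough to know who still needs a nudge
    responses.append({
        "response_id": secrets.token_hex(8),  # random id, not derived from the rater
        "rater_group": rater_group,           # e.g. "peers": the only grouping retained
        "answers": answers,                   # no timestamp, no name, no role details
    })

submit("pat@example.com", "peers", {"Q1": 4, "Q2": 5})
# 'responses' can be aggregated by rater_group; nothing in it identifies pat
```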
Timeline: 2-3 weeks including reminders
Running this process manually is workable for a first cycle. If you want to automate the setup, anonymity controls, and AI-native analysis, start your free 14-day trial at app.praxiss.io.
Step 5: Monitor Response Rates and Follow Up
Target at least 80% completion from each rater group. Below that, the data starts to skew.
If a specific group is underperforming (say, only two of five direct reports have responded), investigate before closing. The causes are usually benign (missed invitation, busy period, unclear instructions) but occasionally signal a real problem: direct reports who don't feel safe giving feedback are telling you something important even by their silence.
A quiet individual follow-up, not a broadcast reminder, is the most effective intervention for low-response rater groups. If a group still won't complete the survey, omit them from the results and note the gap in the analysis.
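If you're tracking completion manually, the check is simple to script. A minimal sketch using the 80% target and the 3-response floor from above (group names are illustrative):

```python
def flag_groups(invited: dict[str, int], completed: dict[str, int]) -> list[str]:
    """Return rater groups that need a quiet individual follow-up."""
    flagged = []
    for group, n_invited in invited.items():
        n_done = completed.get(group, 0)
        rate = n_done / n_invited
        if rate < 0.8 or n_done < 3:
            flagged.append(group)
            print(f"{group}: {n_done}/{n_invited} complete ({rate:.0%})")
    return flagged

flag_groups(
    invited={"peers": 5, "direct_reports": 5, "external": 3},
    completed={"peers": 5, "direct_reports": 2, "external": 3},
)
# -> flags direct_reports at 2/5 (40%), the situation described above
```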
Timeline: Week 2-3
Step 6: Analyse Feedback and Prepare the Report
Once the survey closes, the analysis work begins. The questions you're trying to answer:
Consistency: Do all rater groups see this person similarly, or are there significant gaps between how their manager rates them versus how their direct reports experience them?
Strengths: Areas rated highly across multiple rater groups. These are relative strengths, not perfect areas. Build on them.
Development priorities: Areas where most raters agree there's room to grow. Two or three themes are actionable. Ten themes are paralysing.
Self-awareness gaps: The most developmentally significant data point is usually the difference between self-rating and others' ratings. A large gap in either direction (rating yourself much lower or much higher than others do) is worth exploring in the feedback conversation.
Outliers: One rater who is significantly more critical than the rest of their group. This could be a personality conflict, a legitimate concern, or a rater who misunderstood a question. Outliers don't determine the narrative, but they're worth noting.
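Two of these checks, the self-vs-others gap and within-group outliers, reduce to a few lines of arithmetic. A rough Python sketch follows; the 1.5 standard-deviation cutoff is an assumption for illustration, not an established rule:

```python
from statistics import mean, pstdev

def self_awareness_gap(self_score: float, other_scores: list[float]) -> float:
    """Positive = rates self higher than others do; negative = lower."""
    return self_score - mean(other_scores)

def outliers(group_scores: list[float], z_cutoff: float = 1.5) -> list[float]:
    """Scores more than z_cutoff population standard deviations from the mean."""
    if len(group_scores) < 3:
        return []            # too few responses to call anything an outlier
    mu, sd = mean(group_scores), pstdev(group_scores)
    return [s for s in group_scores if sd and abs(s - mu) / sd > z_cutoff]

print(self_awareness_gap(4.5, [3.2, 3.4, 3.1]))   # ~1.27: a likely blind spot
print(outliers([4.0, 4.2, 4.1, 2.0]))             # [2.0]: one notably critical rater
```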
Reports should use visuals where possible: radar charts, grouped bar charts, and word clouds from open-text responses make patterns visible faster than tables of numbers. Most dedicated platforms generate these automatically.
Timeline: 1-2 weeks
Step 7: Prepare Recipients for the Conversation
Before feedback recipients see their results, help them frame the experience. Many people approach 360 results with anxiety, and anxiety makes them defensive. Defensive people reject data that conflicts with their self-image, which means the most useful information (the blind spots) gets dismissed.
Key coaching points to deliver before the feedback conversation:
- Feedback reflects how people perceive you. Perception is data, even if you disagree with it.
- Look for themes, not individual comments. One critical response is a data point. Three people saying the same thing is a pattern.
- Separate intent from impact. You may not intend to come across as dismissive, but if multiple raters describe your communication that way, the impact matters regardless of intent.
- This process isn't a verdict. It's input for a development plan.
Some organisations provide a short 1-on-1 coaching session before the formal feedback conversation. The purpose is to process the emotional response to the data privately, so the feedback conversation can focus on action rather than defensiveness.
Timeline: 1 week before feedback conversation
Step 8: Facilitate Feedback Conversations
The feedback conversation is where 360 feedback either creates lasting development or becomes a demoralising experience. It's not a debrief. It's a conversation.
Who should facilitate:
- An external coach or trained HR professional is ideal: they create psychological safety and separate the development relationship from the management relationship
- The person's manager works if the relationship is strong and the manager has coaching skills
- Self-guided reading of results is rarely effective: people misinterpret data, fixate on criticism, and lack the support to build an action plan
Structure (approximately 60 minutes):
Set the frame (5 min): "This feedback is meant to support your development. It reflects how colleagues perceive you in specific situations; it's not a verdict on who you are."
Self-reflection first (10 min): Ask what surprised them, what resonated, and what they disagree with. Let them speak before interpreting the data.
Explore patterns (15 min): Focus on consistent themes. "Three of your peers mentioned that you're hard to read in high-pressure situations. What do you think is happening there?"
Discuss strengths (10 min): Don't skip this. People absorb critical feedback better when they've first been asked to acknowledge what's working.
Identify development priorities (10 min): Ask them, don't tell them. "If you could work on one or two things in the next six months, what would have the biggest impact?"
Next steps (5 min): Draft the shape of a development plan. Commit to a follow-up date.
Timeline: 1-2 weeks
Step 9: Co-Create Development Plans
Development plans that get used are the ones people have genuine agency over. A plan handed down from HR rarely gets traction. A plan co-created during a feedback conversation usually does.
Use the SMART framework as a structure:
- Specific: "Improve how I communicate context and rationale before asking the team to take on unplanned work" is better than "communicate better."
- Measurable: How will you know it's improved? Agree to ask two or three trusted colleagues for informal feedback in three months.
- Achievable: One or two development priorities at a time. Not five.
- Relevant: Connected to the person's current role requirements and career goals.
- Time-bound: A specific review date, typically six months out.
Development activities worth considering: executive coaching, stretch assignments (leading a cross-functional project to build collaboration skills), peer mentoring from someone strong in the competency, or a targeted short course. Document the plan and schedule quarterly check-ins to maintain momentum.
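If you track development plans in a system rather than a document, a small structured record keeps the quarterly check-ins honest. A hypothetical sketch (field names are illustrative, not from any platform):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DevelopmentGoal:
    specific: str                     # the behaviour, stated concretely
    measure: str                      # how progress will be checked
    review_date: date                 # typically six months out
    checkins: list[date] = field(default_factory=list)

goal = DevelopmentGoal(
    specific="Communicate context and rationale before assigning unplanned work",
    measure="Informal feedback from three trusted colleagues at the three-month mark",
    review_date=date.today() + timedelta(days=182),
)
```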
Timeline: 1-2 weeks
Step 10: Monitor Progress and Repeat
360 feedback is most effective when it's repeated. Research consistently shows that the behaviour change from a single 360 cycle is modest compared to the change that occurs when people go through two or three cycles over three to five years.
The recommended interval is 18-24 months between full cycles. That's enough time for genuine behaviour change to show up in how colleagues experience someone, while maintaining enough momentum that the development goals stay alive.
During the interval, quarterly check-ins between the recipient and their manager keep the development plan active. At the 12-month mark, a mid-cycle conversation ("how is progress tracking, and do we need to adjust the priorities?") prevents development plans from sitting in a drawer until the next formal cycle.
When the second cycle runs, use the same core questions to track change, involve some of the same raters for continuity, and share progress data from the first cycle so people can see their own growth. That visibility is what converts 360 feedback from an HR process into something people actively want.
Timeline: Repeat every 18-24 months
The 360 Feedback Template: 35 Questions Across Six Competency Areas
Use this as a starting point and adapt it to your organisation's values and role requirements.
Respondent instructions: "This questionnaire gathers perspectives on [person's name]'s strengths and development areas. Your honest input is valued and will remain confidential. The survey takes 10-12 minutes. Rate each statement on a scale of 1-5: 1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree. If you haven't had the opportunity to observe a behaviour, select 'Not applicable' rather than guessing. For open-ended questions, specific examples are most useful."
Section 1: Leadership and Management (6 questions)
- This person provides clear direction and sets expectations effectively.
- They make timely decisions and take responsibility for outcomes.
- They create an environment where people feel trusted and empowered.
- They advocate for their team and remove obstacles to progress.
- They lead by example and model the behaviours they expect from others.
- They communicate the reasoning behind decisions, not just the decisions themselves.
Section 2: Communication and Influence (6 questions)
- They listen actively and seek to understand different perspectives before responding.
- They communicate ideas clearly, both verbally and in writing.
- They adapt their communication style for different audiences and situations.
- They share information openly and don't create unnecessary information gaps.
- They receive feedback without becoming defensive.
- They persuade others through evidence and reasoning, not pressure.
Open-ended: Describe a time when this person communicated something complex or difficult particularly well. What made it effective?
Section 3: Collaboration and Teamwork (6 questions)
- They work well with people across different teams and levels.
- They contribute their share on team projects and don't rely on others to cover for them.
- They help others succeed, not just themselves.
- They handle disagreement constructively without making it personal.
- They recognise and acknowledge others' contributions.
- They actively seek input from quieter voices and create space for different perspectives.
Open-ended: Describe a situation where this person collaborated effectively. What made it work?
Section 4: Problem-Solving and Decision-Making (5 questions)
- They analyse problems thoroughly before jumping to solutions.
- They consider multiple perspectives and options before deciding.
- They make decisions even when information is incomplete or ambiguous.
- They learn from mistakes and adjust their approach.
- They encourage new approaches and are willing to challenge established ways of doing things.
Section 5: Technical and Role-Specific Skills (5 questions)
Adapt these questions to the specific role. For a software engineer, ask about code quality and technical leadership. For a sales manager, ask about pipeline discipline and client relationships.
- They have strong expertise in their area of responsibility.
- They stay current with relevant trends and best practices.
- They produce work that meets or exceeds the expected standard.
- They share their expertise and help others develop their skills.
- They take genuine ownership of outcomes and delivery.
Section 6: Growth Mindset and Development (5 questions)
- They actively seek feedback and act on it.
- They remain open to change when circumstances shift.
- They take on challenging assignments and don't avoid situations where failure is possible.
- They invest in their own learning and development.
- They help others grow and share credit for shared successes.
Open-ended: What is one area where you've seen this person grow or improve meaningfully over the past 12 months?
Optional: Self-Assessment Section (5 questions)
Include this when feedback recipients complete their own survey separately from receiving others' feedback.
- I understand my strengths and where I add the most value.
- I'm aware of the specific areas where I need to develop.
- I actively seek feedback and use it to change my behaviour.
- I adapt how I work based on how others respond to me.
- I invest consistently in my own learning and growth.
Survey Design Notes
Keep the total survey to 30-40 questions. Beyond 40, response quality drops significantly: raters rush, answers become superficial, and open-text responses get shorter. Pilot the survey with one or two volunteers and time it. If it takes more than 15 minutes, reduce the question count.
For rater groups that don't interact with someone on certain dimensions , for example, peers who don't observe someone's management style , mark those questions as "not applicable" rather than forcing a rating.
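When you score the results, N/A responses should be excluded from averages rather than coerced to a midpoint. A small sketch, assuming N/A answers are stored as None:

```python
def question_mean(scores: list[int | None]) -> float | None:
    """Average only the raters who observed the behaviour; None = N/A."""
    observed = [s for s in scores if s is not None]
    return sum(observed) / len(observed) if observed else None

print(question_mean([4, 5, None, 4]))   # ~4.33 from the three raters who observed
print(question_mean([None, None]))      # None: report "insufficient data", not 3.0
```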
For external stakeholders, focus the survey on client-facing competencies: responsiveness, reliability, professionalism, and the quality of communication.
Common Mistakes and How to Avoid Them
Promising anonymity without delivering it. The most corrosive 360 failure mode. If raters can be identified through their role, their language, or the timing of their responses, trust collapses and participation in future cycles drops. Use a platform that strips identifiers at the data level. Test your own process: can you tell who said what by reading the responses? If yes, it's not anonymous enough.
Not preparing recipients. Feedback recipients who see their results without preparation frequently become defensive or demoralised. A 30-minute briefing session before results land is worth more than any feature of the survey design.
Too many questions. A 50-question survey produces worse data than a 30-question survey. Raters rush. Answers get shorter and less honest. The data loses its signal.
No follow-up plan. Collecting feedback without a plan for what happens next is the single most common reason 360 processes generate cynicism. Before launching, confirm: who facilitates the feedback conversations, by what date, and with what development planning support? If you can't answer those questions, delay the launch.
Using 360 data for high-stakes decisions. The moment people believe 360 scores affect their promotion or pay, raters inflate scores and the data becomes useless. Be explicit about how results will be used and keep 360 firmly in the development lane.
Final Thought
The difference between a transformative 360 process and a demoralising one comes down to three things: genuine trust in the anonymity of the process, credible support for recipients before and after feedback lands, and a real plan for what happens next.
The organisations that embed 360 feedback into their development culture see leaders become more self-aware, teams become more direct with each other, and the gap between "how I think I come across" and "how I actually come across" narrow over time. That narrowing is the work. The survey is just the mechanism.
If you're looking to reduce the administrative load of running this process, particularly at scale or across multiple cohorts, Praxiss automates the setup, anonymity controls, and analysis, with AI-native summaries that surface themes across large volumes of open-text feedback without requiring an HR analyst to read every comment. See how it works at praxiss.io/features.
Start your free 14-day trial at app.praxiss.io and run your first 360 cycle with AI-native analysis and automated anonymity controls.
FAQ
How is 360 feedback different from a standard performance review? Performance reviews are typically manager-led and evaluative: they assess whether someone met their objectives and inform decisions about pay, promotion, and role fit. 360 feedback is multi-source and developmental: it surfaces how colleagues experience someone, with the goal of building self-awareness and guiding growth. The two serve different purposes. Most organisations benefit from running both: 360 for development, performance reviews for evaluation.
Do we need to share 360 results with the feedback recipient's manager? It depends on the stated purpose of the process. If 360 is purely developmental, keeping results between the recipient and their coach or HR partner protects the integrity of the data and encourages honest rater input. If leadership development is connected to succession planning, a manager might see a high-level summary rather than individual responses. Whatever you decide, communicate it clearly before the process starts. Changing the rules after feedback is collected destroys trust.
What do we do if someone receives very critical feedback? First, make sure they don't see the results alone. A trained HR professional or coach should be present for the initial review. The key reframe is distinguishing between perception (what the data shows) and identity (who the person is). Critical feedback in a 360 report means some colleagues are experiencing someone in a particular way; it doesn't mean the recipient is a bad person or that their manager has already decided to act. Help them identify patterns, separate specific feedback from general criticism, and move quickly into development planning.
How often should we run 360 feedback cycles? Every 18-24 months for the same group. That interval gives enough time for real behaviour change to show up in how colleagues experience someone, while maintaining enough momentum that development plans stay alive. Some organisations run annual cycles for high-potential or senior leader cohorts. Annual cycles are viable, but they can start to feel like constant evaluation, which erodes the psychological safety that makes honest feedback possible.
Can we run a 360 process with a spreadsheet and email? For a single feedback recipient, yes, it's workable. For 10 or more people, the administrative burden becomes significant and anonymity controls are genuinely difficult to maintain. Manual collation of open-text feedback is time-consuming and introduces bias in how you summarise it. At scale, a dedicated platform is more reliable, faster, and structurally safer for anonymity.