
Choir Feedback Surveys
About the Seattle Trans & Nonbinary Choral Ensemble (STANCE)
The Seattle Trans And Nonbinary Choral Ensemble (better known as STANCE) is the first, and only, choir in the Seattle area for transgender and nonbinary singers. STANCE's mission is to provide a vocal community free of gendered expectations, by and for Transgender and Nonbinary singers, to explore and express themselves through music. Founded in 2022, the organization has quickly grown to 75 active singers, a remarkable size given how new and specialized it is.
Goals
The objective of this project was to obtain feedback from the STANCE Board, singers, and staff about the 2024 Spring concert cycle. We needed to assess staff performance and the organization's overall engagement and delivery with regard to our mission and values, identify areas where the organization is strong, and understand where additional support is needed. After discussing the project needs with the Executive Director and the Board Co-Chair, we decided to conduct three surveys collecting interrelated data.
Project Overview
The Executive Director of STANCE decided in spring of 2024 that it was a good time to gather feedback on the performance of the organization's paid and volunteer staff. I was the Singer Liaison for the choir, which means that as I participate in the choir, I listen to the feedback and concerns that singers bring to me and, when appropriate, bring those to the attention of the Board of Directors and staff. I was already preparing an end-of-cycle survey for singers about their experience singing with us, and I added the staff performance assessment because some of the questions would overlap.
Role
Lead Researcher
Project Planner
Tools
Google Docs, Sheets, & Slides
Miro
Team
1 researcher/project planner
3 SME consultants
Timeline
May - July 2024
8 weeks
Project Plan
I began the project by creating a project plan for all three surveys:
- A board survey, with questions about the performance of individual staff members
- A singer survey, with questions about the Spring concert cycle and feedback about STANCE staff as a group, plus an option to write in answers if singers had specific feedback about individual staff members
- A staff survey, with questions to reflect on their own performance and that of their peers
My project plan included audience and role definitions, timelines, and methodologies. I first presented my plan to the Executive Director and Co-Chair, and once I received the go-ahead for my plan outline, I began collaborating with the people I needed to consult with me on the project. My primary collaborators were the Co-Chair (in their professional capacity as a human resources manager), the board Secretary, and a member of the choir who is a research subject matter expert (SME).
Survey Drafting
I created the initial questions in separate Google Documents, which would then be copied into a Google Form for response collection.
Our Co-Chair provided me with the basic questions that they generally use for conducting performance feedback, and I crafted my questions based on those. Some of the questions overlapped across all three surveys, and some were unique to each. The surveys were conducted via three separate Google Forms. Questions were a mix of Likert-scale multiple-choice questions and open-ended questions. Likert-scale questions included an "other" option for those whose thoughts were not represented in the provided answers. Because the singer survey asked for a roll-up evaluation of all staff (rather than evaluating individual staff members), the "other" option gave singers a way to provide specific feedback on individual staff members. This kept an already-long survey from growing longer, while giving flexibility for more granular answers to those who wanted to provide them.
I worked with our research SME to ensure that the questions we asked followed best practices in bias reduction and clarity. Our Secretary also provided invaluable feedback, as he works professionally as a music teacher and has a deep understanding of what it takes to successfully lead a group of musicians with different levels of experience.
The Co-Chair was able to see the answers of all three surveys.
Data Collection
In addition to the Likert-scale multiple choice questions that I asked, there were also 40 open-ended questions across all three surveys. Those, combined with the write-in "other" answers that were options for participants who were not satisfied with the multiple-choice options provided, totaled 1,016 individual qualitative comments that needed to be organized and themed into usable, actionable feedback for leadership. Having analyzed research data many times in my career, I utilized the data organizational skills that I have refined over time to meet this large data-management challenge.
- 3 surveys
- 53 participants
- 76% singer completion
- 87 questions
- 47 multiple-choice
- 1k+ comments
I created charts in Google Sheets to visualize singer participation in the feedback process, so that I could communicate relevant differences in answers based on their vocal section, concert cycle attendance, and/or participation in our music education course or Outreach ensemble.
Data Analysis
My data analysis process relies heavily on color-coding, which helps me move quickly through large sets of data. Before the answers came in from survey participants, I set up my Google Sheet with conditional formatting so I could assess the raw data at a glance. Each response that arrived via the Google Form was then automatically color-coded by sentiment, along with other participant data that I used to organize the feedback.
When survey participants submitted their answers in the Google Form, they automatically appeared in the connected Google Sheet. From there, I copied the content of each cell in their response row into Miro, where it was color-coded to represent the demographics meaningful to the project (singer section, board member, or member of staff). I applied tags to each sticky note for further organizing, coding, analysis, and theming. Each sticky note received a number that tracked to the row in Google Sheets where all of an individual's answers were recorded; this enabled me to track the data efficiently if I wanted to look back at all of that person's feedback to better understand the context of their comments.
Since I was sharing the data with the Board Co-Chair, I created a key with examples so that they could better understand how I was working with the data.

As I was creating the sticky notes, I also tagged them with some of the same categories I had used for conditional formatting in Google Sheets, such as attendance and participation in our outreach ensemble (Cantes). If a comment pertained directly to one of our staff members (Executive Director, Artistic Director, or Assistant Artistic Director), I applied a tag identifying the sticky as such. This allowed me to search by tag in Miro later and collect all of the feedback that was unique to one staff member so that I could include it in the staff performance assessments.
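The tracking scheme above can be sketched as a small data structure. This is a minimal illustration, not the actual Miro or Sheets schema; the field names, tag strings, and sample data are invented.

```python
# Sketch of the sticky-note tracking scheme: each sticky carries the Sheets
# row number of the full response, a demographic group, and searchable tags.
# All names and data here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Sticky:
    row: int                 # row in the Google Sheet holding the full response
    text: str                # the comment itself
    group: str               # "singer", "board", or "staff"
    tags: set = field(default_factory=set)

stickies = [
    Sticky(row=12, text="Loved the warm-ups", group="singer",
           tags={"attendance:high", "ensemble:Cantes"}),
    Sticky(row=12, text="Clear direction in rehearsal", group="singer",
           tags={"staff:Artistic Director"}),
]

# "Search by tag" to collect all feedback about one staff member:
ad_feedback = [s.text for s in stickies if "staff:Artistic Director" in s.tags]
```

Because every sticky keeps its source row, any comment can be traced back to the participant's full response for context, which mirrors the numbering described above.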
Sentiment Analysis
Many answers were relatively straightforward for sentiment analysis just on the basis of the questions themselves. For example, answers to "What did you particularly enjoy about this concert cycle that you'd like to see us continue?" tended to be positive, and answers to "What can we improve, start, or stop in future concert cycles?" tended to be negative. Sentiment coding on the multiple-choice questions was easy, because the majority of the questions were framed as agree/disagree or positive/negative.
The most valuable question we asked turned out to be "What contributed to your experience (good or bad)?" That one question unearthed an incredible amount of actionable data by the time I was finished with my analysis. It was also the most challenging for sentiment analysis, because many answers contained both positive and negative feedback. My approach in Miro was to break these comments apart so that I could code each part separately as either "positive sentiment" or "negative sentiment." That allowed me to capture all of a participant's feedback while categorizing it in a way that would be useful when theming the findings.
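The splitting step can be expressed as a tiny helper. In the project the judgment calls were made by hand in Miro; this sketch only shows the resulting structure, and the function name and sample data are invented.

```python
# Illustrative only: a mixed comment is broken into fragments, each coded
# with one sentiment, while all fragments stay linked to their participant.
def split_and_code(participant, fragments):
    """fragments: list of (text, sentiment) pairs judged by the researcher."""
    return [{"participant": participant, "text": text, "sentiment": sentiment}
            for text, sentiment in fragments]

coded = split_and_code(
    "row 7",
    [("The venue acoustics were wonderful", "positive"),
     ("but the rehearsal room was stuffy", "negative")],
)
```

Each fragment becomes its own codable unit, so no part of a mixed comment is lost during theming.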
Theme Identification Technique
Throughout my theming process, I utilized both closed and open card sorting. I organized my Miro board by the questions that appeared on each survey, using them as section headers. Since the Singer survey had multiple sections which routed survey participants to different questions, this enabled me to track their answers to just the sections and questions that were presented to them. This structure was useful for identifying themes as a card sorting activity.
"Card sorting is a research method in which study participants place individually labeled cards into groups according to criteria that make the most sense to them."
Open Card Sorting
For the open card sort, I entered the data from Google Sheets into individual sticky notes and placed each sticky note under the relevant question header. I placed text above the headers summarizing the content (or theme) of the stickies, and bolded the text if I received another comment expressing something similar. I also made note of ideas expressed by singers by tagging the stickies with a lightbulb emoji and replicating that emoji next to the text above the question header that captured their idea. This allowed me to track suggestions that our singers had for changing aspects of our choir operations, and to quickly find their specific wording so that I could share it with leadership.
Closed Card Sorting
I also utilized a closed card sorting technique for analyzing feedback on our concert cycle's repertoire. For this, I used the songs we sang as the "themes" and collected the positive and negative feedback about each of them in order to understand which songs were the most enjoyed by our singers.
Theme Organization
Once I had the themes identified, I copied the stickies and themes captured with the question headers and began theme organization. While splitting comments apart had been useful for sentiment analysis and theme identification, it presented challenges when quantifying how widespread the themes were: it could make it appear that multiple people had commented on a theme, when sometimes it was the same person expressing the same thing across different questions.
To counter this potential for bias in the data, I took the collection of themes above each question header and used those to "seed" themes within the singer feedback. Not all of the question-header themes ended up being used as overarching themes, because this method allowed me to deduplicate themes that appeared across multiple questions. Once I had placed the stickies under their new themes, I looked through each theme-set to find comments from the same participant and collapsed their feedback into a single sticky. This ensured that their feedback on that topic was only counted once, allowing the "weight" of their data to be a single vote alongside other singers' single votes on that topic.
After the stickies were organized within their themes with each sticky note representing one "vote" for the topic, I arranged the themes from the ones with the most comments to the ones with the fewest comments. This provided an accurate representation of how many singers shared a similar perspective on the themed topics, and created a view for leadership that made it easy to see at a glance which themes had the most shared sentiment.
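The deduplicate-then-rank logic described above can be sketched in a few lines. The themes, row numbers, and comments below are invented; the point is the collapse to one "vote" per participant per theme.

```python
# Sketch of the de-duplication step: within each theme, multiple comments
# from the same participant collapse into one sticky (one "vote").
from collections import defaultdict

comments = [  # (theme, participant_row, text) -- invented data
    ("ventilation", 3, "The room was stuffy"),
    ("ventilation", 3, "Please improve airflow"),   # same person, same theme
    ("ventilation", 9, "Hard to breathe on stage"),
    ("repertoire", 3, "Loved the closing piece"),
]

votes = defaultdict(dict)
for theme, row, text in comments:
    if row in votes[theme]:
        # collapse repeat feedback into the existing sticky
        votes[theme][row] += " / " + text
    else:
        votes[theme][row] = text

# rank themes by number of unique participants, most to fewest
ranked = sorted(votes, key=lambda theme: len(votes[theme]), reverse=True)
```

Ranking by unique participants, rather than by raw comment count, is what keeps one vocal participant from inflating a theme.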
I've provided examples below of my closed and open card sorting. "Music Selection Feedback" was a closed card sort, and both of the "Overall Singer Feedback" examples were open card sorts.
Data Visualization
Once I was finished with my data analysis, I began creating visualizations of the data to share with STANCE leadership in a presentation of my findings. The data visualizations native to Google Forms are not easy to read because they capture all of the write-in answers verbatim, so I created my own visualizations from the data in Google Sheets. To do this, I made copies of each of the survey tabs and tidied the answers in a way that would both maintain the integrity of the write-in comments and present a simplified view of the multiple-choice answers.
For example, when a singer answered the multiple choice question "How would you describe your overall experience with STANCE this season?" with a write-in answer of "Some positive and some negative," I coded that answer as "Mixed."
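That recoding step amounts to a small mapping from write-in phrasings to chart-friendly categories. A minimal sketch, with an invented function name and a mapping containing only the example above:

```python
# Sketch of tidying write-in answers into chart categories. The mapping and
# category set are illustrative, not the full coding scheme.
RECODE = {
    "Some positive and some negative": "Mixed",
}

ALLOWED = {"Positive", "Negative", "Mixed"}

def tidy(answer):
    """Map a write-in answer to a chart category, defaulting to 'Other'."""
    answer = RECODE.get(answer, answer)
    return answer if answer in ALLOWED else "Other"
```

Working on copies of the survey tabs, as described above, means the recoded values feed the charts while the original write-ins stay intact.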

The pie charts served my basic needs for data visualization, but there was some data where I wanted to show the breakout across different demographic groups. For that, I knew I would need bar and column charts, but I struggled to create them using Google Sheets. I knew approximately what I wanted, but I couldn't figure out how to make it happen with their tools. I put together a Balsamiq sketch of what I was looking to create, and put a call out to my friends to see if anyone might be able to help me. My friend Landry Dugan (a local data scientist) was generous enough to help! He provided me an example of a pivot table, which I was then able to dissect a bit and start making my own. This took a little bit of fiddling, but eventually I was able to understand the "language" of how pivot tables work, and started creating them on my own.
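Once the pattern clicked, the pivot-table logic itself is simple to express. Here is a plain-Python sketch of the count-by-demographic breakout behind the bar and column charts; the sections, answers, and data are invented.

```python
# Minimal sketch of the pivot-table logic: count each answer per vocal
# section, producing one row per section and one column per answer.
from collections import Counter

responses = [  # (section, answer) -- invented data
    ("Alto", "Positive"), ("Alto", "Mixed"),
    ("Tenor", "Positive"), ("Bass", "Positive"), ("Tenor", "Negative"),
]

counts = Counter(responses)            # (section, answer) -> count
sections = sorted({s for s, _ in responses})
answers = sorted({a for _, a in responses})

pivot = {s: {a: counts[(s, a)] for a in answers} for s in sections}
```

Each row of `pivot` corresponds to one bar group in a chart, which is the shape the Sheets pivot tables produced.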

Final Presentation
I created the final presentation of singer feedback using Google Slides, and presented it first to the STANCE Board of Directors, and then to the singers a few days later at the Annual Meeting. I have also created a slide deck for the staff performance feedback, though I am not sharing that publicly. The slide template is one that I put together for STANCE as part of the branding work that I have done for them in the past two years.
View the Google Slides presentation (or click through it below)
Next Steps
STANCE is still in the process of absorbing the information contained in my research, but there have been some valuable takeaways already that the organization will be able to act on in the coming months.
- Clarify STANCE's goals & put them on the website
- Improve ventilation in singing and performance spaces
- Practice stage logistics earlier in the concert cycle
- Run music by section leaders in advance of final programming decisions
- Create more social events with a lower barrier to entry
- Recognize volunteers more
- Address gaps in staffing
- Create a page on the STANCE website to showcase local voice teachers skilled at training transgender voices
Learnings
In addition to providing valuable insights to an organization that is personally meaningful to me, I delighted in the work and process itself. I loved putting together the project plan, collecting the information, analyzing the feedback, creating themes, and presenting my findings. Working with large amounts of data, and being able to present it in a way that creates broader, shared understanding is very satisfying. I'm also impressed with the timeline in which I was able to complete the initial analysis.
Were I to make any changes, I would take a closer look at some of the questions we asked. I now know which questions were the most valuable and which were confusing to our participants. I would reduce the number of questions by removing those that were less useful, and I would separate the section leaders from the rest of the choir, possibly even having them fill out the same survey as the board, because they work much more closely with leadership than the rest of the choir does.