A/Bracadabra: Testing Magic


This February saw the most recent round of focus-group-style testing of magic effects and presentations in NYC. It's something I've been a part of for years in different forms, and it has had a truly significant effect on the strength of my material.

As magicians, we like to lie to ourselves and each other. "People don't suspect the deck in a color-changing deck routine. I mean, unless you're a poor performer. I do the trick every night in my restaurant show and no one ever asks to see the deck."

You hear that sort of thing a lot. So one of our earliest tests was to show people a color-changing deck routine on video, performed by one of the masters of magic. When asked how they thought it was done, 100% of them questioned the make-up of the deck in their written responses, either using the phrase "trick deck" or writing something like, "I don't know how it was done, but I feel like I'd know if I could look at the deck."

Just because they don't ask to look at something doesn't mean they don't think it's suspect. They're being nice. You might not ask the stripper, "Hey, can I squeeze dem titties?" But that doesn't mean you think they're real.

As I wrote in an earlier post: If you change one object to another, or tear and restore something, or harmlessly penetrate something, or change the color of something—and if you do these things in a close-up situation—then I would argue that the trick is not complete until the audience has examined the object of the effect at the end. 

In later groups, we went on to test a color-changing deck effect performed live for audiences in two ways. The only difference between the two performances was this: in the first, the deck went into the performer's pocket at the end; in the second, it was handed to a spectator who was free to look at it.

We would often present the testing as part of the initial stages of a magic show that was being worked on (and in some cases that was true). So the premise of the whole thing was that we were trying to select material for a show and we were only looking to go forward with the most amazing tricks. So they would see a handful of tricks and they would rate each one on a scale of 1 to 10 in regards to how "amazing" the trick was. (We found it best to phrase it this way for certain tests. If we instead asked, "How amazed were you," we found the scores got compressed to a smaller range. I think people might be unwilling to say something on the extreme ends about themselves or their experience. But if you ask, "How amazing was the trick," the scores covered a broader range.)

Going back to the color-changing deck, those who got to examine the deck at the end rated the trick 60% more amazing than those who didn't. It wasn't even remotely close, even though it was the identical trick.

This is something of a microcosm for how this focus-group testing evolved. Originally it was pretty much just us showing people tricks and asking, "Do you have any idea how that's done?" It wasn't "testing" so much as it was making bets with my friends on what sorts of things are obvious in magic and what aren't. I was right about most thread effects being obvious. I was right that any time you palm a card and remove it from your pocket or fly, it's 100% obvious. I was wrong about Miraskill being obvious. The one-ahead principle is something I thought was more obvious as well. (It IS obvious to a good portion of people, but it can be salvaged. More on that another day.)

Eventually we moved into a type of A/B testing with effects. We would perform a trick one way for a group of people and then another way for a different group, changing just one thing. Now, because we weren't dealing with thousands of people, a 5% difference in whatever we were measuring from group to group wasn't that significant. But we were often seeing things that were 25% or 50% or 100% more "amazing" or "enjoyable" for spectators depending on which version they saw. So even with just a few dozen people we could still draw some strong conclusions.
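If you want a feel for why small gaps are meaningless at this scale while big ones aren't, a permutation test makes the point. This is a hedged sketch, not anything we actually ran: the rating lists below are invented for illustration, with two groups of 25 hypothetical 1-to-10 "how amazing" scores.

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """One-sided permutation test: how often would a gap in mean
    ratings at least as big as the observed one appear by chance
    if the two versions of the trick were really identical?"""
    rng = random.Random(seed)
    observed = statistics.mean(b) - statistics.mean(a)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # re-deal everyone into two random groups
        diff = statistics.mean(pooled[len(a):]) - statistics.mean(pooled[:len(a)])
        if diff >= observed:
            hits += 1
    return hits / n_iter  # fraction of chance shuffles matching the gap

# Hypothetical ratings, 25 people per group.
pocket   = [5, 6, 7] * 8 + [6]           # mean 6.0 (deck pocketed)
tiny_gap = [5, 6, 7] * 7 + [6, 7, 7, 8]  # mean ~6.2 (a few % higher)
big_gap  = [8, 9, 10] * 8 + [9]          # mean 9.0 (50% higher)

p_tiny = permutation_test(pocket, tiny_gap)  # large: chance explains it
p_big  = permutation_test(pocket, big_gap)   # near zero: chance doesn't
```

With only 25 people per group, the tiny gap comes up all the time in random shuffles, so it tells you nothing; a 50% gap essentially never does. That's the whole justification for trusting big effects from small focus groups.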

And my performances got significantly better once I incorporated those conclusions into my performing style.

I'll be writing up some specific concepts we tested in the coming months. Some might seem "obvious," but it can be telling to see just how much of an impact certain things have on people's enjoyment of a trick. I'll also be taking suggestions for other things to test in our next round, which will likely be early next year. (We'd like to do it more frequently, but it's pretty expensive. You bring in 50 people and give them $40 each, so that's $2,000, then another few hundred to rent the space. Even split a few ways, it's a bit of an investment.)

I've held off on some of the specifics of this testing in the past because one of the people involved was planning on writing an essay or a book on it. But he recently texted me saying, "I'm a bitch. I'll never get to this. Feel free to write about anything you want." So there will be more on this to come.

Some people have argued with me and said, "Well, I wouldn't put my act in front of some focus group." They say it like it's some brave artistic choice. But this isn't like a sitcom being watered down by a focus group to appeal to the lowest common denominator. This is using the focus group as a means to get honest feedback on very practical questions about what people enjoy and are fooled by. And that's pretty scary to some people. But I think it's valuable to learn what the audience is really thinking. The alternative is like saying, "Hey, if I don't get that AIDS test, then I'll never have to hear that I have AIDS!" Like... that's not a helpful way of handling things.