I just heard from a coauthor - we got a paper accepted at the Denver FMA meeting in October. The paper grew out of taking an approach we'd been working on and applying it to another data set we had available.
It's funny - we submitted two papers: this one was an early version, and the other was pretty much finished. To be fair, though, the results on this one were more interesting. And since we'd already gotten one paper on the program, we were actually glad the second one got rejected - presenting two papers at a conference leaves less time for catching up with friends.
This tale of two papers reminds me of a piece I read a while back (unfortunately, I can't recall its title). It discussed how there's a trade-off in research between "newness" and "required rigor". In other words, if you're working on a topic that's been done to death (e.g. capital structure or dividend policy), you'll be asked to do robustness tests out the wazoo. On the other hand, if it's a more novel idea, the "newness" factor buys you some slack on the rigor side, so the bar is lower.
In general, however, the "rigor" bar has been ratcheting up for the last 20-30 years, regardless of the "newness" factor. To see this, realize that the average length of a Journal of Finance article in the early 80s was something like 16 pages - now it's more like 30-40. As further (anecdotal) evidence, a friend of mine had a paper on long-run returns around some types of mergers published in the Journal of Banking and Finance about 9 years back. The referees made him calculate the returns FIVE different ways.
In any event, to make a long story short, I'm hoping we got accepted at FMA because the reviewers thought our paper was a good, new idea.
But it's probably because we got lucky.
But either way, we'll take it - see you in Denver!