Missed search results are the simplest problem to approach, though all of these problems are heavily interrelated with user buy-in. We ask experts to pick keywords that represent them in search. However, no person can list every keyword they could reasonably respond to; experts typically pick the most common terms for their fields of expertise, while users search for specific things, not for broad areas of expertise. This could be partially mitigated by an "expert mentoring" system (see "noise in the search results"), where a sponsor could help a new expert pick more terms (and, to some extent, simply make sure they fill out the fields properly, which is a not-so-slight problem currently). It could be better mitigated by building, preferably dynamically, an "expertise tree" that users could browse when looking for experts. The most powerful approach, however, is to treat this as a semantic problem: ideally, experts would be found based on a correlation coefficient between their keywords and the users' search terms. The most difficult parts would be picking the corpus to mine for correlations and keeping it updated (how to update depends heavily on the corpus chosen).
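The semantic idea above can be sketched with a toy co-occurrence model. This is a minimal illustration, not a proposal for the actual metric: the four-document corpus, the normalized co-occurrence score, and the max-then-average scoring rule are all assumptions made for the example.

```python
from collections import Counter
from itertools import combinations
import math

# Hypothetical mini-corpus; in practice this would be mined from a large
# text collection chosen for the system (the hard part noted above).
corpus = [
    "python programming language interpreter",
    "python scripting programming tutorial",
    "java programming language compiler",
    "snake python reptile habitat",
]

# Count how often term pairs co-occur within a document, and how often
# each term appears at all.
cooc = Counter()
freq = Counter()
for doc in corpus:
    terms = set(doc.split())
    freq.update(terms)
    for a, b in combinations(sorted(terms), 2):
        cooc[(a, b)] += 1

def correlation(a, b):
    """Normalized co-occurrence score between two terms (0 if never seen together)."""
    if a == b:
        return 1.0
    key = (a, b) if a < b else (b, a)
    if cooc[key] == 0:
        return 0.0
    return cooc[key] / math.sqrt(freq[a] * freq[b])

def expert_score(expert_keywords, query_terms):
    """For each query term, take its best-matching expert keyword; average the results."""
    best = [max(correlation(q, k) for k in expert_keywords) for q in query_terms]
    return sum(best) / len(best)

# An expert who listed only "programming" can still match a query for
# "interpreter", because the two terms co-occur in the corpus.
print(expert_score(["programming"], ["interpreter"]))
```

The point of the toy model is that the expert no longer has to enumerate every term they could respond to; the corpus supplies the missing connections, which is exactly why the choice of corpus dominates the design.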
Noise in the search results is mainly a consequence of people gaming the system. This is a difficult problem to attack head-on; it is best to divide and conquer, turning it into a distributed problem. The most intuitive way to do this is to create a trust system. Two distinct possibilities exist, and they can be blended into a third. First, users themselves could build the trust system: as they joined, brought in more of their friends, and interacted with the experts, communities of trust would form. The difficulties here are that a user must be *required* to log in to have any basis for trust, and that a large userbase is needed to get off the ground. The second possibility is to make experts self-moderating, and possibly even responsible for each other. In the extreme, this would completely disallow "naive" expert generation: an expert candidate would have to find a sponsor to accept them as an expert. They would piggyback on the sponsor's trust level, and how they acted would reflect on their sponsor. Experts would only be able to sponsor other experts if their own trust rating were above a certain level, and then possibly only a certain number. I think the expert trust-web is the stronger proposition. The user trust-web seems more gameable, though if trust recommendations were limited to directly connected communities, rather than reflecting on the experts in general, any possible contamination would be quarantined. Trust metrics, and other expert-rating metrics, would still need to be determined.
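The sponsorship mechanics can be made concrete with a small sketch. All of the numbers here are placeholders: the sponsorship threshold, the cap on sponsorships, the fraction of trust a candidate inherits, and the fraction of a penalty that reflects onto the sponsor are exactly the undetermined metrics mentioned above.

```python
# Illustrative constants -- the actual trust metrics are an open design question.
SPONSOR_THRESHOLD = 0.7   # minimum trust needed to sponsor a candidate
MAX_SPONSORSHIPS = 3      # cap on how many experts one expert may vouch for
REFLECTION = 0.5          # fraction of a penalty that reflects onto the sponsor
INHERITANCE = 0.8         # fraction of the sponsor's trust a candidate starts with

class Expert:
    def __init__(self, name, sponsor=None):
        if sponsor is not None:
            if sponsor.trust < SPONSOR_THRESHOLD:
                raise ValueError(f"{sponsor.name} lacks the trust to sponsor")
            if len(sponsor.sponsored) >= MAX_SPONSORSHIPS:
                raise ValueError(f"{sponsor.name} has no sponsorship slots left")
            sponsor.sponsored.append(self)
        self.name = name
        self.sponsor = sponsor
        self.sponsored = []
        # A new expert piggybacks on a fraction of the sponsor's trust.
        self.trust = INHERITANCE * sponsor.trust if sponsor else 1.0

    def penalize(self, amount):
        """Bad behavior lowers this expert's trust and reflects on the sponsor."""
        self.trust = max(0.0, self.trust - amount)
        if self.sponsor is not None:
            self.sponsor.penalize(amount * REFLECTION)

root = Expert("founding-expert")
candidate = Expert("candidate", sponsor=root)
candidate.penalize(0.4)  # misbehavior costs the sponsor too
```

Note the quarantine property falls out naturally: a penalty decays geometrically as it travels up the sponsorship chain, so a bad actor damages their own community of trust far more than the web as a whole.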
As for user buy-in, it seems reasonable that if we can provide valid search results for most queries, we will have a dedicated user base. The question becomes how to recruit and keep experts. The system needs to be easy and enjoyable for them to use, and the current chat server/client implementation is far from ideal. Switching to a community-developed platform such as Jabber would increase ease of use and decrease both user and expert frustration, and it would give experts a larger selection of clients that are more configurable and more reliable. A further step would be to use Jabber to link to whatever IM client/protocol experts are already using. This has its own issues, which should be discussed elsewhere; I believe the benefits outweigh the issues in the long run. An additional option, once users are *allowed* to register, is to let experts accept queries only from registered users with positive trust ratings. This should decrease random abuse against experts. It would also thin the search results seen by unregistered users, which would likely slow adoption; here, I think making the system less abusable strongly outweighs the slower ramp-up.
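The proposed query gate reduces to one small rule. A minimal sketch, assuming registered users carry a numeric trust rating and anonymous users carry none (both assumptions, since registration does not exist yet):

```python
def accept_query(expert_opts_in, user_trust):
    """Decide whether a query should reach an expert.

    expert_opts_in -- the expert chose to receive only trusted registered users
    user_trust     -- the user's trust rating, or None for anonymous users
    """
    if not expert_opts_in:
        return True  # permissive experts accept anyone, including anonymous users
    return user_trust is not None and user_trust > 0

print(accept_query(False, None))  # anonymous user, permissive expert: True
print(accept_query(True, None))   # anonymous user blocked by opted-in expert: False
print(accept_query(True, 0.3))    # registered user with positive trust: True
```

Because the gate is per-expert, opting in is the expert's trade-off, not the system's: cautious experts shed abuse while permissive ones keep feeding results to anonymous users during ramp-up.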